
Subject: Re: [OMPI devel] RFC: job size info in OPAL
From: Ralph Castain (rhc_at_[hidden])
Date: 2014-07-30 20:37:49

On Jul 30, 2014, at 5:25 PM, George Bosilca <bosilca_at_[hidden]> wrote:

> On Jul 30, 2014, at 18:00, Jeff Squyres (jsquyres) <jsquyres_at_[hidden]> wrote:
>> WHAT: Should we make the job size (i.e., initial number of procs) available in OPAL?
>> WHY: At least 2 BTLs are using this info (*more below)
>> WHERE: usnic and ugni
>> TIMEOUT: there have already been some inflammatory emails about this; let's discuss on next Tuesday's teleconf: Tue, 5 Aug 2014
>> This is an open question. We *have* the information at the time that the BTLs are initialized: do we allow that information to go down to OPAL?
>> Ralph added this info down in OPAL in r32355, but George reverted it in r32361.
>> Points for: YES, WE SHOULD
>> +++ 2 BTLs were using it (usnic, ugni)
>> +++ Other RTE job-related info is already in OPAL (num local ranks, local rank)
>> Points for: NO, WE SHOULD NOT
>> --- What exactly is this number (e.g., num currently-connected procs?), and when is it updated?
>> --- We need to precisely delineate what belongs in OPAL vs. above-OPAL
> --- Using this information to configure the communication environment limits the scope of the communication substrate to a static application (static in its number of participants). Under this assumption, one can simply wait until the first add_procs call to compute the number of processes, a solution as “correct” as the current one.

Not necessarily - it depends on how the value is used and how it is communicated. Some of us have explored other options for using this info that aren't static, but where the info is still of use.
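For concreteness, here is a minimal sketch of the add_procs-based alternative George describes, in the shape of a hypothetical BTL module (the type and function names are illustrative, not actual Open MPI code):

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical BTL module: derive the peer count lazily from the
     * first add_procs() call instead of reading a job-wide constant. */
    typedef struct {
        size_t peer_count;       /* 0 until the first add_procs() */
        bool   resources_sized;  /* have per-peer resources been sized? */
    } example_btl_module_t;

    static int example_btl_add_procs(example_btl_module_t *btl, size_t nprocs)
    {
        if (!btl->resources_sized) {
            /* First call: size per-peer resources from the procs handed
             * to us, not from a global job-size field. */
            btl->peer_count = nprocs;
            btl->resources_sized = true;
        } else {
            /* Later calls (e.g. dynamically added procs) grow the count. */
            btl->peer_count += nprocs;
        }
        return 0;
    }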

> The other “global” information that was made available in OPAL (num_local_peers and my_local_rank) is only used by the local BTLs (SM, SMCUDA, and VADER). Moreover, my_local_rank is only used to decide who initializes the backend file, something that could easily be done with an atomic operation. The number of local processes is used to prevent SM from activating itself if we don’t have at least 2 processes per node. So their usage is minimally invasive and could eventually be phased out with a little effort.
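The single-initializer election George mentions can indeed be done with a compare-and-swap on a flag in the shared segment; a minimal sketch using C11 atomics (the segment layout and names are illustrative, not the actual SM BTL code):

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Lives at a known offset in the (zero-filled) shared-memory segment. */
    typedef struct {
        atomic_int init_state;  /* 0 = unclaimed, 1 = claimed, 2 = done */
    } example_sm_header_t;

    /* Returns true in exactly one process, which then initializes the
     * backend file; all others wait until initialization completes. */
    static bool example_claim_initializer(example_sm_header_t *hdr)
    {
        int expected = 0;
        if (atomic_compare_exchange_strong(&hdr->init_state, &expected, 1)) {
            return true;  /* we won the race: perform the init */
        }
        while (2 != atomic_load(&hdr->init_state)) {
            ;             /* spin until the winner finishes */
        }
        return false;
    }

    /* The winner calls this once the backend file is ready. */
    static void example_mark_init_done(example_sm_header_t *hdr)
    {
        atomic_store(&hdr->init_state, 2);
    }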

FWIW: the new PMI abstraction is in OPAL because it is RTE-agnostic. So all the info being discussed is actually captured first in the OPAL layer and stored in the OPAL dstore framework. In the current code, the RTE grabs the data and exposes it to the OMPI layer, which then pushes it back down into the OPAL proc.h struct.

<shrug> Since anyone can freely query the info from opal/pmix or opal/dstore, the question is somewhat moot. The info is there, in the OPAL layer, prior to the BTLs being initialized. If you don't want it in global storage, people can just get it from the appropriate OPAL API.
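To make that concrete: a BTL that wants the value without a global would do a keyed lookup against the datastore during its init. The sketch below shows the pattern only; the fetch call and key name are placeholders, not the real opal/dstore API:

    #include <stdint.h>

    /* Placeholder for whatever keyed fetch the OPAL datastore exposes;
     * this declaration is illustrative, not a real OPAL function. */
    extern int example_dstore_fetch(const char *key, uint32_t *value_out);

    static int example_fetch_job_size(uint32_t *num_procs_out)
    {
        /* "opal.job.size" is a placeholder key, not a real OPAL constant. */
        return example_dstore_fetch("opal.job.size", num_procs_out);
    }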

So what are we actually debating here? Global storage vs API call?

> George.
>> FWIW: here's how ompi_process_info.num_procs was used before the BTL move down to OPAL:
>> - usnic: for a minor latency optimization / sizing of a shared receive buffer queue length, and for the initial size of a peer lookup hash
>> - ugni: to determine the size of the per-peer buffers used for send/recv communication
>> --
>> Jeff Squyres
>> jsquyres_at_[hidden]
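Both of the uses Jeff lists above boil down to sizing per-peer resources once, at init time, from an upper bound on the number of peers; roughly like this (a sketch with illustrative names and constants, not the actual usnic/ugni code):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        uint32_t  num_peers;     /* job size captured at init time */
        size_t    rq_length;     /* receive queue depth (usnic-style use) */
        void    **peer_buffers;  /* one buffer slot per peer (ugni-style use) */
    } example_btl_resources_t;

    static int example_size_resources(example_btl_resources_t *res,
                                      uint32_t job_size)
    {
        res->num_peers = job_size;
        /* Scale the queue depth to the expected peer count; the factor
         * of 16 is arbitrary, purely for illustration. */
        res->rq_length = 16 * (size_t)job_size;
        /* Reserve a per-peer buffer pointer up front. */
        res->peer_buffers = calloc(job_size, sizeof(void *));
        return (NULL == res->peer_buffers) ? -1 : 0;
    }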