For those not following the user list, this request was generated today:
>>>> Absolutely, these are useful time and time again so should be part of
>>>> the API and hence stable. Care to mention what they are and I'll add it
>>>> to my note as something to change when upgrading to 1.3 (we are looking
>>>> at testing a snapshot in the near future).
>>> OMPI_COMM_WORLD_SIZE #procs in the job
>>> OMPI_COMM_WORLD_LOCAL_SIZE #procs in this job that are sharing the node
>>> OMPI_UNIVERSE_SIZE total #slots allocated to this user
>>> (across all nodes)
>>> OMPI_COMM_WORLD_RANK proc's rank
>>> OMPI_COMM_WORLD_LOCAL_RANK local rank on node - lowest rank'd proc on
>>> the node is given local_rank=0
>>> If there are others that would be useful, now is definitely the time to
>>> speak up!
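For anyone who wants to use these from a script or from code outside the MPI library itself, a launched process simply reads them from its environment. A minimal Python sketch of that (the fallback defaults for a non-mpirun launch are my own choice, not something Open MPI defines):

```python
import os

def ompi_layout(environ=os.environ):
    """Read the Open MPI-provided placement variables listed above.

    The defaults (size 1, rank 0) are an assumption for the case where
    the process was not started under mpirun, so none of the variables
    are set.
    """
    def geti(name, default):
        return int(environ.get(name, default))

    return {
        "world_size": geti("OMPI_COMM_WORLD_SIZE", 1),        # procs in the job
        "local_size": geti("OMPI_COMM_WORLD_LOCAL_SIZE", 1),  # procs on this node
        "universe_size": geti("OMPI_UNIVERSE_SIZE", 1),       # total allocated slots
        "rank": geti("OMPI_COMM_WORLD_RANK", 0),              # global rank
        "local_rank": geti("OMPI_COMM_WORLD_LOCAL_RANK", 0),  # rank on this node
    }
```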
>> The only other one I'd like to see is some kind of global identifier for
>> the job, but as far as I can tell openmpi doesn't have such a concept.
> Not really - of course, many environments have a jobid they assign at time
> of allocation. We could create a unified identifier from that to ensure a
> consistent name was always available, but the problem would be that not all
> environments provide it (e.g., rsh). To guarantee that the variable would
> always be there, we would have to make something up in those cases.
I could easily create such an envar, even for non-managed environments. The
plan would be to use the RM-provided jobid where one was available, and to
use the mpirun jobid where not.
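A sketch of that selection logic, just to make the proposal concrete (the particular RM variables consulted, their ordering, and all names here are my guesses for illustration, not anything Open MPI actually implements):

```python
# Hypothetical: which RM-provided variables to consult, and in what
# order, is an assumption -- shown only to illustrate the "RM jobid
# where available, mpirun jobid where not" plan.
RM_JOBID_VARS = ("SLURM_JOB_ID", "PBS_JOBID", "LSB_JOBID")

def derive_ompi_job_id(environ, mpirun_jobid):
    """Return the value that would be exported as OMPI_JOB_ID."""
    for var in RM_JOBID_VARS:
        if environ.get(var):
            return environ[var]      # managed environment: reuse RM's id
    return mpirun_jobid              # unmanaged (e.g. rsh): mpirun's own id
```

Either way the variable is always present, which was the guarantee at issue above.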
My thought was to call it OMPI_JOB_ID, unless someone has another
preference. Any objection to my doing so, and/or suggestions on alternative
names?