On Fri, 2008-07-11 at 08:01 -0600, Ralph H Castain wrote:
> >> I believe this is partly what motivated the creation of the MPI envars - to
> >> create a vehicle that -would- be guaranteed stable for just these purposes.
> >> The concern was that users were doing things that accessed internal envars
> >> which we changed from version to version. The new envars will remain fixed.
> > Absolutely, these are useful time and time again so should be part of
> > the API and hence stable. Care to mention what they are and I'll add it
> > to my note as something to change when upgrading to 1.3 (we are looking
> > at testing a snapshot in the near future).
> OMPI_COMM_WORLD_SIZE        #procs in the job
> OMPI_COMM_WORLD_LOCAL_SIZE  #procs in this job that are sharing the node
> OMPI_UNIVERSE_SIZE          total #slots allocated to this user
>                             (across all nodes)
> OMPI_COMM_WORLD_RANK        proc's rank
> OMPI_COMM_WORLD_LOCAL_RANK  local rank on node - the lowest ranked proc
>                             on the node is given local_rank=0
> If there are others that would be useful, now is definitely the time to
> speak up!
The only other one I'd like to see is some kind of global identifier for
the job, but as far as I can see I don't believe Open MPI has such a