On Feb 27, 2012, at 4:42 AM, Paul Kapinos wrote:
> Dear Open MPI developer,
> four environment variables are listed that Open MPI sets for every process. We use them for some scripting, and thank you for providing them.
> But a simple "mpiexec -np 1 env | grep OMPI" shows a lot more environment variables.
Yes, we set quite a few more, but those are intended solely for internal use and are not guaranteed. The list on the web site only identifies a set that are guaranteed to be provided.
> These are interesting for us:
> 1) OMPI_COMM_WORLD_LOCAL_SIZE - seems to contain the number of processes running on the specific node, see also
> Is this envvar also "stable", as OMPI_COMM_WORLD_LOCAL_RANK is? (This would make sense, as it looks like the counterpart to the OMPI_COMM_WORLD_SIZE / OMPI_COMM_WORLD_RANK pair.)
Yes, and I'll add it to the page.
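As a sketch of the kind of scripting Paul mentions, the snippet below reads the guaranteed OMPI_COMM_WORLD_* variables with sensible fallbacks for when the script is run outside of mpiexec. The helper name `ompi_layout` and the node-leader pattern are my own illustration, not anything Open MPI provides:

```python
import os

def ompi_layout(env=os.environ):
    """Read the guaranteed OMPI_* variables, defaulting to a single
    serial process when they are absent (e.g. outside mpiexec)."""
    return {
        "rank":       int(env.get("OMPI_COMM_WORLD_RANK", 0)),
        "size":       int(env.get("OMPI_COMM_WORLD_SIZE", 1)),
        "local_rank": int(env.get("OMPI_COMM_WORLD_LOCAL_RANK", 0)),
        "local_size": int(env.get("OMPI_COMM_WORLD_LOCAL_SIZE", 1)),
    }

# Example use: let exactly one process per node do node-local setup,
# such as creating a scratch directory.
layout = ompi_layout()
if layout["local_rank"] == 0:
    print("node leader among %d local processes" % layout["local_size"])
```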
> If yes, maybe it also should be documented in the Wiki page.
> 2) OMPI_COMM_WORLD_NODE_RANK - is that just a duplicate of OMPI_COMM_WORLD_LOCAL_RANK?
No - the "local rank" is your rank on the node within your own job. The "node rank" is your rank on the node overall. The two differ when you do a comm_spawn. For example, suppose you have two ranks from your initial job on a node, and then comm_spawn three additional ranks. Their values would look like this:
job/rank   local_rank   node_rank
  0/0          0            0
  0/1          1            1
  1/0          0            2
  1/1          1            3
  1/2          2            4
Again, I'll add it to the page.
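The table above can be reproduced with a toy model of the distinction (this is an illustration of the bookkeeping, not Open MPI's internals): node_rank is assigned in the order processes land on the node across all jobs, while local_rank counts only within each job. The function `assign_ranks` is hypothetical:

```python
from collections import defaultdict

def assign_ranks(arrivals):
    """arrivals: job ids in the order processes land on the node.
    Returns (job, local_rank, node_rank) triples."""
    per_job = defaultdict(int)  # next local_rank within each job
    out = []
    for node_rank, job in enumerate(arrivals):
        out.append((job, per_job[job], node_rank))
        per_job[job] += 1
    return out

# Two ranks from the initial job 0, then three comm_spawned ranks (job 1):
for job, local, node in assign_ranks([0, 0, 1, 1, 1]):
    print("%d/%d  local_rank=%d  node_rank=%d" % (job, local, local, node))
```

(The job-level rank column matches local_rank here only because all five processes in the example sit on the same node.)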
> Best wishes,
> Paul Kapinos
> Dipl.-Inform. Paul Kapinos - High Performance Computing,
> RWTH Aachen University, Center for Computing and Communication
> Seffenter Weg 23, D 52074 Aachen (Germany)
> Tel: +49 241/80-24915