
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] Open-MPI build of NAMD launched from srun over 20% slower than with mpirun
From: Christopher Samuel (samuel_at_[hidden])
Date: 2013-09-05 00:34:51


Hi Ralph,

On 05/09/13 12:50, Ralph Castain wrote:

> Jeff and I were looking at a similar issue today and suddenly
> realized that the mappings were different - i.e., what ranks are
> on what nodes differs depending on how you launch. You might want
> to check if that's the issue here as well. Just launch the
> attached program using mpirun vs srun and check to see if the maps
> are the same or not.
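[Ralph's attached program is not reproduced in the archive. A minimal
stand-in for it, a sketch rather than the actual attachment, would have
each rank report its host so the mpirun and srun rank-to-node maps can
be compared:

    /* Hypothetical map-checker (the real attachment is not shown):
     * each rank prints its rank and host name so the maps produced
     * by the two launchers can be diffed. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Get_processor_name(host, &len);
        printf("rank %d -> %s\n", rank, host);
        MPI_Finalize();
        return 0;
    }

Running it both ways, e.g. "mpirun ./map | sort -n" versus
"srun ./map | sort -n", makes any difference in the maps obvious.]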

Very interesting: the rank-to-node mappings are identical in all
cases (mpirun and srun, for both 1.6.5 and my test 1.7.3 snapshot),
but what does differ is as follows.

For the 1.6.5 build I see mpirun report:

number 0 universe size 64 universe envar 64

whereas srun reports:

number 1 universe size 64 universe envar NULL

For the 1.7.3 snapshot, both report "number 0", so the only difference
there is that mpirun has:

envar 64

whereas srun has:

envar NULL
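[For context, the fields above look like they could come from the
predefined communicator attributes plus an Open MPI environment
variable. A sketch of such a test, guessing that "number" is
MPI_APPNUM, "universe size" is MPI_UNIVERSE_SIZE, and "universe envar"
is the OMPI_UNIVERSE_SIZE environment variable (none of which the
original message confirms):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, aflag = 0, uflag = 0;
        int *appnum = NULL, *usize = NULL;
        const char *envar;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Predefined attributes are returned as pointers to int;
         * the flag says whether the attribute is set at all. */
        MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &appnum, &aflag);
        MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE,
                          &usize, &uflag);

        /* mpirun exports OMPI_UNIVERSE_SIZE into the environment;
         * a direct srun launch may leave it unset ("envar NULL"). */
        envar = getenv("OMPI_UNIVERSE_SIZE");

        if (rank == 0) {
            printf("number %d universe size %d universe envar %s\n",
                   aflag ? *appnum : -1,
                   uflag ? *usize : -1,
                   envar ? envar : "NULL");
        }

        MPI_Finalize();
        return 0;
    }

Under that reading, "envar NULL" would simply mean the variable is
absent from the environment when launching directly with srun.]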

Are these differences significant?

I'm intrigued that the problem child (srun 1.6.5) is the only one
where number is 1.

All the best,
--
 Christopher Samuel Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: samuel_at_[hidden] Phone: +61 (0)3 903 55545
