
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] Open-MPI build of NAMD launched from srun over 20% slowed than with mpirun
From: Christopher Samuel (samuel_at_[hidden])
Date: 2013-09-03 21:13:41



On 03/09/13 10:56, Ralph Castain wrote:

> Yeah - --with-pmi=<path-to-pmi.h>

Actually I found that just --with-pmi=/usr/local/slurm/latest worked. :-)
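
For anyone trying to reproduce the build, the configure step would look roughly like the following. This is a sketch only: the install prefix and make parallelism are placeholders I've added; the --with-pmi path is the one that worked above, and it should point at the Slurm installation root (the directory containing the PMI header and libraries).

```shell
# Sketch of the Open MPI build with Slurm PMI support.
# --prefix and -j8 are site-specific placeholders, not from the original message.
./configure --prefix=/usr/local/openmpi-1.7.3 \
            --with-pmi=/usr/local/slurm/latest
make -j8
make install
```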

I've got some initial numbers for 64 cores. As I mentioned, the system
I found this on is so busy at the moment that I won't be able to run
anything bigger for a while, so I'm going to move my testing to another
system which is a bit quieter, but slower (Nehalem vs Sandy Bridge).

All the tests below use the same NAMD 2.9 binary and run within the
same Slurm job, so each run lands on the same cores. It's nice to
find that the C code at least seems to be backward compatible!
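
Concretely, the two launch modes compared below run inside the same allocation along these lines (the binary and input file names here are placeholders, not the actual job script):

```shell
# Inside the same Slurm job/allocation, so both runs use the same cores.
# namd2 and apoa1.namd are placeholder names for the NAMD binary and input.

# Launch through Open MPI's own launcher:
mpirun ./namd2 apoa1.namd

# Launch directly with srun (requires Open MPI configured --with-pmi):
srun ./namd2 apoa1.namd
```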

64 cores over 18 nodes:

Open-MPI 1.6.5 with mpirun - 7842 seconds
Open-MPI 1.7.3a1r29103 with srun - 7522 seconds

so that's about a 4% speedup.

64 cores over 10 nodes:

Open-MPI 1.7.3a1r29103 with mpirun - 8341 seconds
Open-MPI 1.7.3a1r29103 with srun - 7476 seconds

So that's about 11% faster, and the mpirun speed has decreased, though
of course that build includes PMI support, so perhaps that's the cause?
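
For the record, the percentages above fall out of the wall-clock times like so. The helper is mine, not part of the original message; it computes the speedup as (t_mpirun - t_srun) / t_srun, which matches the roughly 4% and 11% figures quoted.

```shell
# Percentage speedup of srun launch over mpirun launch, from the
# wall-clock times reported above.
speedup() {
    # (a - b) / b * 100, printed to one decimal place
    awk -v a="$1" -v b="$2" 'BEGIN { printf "%.1f\n", (a - b) / b * 100 }'
}

speedup 7842 7522   # 64 cores / 18 nodes -> 4.3
speedup 8341 7476   # 64 cores / 10 nodes -> 11.6
```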

cheers,
Chris
--
 Christopher Samuel Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: samuel_at_[hidden] Phone: +61 (0)3 903 55545
 http://www.vlsci.org.au/ http://twitter.com/vlsci
