
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] RFC: Force Slurm to use PMI-1 unless PMI-2 is specifically requested
From: Christopher Samuel (samuel_at_[hidden])
Date: 2014-05-07 21:51:48



On 07/05/14 18:00, Ralph Castain wrote:

> Interesting - how many nodes were involved? As I said, the bad
> scaling becomes more evident at a fairly high node count.

Our x86-64 systems have low node counts (we've got BG/Q for capacity).
The cluster those tests were run on has 70 nodes, each with 16 cores,
so I suspect we're a long, long way from that pain point.

All the best!
Chris
--
 Christopher Samuel Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: samuel_at_[hidden] Phone: +61 (0)3 903 55545
 http://www.vlsci.org.au/ http://twitter.com/vlsci
