
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] [slurm-dev] Re: slurm-dev Memory accounting issues with mpirun (was Re: Open-MPI build of NAMD launched from srun over 20% slower than with mpirun)
From: Christopher Samuel (samuel_at_[hidden])
Date: 2013-08-07 03:42:17


On 07/08/13 16:59, Janne Blomqvist wrote:

> That is, the memory accounting is per task, and when launching
> using mpirun the number of tasks does not correspond to the number
> of MPI processes, but rather to the number of "orted" processes (1
> per node).

That appears to be correct: I am seeing 1 task for the batch step and
68 orted tasks when I use mpirun, whilst I see 1 task for the batch
step and 1104 namd2 tasks when I use srun.
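
For reference, I'm checking the per-step task counts and memory with
something along these lines (the job ID here is just a placeholder;
NTasks and MaxRSS are standard sacct format fields):

  sacct -j 123456 --format=JobID,JobName,NTasks,MaxRSS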

I can understand how that might result in Slurm (wrongly) thinking
that a single task is using more than its allowed memory per task,
but I'm not sure I understand how it could lead to Slurm thinking
the job is using vastly more memory than it actually is.
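
As a made-up illustration of that first part (numbers invented, not
from this job): with 16 ranks per node each using 1 GB, all running
as children of a single orted, the accounting could attribute
16 x 1 GB = ~16 GB to that one orted task; against a per-task limit
of, say, 2 GB that looks like a large overrun, even though each rank
is well within its own budget.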

cheers,
Chris
--
 Christopher Samuel Senior Systems Administrator
 VLSCI - Victorian Life Sciences Computation Initiative
 Email: samuel_at_[hidden] Phone: +61 (0)3 903 55545
 http://www.vlsci.org.au/ http://twitter.com/vlsci
