Open MPI User's Mailing List Archives

From: Ralph Castain (rhc_at_[hidden])
Date: 2007-01-24 08:25:25


Hi Geoff

On 1/23/07 4:31 PM, "Geoff Galitz" <geoff_at_[hidden]> wrote:

>
>
> Hello,
>
> On the following system:
>
> OpenMPI 1.1.1
> SGE 6.0 (with tight integration)
> Scientific Linux 4.3
> Dual Dual-Core Opterons
>
>
> MPI jobs are oversubscribing the nodes. No matter where jobs are
> launched by the scheduler, they always stack up on the first node
> (node00) and continue to stack even though the system load exceeds 6
> (on a 4-processor box). Each node is defined as 4 slots with 4 max
> slots. The MPI jobs launch via "mpirun -np (some-number-of-
> processors)" from within the scheduler.

I'm afraid I don't understand the situation. Are you saying that all of the
processes in a single job are trying to execute on node00??

Or are you saying that multiple mpiruns are all executing application
processes on the same nodes? In other words, that each mpirun is not
recognizing that another mpirun has already used the job slots on a node, and
therefore the sum of the mpiruns is overloading the node?

If the latter, then let me know and I'll provide a more thorough
explanation. The short answer, though, is yes - that would be true. Each
mpirun would have no knowledge of what is happening on a node due to another
instance of mpirun.
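
As a rough illustration (the hostnames, slot counts, and application name
here are assumptions, not your actual setup), suppose each mpirun
independently reads the same hostfile:

    # hostfile seen by every mpirun instance
    node00 slots=4 max_slots=4
    node01 slots=4 max_slots=4
    node02 slots=4 max_slots=4

and two jobs are launched at about the same time:

    mpirun -np 4 ./my_app    # job A: fills the 4 slots on node00
    mpirun -np 4 ./my_app    # job B: also starts from node00, oversubscribing it

Each mpirun fills slots from the top of its own host list with no knowledge
of the other, so both place their processes on node00 even though node01 and
node02 sit idle.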

>
> It seems to me that MPI is not detecting that the nodes are
> overloaded, and that this is due to the way the job slots are defined and
> how mpirun is being called. If I read the documentation correctly, a
> single mpirun consumes one job slot no matter how many processes it
> launches (a short sketch of this slot accounting follows at the end of
> this message). We can change the number of job slots, but then we expect
> to waste processors since only one mpirun job will run on any node, even
> if the job is only a two-processor job.
>
> Can someone enlighten me?
>
> -geoff
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
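
For reference, here is a minimal sketch of the usual slot accounting under
SGE (the parallel environment name, script name, and slot count are made up
for illustration): the job requests as many slots as MPI processes it will
start, and mpirun launches exactly that many:

    # submit requesting 8 slots through a parallel environment
    qsub -pe mpi 8 job.sh

    # inside job.sh, launch one process per granted slot ($NSLOTS is set by SGE)
    mpirun -np $NSLOTS ./my_app

If a job instead requests a single slot but mpirun starts several processes,
SGE's bookkeeping and the real load on the node diverge, which produces the
kind of stacking on node00 described above.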