Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] slurm and all-srun orterun
From: Sacerdoti, Federico (Federico.Sacerdoti_at_[hidden])
Date: 2008-03-05 11:16:11

Thanks Ralph,

First, we would be happy to test the slurm direct launch capability.
Regarding the failure case, I realize that the IB errors do not directly
affect the orted daemons. This is what we observed:

1. Parallel job started
2. IB errors caused some processes to fail (but not all)
3. slurm tears down the entire job, attempting to kill all orteds and
their user processes

We want this behavior: if any process of a parallel job dies, all
processes should be stopped. The orted daemons in charge of processes
that did not fail are the problem, as slurm was not able to kill them.
Sounds like this is a known issue in openmpi 1.2.x.
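For reference, this all-stop behavior is what slurm's KillOnBadExit parameter is meant to provide; a slurm.conf fragment sketching it (option name taken from the 1.2-era documentation, so treat it as an assumption):

```
# slurm.conf fragment (hypothetical cluster config):
# terminate every task in a job step as soon as any task
# exits with a non-zero exit code
KillOnBadExit=1
```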

In any case, the new direct launching methods sound promising. I am
surprised there are licensing issues with Slurm; is this a GPL-and-BSD
issue? I am CC'ing slurm author Moe; he may be able to help.

Thanks again and I look forward to testing the direct launch,

-----Original Message-----
From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On
Behalf Of Ralph Castain
Sent: Monday, March 03, 2008 8:19 PM
To: Open MPI Users <users_at_[hidden]>
Cc: Ralph Castain
Subject: Re: [OMPI users] slurm and all-srun orterun


I don't monitor the user list any more, but a friendly elf sent this
to me.

I'm not entirely sure what problem might be causing the behavior you are
seeing. Neither mpirun nor any orted should be impacted by IB problems;
they aren't MPI processes and thus never interact with IB. Only application
procs touch the IB subsystem - if an application proc fails, the orted
should see that and correctly order the shutdown of the job. So if you are
having IB problems, that wouldn't explain daemons failing.

If a daemon is aborting, that will cause problems in 1.2.x. We have seen
that SLURM (even though the daemons are launched via srun) doesn't
tell us when this happens, leaving Open MPI vulnerable to "hangs" as it
attempts to clean up and finds it can't do it. I'm not sure why you would
see a daemon die, though - the fact that an application process failed
shouldn't cause that to happen. Likewise, it would seem strange that the
application process would fail and the daemon not notice - this has nothing
to do with slurm, but is just the standard Linux "waitpid" mechanism.
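For what it's worth, the waitpid mechanism referred to here can be sketched in a few lines of Python. This is a minimal stand-in for what a launcher daemon does, not Open MPI's actual code; the child simply exits non-zero to simulate an application crash:

```python
import os

# Sketch of the standard Linux waitpid() mechanism a launcher daemon
# (like orted) uses to notice that a child application process died.
pid = os.fork()
if pid == 0:
    # Child: simulate an application process crashing.
    os._exit(1)

# Parent: block until the child changes state.
_, status = os.waitpid(pid, 0)
if os.WIFEXITED(status) and os.WEXITSTATUS(status) != 0:
    # A real daemon would now order the shutdown of the whole job.
    print("child exited with status", os.WEXITSTATUS(status))
```

Note the failure mode described below follows directly from this: waitpid only fires when the child actually exits, so a process that merely hangs is invisible to it.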

The most likely reason for the behavior you describe is that an application
process encounters an IB problem which blocks communication - but the
process doesn't actually abort or terminate, it just hangs there. In that
case, the orted doesn't see the process exit, so the system doesn't know it
should take any action.

That said, we know that 1.2.x has problems with clean shutdown in these
situations. Release 1.3 (when it comes out) addresses these issues and
appears (from our testing, at least) to be much more reliable about cleanup.
You should see a definite improvement in the detection of process failures
and subsequent cleanup.

As for your question, I am working as we speak on two new launch modes for
Open MPI:

1. "direct" - this uses mpirun to directly launch the application processes
without use of the intermediate daemons.

2. "standalone" - this uses the native launch command to simply launch the
application processes, without use of mpirun or the intermediate daemons.
The initial target environments for these capabilities are TM and SLURM. The
latter poses a bit of a challenge as we cannot use their API due to
licensing issues, so it will come a little later. We have a design for
getting around the problem - the ordering is more driven by priorities than
anything technical.

The direct launch capability -may- be included in 1.3 assuming it can be
completed in time for the release. If not, it will almost certainly be in
1.3.1. I'm expecting to complete the TM version in the next few days, and
perhaps get the SLURM version working sometime this month - but they will
need validation before being included in an official release.

I can keep you posted if you like - once this gets into our repository, you
are certainly welcome to try it out. I would welcome feedback on it.

Hope that helps

>> From: "Sacerdoti, Federico" <Federico.Sacerdoti_at_[hidden]>
>> Date: March 3, 2008 12:44:39 PM EST
>> To: "Open MPI Users" <users_at_[hidden]>
>> Subject: [OMPI users] slurm and all-srun orterun
>> Reply-To: Open MPI Users <users_at_[hidden]>
>> Hi,
>> We are migrating to openmpi on our large (~1000 node) cluster, and
>> plan
>> to use it exclusively on a multi-thousand core infiniband cluster in
>> the
>> near future. We had extensive problems with parallel processes not
>> dying
>> after a job crash, which was largely solved by switching to the slurm
>> resource manager.
>> While orterun supports slurm, it only uses the srun facility to launch
>> the "orted" daemons, which then start the actual user processes
>> themselves. In our recent migration to openmpi, we have noticed
>> occasions where orted did not correctly clean up after a parallel job
>> crash. In most cases the crash was due to an infiniband error. Most
>> worryingly slurm was not able to cleanup the orted, and it along with
>> user processes were left running.
>> At SC07 I was told that there is some talk of using srun to launch
>> both
>> orted and user processes, or alternatively use srun only. Either would
>> solve the cleanup problem, in our experience. Is Ralph Castain on this
>> list?
>> Thanks,
>> Federico
>> P.S.
>> We use the proctrack/linuxproc slurm process tracking plugin. As noted
>> in the config man page, this may fail to find certain processes, which
>> may explain why slurm could not clean up orted effectively.
>> man slurm.conf(5), version 1.2.22:
>> NOTE: "proctrack/linuxproc" and "proctrack/pgid" can fail to identify
>> all processes associated with a job since processes can become a child
>> of the init process (when the parent process terminates) or change their
>> process group. To reliably track all processes, one of the other
>> mechanisms utilizing kernel modifications is preferable.
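Following the man page's advice, a kernel-assisted tracker can be selected in slurm.conf; a hypothetical fragment (the alternative plugin named here is an assumption, since availability depends on platform and how slurm was built):

```
# slurm.conf: replace the /proc-scanning tracker with a kernel-assisted
# one so orphaned orteds cannot escape cleanup
#ProctrackType=proctrack/linuxproc
ProctrackType=proctrack/sgi_job
```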
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
