
Open MPI User's Mailing List Archives


This page is part of a frozen web archive of this mailing list; no new mails have been added to it since July of 2016.


Subject: Re: [OMPI users] MPI daemon died unexpectedly
From: Grzegorz Maj (maju3_at_[hidden])
Date: 2012-03-27 07:05:51

John, thank you for your reply.

I checked the system logs and there are no signs of oom killer.

What do you mean by cleaning 'orphan' processes? Should I check whether
any processes are left after each job execution? I have always assumed
that when mpirun terminates, everything is cleaned up. Currently there
are no processes left on the nodes. The failure happened on Friday, and
since then tens of similar jobs have completed.

Grzegorz Maj
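The two checks discussed above (OOM-killer activity and leftover daemons) can be sketched as a quick shell pass on each node. This is a hedged sketch: log locations and tool availability vary by distribution, and the job/daemon names assume a standard Open MPI install.

```shell
# 1. Look for OOM-killer activity in the kernel log (older systems may
#    log to /var/log/messages or /var/log/syslog instead).
dmesg 2>/dev/null | grep -i 'out of memory\|killed process' \
    || echo "no OOM-killer messages found"

# 2. Look for leftover Open MPI daemons ('orted') from earlier jobs.
pgrep -fl orted || echo "no leftover orted processes"

# 3. If orphans are found, Open MPI ships an orte-clean tool to kill
#    stale daemons and remove old session directories:
# orte-clean --verbose
```

Running this on every node right after a failure narrows down whether the daemon was killed by the kernel or died on its own.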

2012/3/27 John Hearns <hearnsj_at_[hidden]>:
> Have you checked the system logs on the machines where this is running?
> Is it perhaps that the processes use lots of memory and the Out Of
> Memory (OOM) killer is killing them?
> Also check all nodes for left-over 'orphan' processes which are still
> running after a job finishes - these should be killed or the node
> rebooted.
> On 27/03/2012, Grzegorz Maj <maju3_at_[hidden]> wrote:
>> Hi,
>> I have an MPI application using ScaLAPACK routines. I'm running it on
>> Open MPI 1.4.3, using mpirun to launch fewer than 100 processes. I have
>> been using it quite extensively for almost two years and it almost
>> always works fine. However, once every 3-4 months I get the following
>> error during the execution:
>> --------------------------------------------------------------------------
>> A daemon (pid unknown) died unexpectedly on signal 1  while attempting to
>> launch so we are aborting.
>> There may be more information reported by the environment (see above).
>> This may be because the daemon was unable to find all the needed shared
>> libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
>> location of the shared libraries on the remote nodes and this will
>> automatically be forwarded to the remote nodes.
>> --------------------------------------------------------------------------
>> --------------------------------------------------------------------------
>> mpirun noticed that the job aborted, but has no info as to the process
>> that caused that situation.
>> --------------------------------------------------------------------------
>> --------------------------------------------------------------------------
>> mpirun was unable to cleanly terminate the daemons on the nodes shown
>> below. Additional manual cleanup may be required - please refer to
>> the "orte-clean" tool for assistance.
>> --------------------------------------------------------------------------
>> It says the daemon died while attempting to launch, but my
>> application (an MPI grid) had been running for about 14 minutes
>> before it failed; I know this from the log messages my application
>> produces during execution. mpirun printed no further information. The
>> only other thing I know is that mpirun's exit status was 1, which I
>> guess is not very helpful. There are no core files.
>> I would appreciate any suggestions on how to debug this issue.
>> Regards,
>> Grzegorz Maj
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
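Since mpirun reports no information about which daemon died or why, a follow-up run with Open MPI's daemon-diagnostic options can surface the orted's own error output the next time the failure occurs. A sketch, with the binary name and process count as placeholders:

```shell
# Hedged sketch: rerun the job with daemon diagnostics enabled.
# --debug-daemons and --leave-session-attached keep the remote orted
# daemons' stderr attached so their failure reason is printed rather
# than lost; plm verbosity reports daemon launch/death events.
CMD="mpirun --debug-daemons --leave-session-attached --mca plm_base_verbose 5 -np 64 ./my_scalapack_app"

if command -v mpirun >/dev/null 2>&1; then
    $CMD 2>&1 | tee mpirun-debug.log || true
else
    echo "mpirun not found; would run: $CMD"
fi
```

For a failure that strikes only every few months, leaving these flags on permanently and capturing the log per job is usually cheaper than trying to reproduce it on demand.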