Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Deadlocks and warnings from libevent when using MPI_THREAD_MULTIPLE
From: Ralph Castain (rhc_at_[hidden])
Date: 2014-04-28 12:23:45


It isn't that simple. In some cases, THREAD_MULTIPLE works just fine; in others, it doesn't. Trying to devise logic that accurately detects when it does and doesn't work would be extremely difficult, and in many cases the answer is application-dependent. If we disable it for everyone, then those who can make it work get upset.
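
FWIW, the one thing an application can always do is check the thread level the library actually grants. That only tells you what the library claims to support, not whether a particular transport path is safe, but it's the portable first step. A minimal sketch using only standard MPI calls:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Request the highest level; 'provided' reports what was granted. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "granted thread level %d < MPI_THREAD_MULTIPLE\n",
                provided);
        MPI_Finalize();  /* bail out, or fall back to single-threaded MPI */
        return 1;
    }
    /* ... per the standard, it is now legal to call MPI from
     * multiple threads concurrently ... */
    MPI_Finalize();
    return 0;
}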

We don't like the situation either :-(

On Apr 28, 2014, at 8:03 AM, Jeffrey A Cummings <Jeffrey.A.Cummings_at_[hidden]> wrote:

> Wouldn't you save yourself work and your users confusion if you disabled options that don't currently work?
>
>
> Jeffrey A. Cummings
> Engineering Specialist
> Performance Modeling and Analysis Department
> Systems Analysis and Simulation Subdivision
> Systems Engineering Division
> Engineering and Technology Group
> The Aerospace Corporation
> 571-307-4220
> jeffrey.a.cummings_at_[hidden]
>
>
>
> From: Ralph Castain <rhc_at_[hidden]>
> To: Open MPI Users <users_at_[hidden]>
> Date: 04/25/2014 05:40 PM
> Subject: Re: [OMPI users] Deadlocks and warnings from libevent when using MPI_THREAD_MULTIPLE
> Sent by: "users" <users-bounces_at_[hidden]>
>
>
>
> We don't fully support THREAD_MULTIPLE, and most definitely not when using IB (InfiniBand). We plan to extend that coverage in the 1.9 series.
>
>
> On Apr 25, 2014, at 2:22 PM, Markus Wittmann <markus.wittmann_at_[hidden]> wrote:
>
> > Hi everyone,
> >
> > I'm using the current Open MPI 1.8.1 release and observe
> > non-deterministic deadlocks and warnings from libevent when using
> > MPI_THREAD_MULTIPLE. Open MPI has been configured with
> > --enable-mpi-thread-multiple --with-tm --with-verbs (see the attached
> > config.log).
> >
> > Attached is a sample application in which each process spawns a thread
> > after MPI_Init_thread has been called. The thread calls MPI_Recv, which
> > blocks until the matching MPI_Send is issued just before MPI_Finalize
> > in the main thread. (AFAIK, MPICH uses this kind of mechanism to
> > implement a progress thread.) Meanwhile, the main thread exchanges data
> > with its left/right neighbors via MPI_Isend/MPI_Irecv.
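> >
> > In outline, the test does roughly the following (a sketch of the
> > structure only; the attached test.c is the full version, and details
> > like the tag value are choices made here for illustration):
> >
> > #include <mpi.h>
> > #include <pthread.h>
> > #include <stdio.h>
> >
> > #define WAKEUP_TAG 99  /* tag reserved for waking the blocked thread */
> >
> > static void *thread_func(void *arg)
> > {
> >     int dummy, rank = *(int *)arg;
> >     printf("[%d] progress thread...\n", rank);
> >     /* Blocks here until the main thread sends the wakeup message
> >      * right before MPI_Finalize. */
> >     MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, WAKEUP_TAG,
> >              MPI_COMM_WORLD, MPI_STATUS_IGNORE);
> >     return NULL;
> > }
> >
> > int main(int argc, char **argv)
> > {
> >     int provided, rank, size, left, right, sbuf, dummy = 0, rbuf[2];
> >     pthread_t thr;
> >     MPI_Request req[4];
> >
> >     MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
> >     if (provided < MPI_THREAD_MULTIPLE)
> >         MPI_Abort(MPI_COMM_WORLD, 1);
> >     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >     MPI_Comm_size(MPI_COMM_WORLD, &size);
> >     left  = (rank - 1 + size) % size;
> >     right = (rank + 1) % size;
> >
> >     pthread_create(&thr, NULL, thread_func, &rank);
> >
> >     /* Main thread: nonblocking exchange with both neighbors. */
> >     printf("[%d] isend/irecv.\n", rank);
> >     sbuf = rank;
> >     MPI_Isend(&sbuf,    1, MPI_INT, left,  0, MPI_COMM_WORLD, &req[0]);
> >     MPI_Isend(&sbuf,    1, MPI_INT, right, 0, MPI_COMM_WORLD, &req[1]);
> >     MPI_Irecv(&rbuf[0], 1, MPI_INT, left,  0, MPI_COMM_WORLD, &req[2]);
> >     MPI_Irecv(&rbuf[1], 1, MPI_INT, right, 0, MPI_COMM_WORLD, &req[3]);
> >     printf("[%d] waitall.\n", rank);
> >     MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
> >
> >     /* Wake the blocked thread (send to self), then shut down. */
> >     MPI_Send(&dummy, 1, MPI_INT, rank, WAKEUP_TAG, MPI_COMM_WORLD);
> >     pthread_join(thr, NULL);
> >     MPI_Finalize();
> >     return 0;
> > }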
> >
> > I only see this when the MPI processes run on separate nodes, as in
> > the following:
> >
> > $ mpiexec -n 2 -map-by node ./test
> > [0] isend/irecv.
> > [0] progress thread...
> > [0] waitall.
> > [warn] opal_libevent2021_event_base_loop: reentrant invocation. Only one event_base_loop can run on each event_base at once.
> > [1] isend/irecv.
> > [1] progress thread...
> > [1] waitall.
> > [warn] opal_libevent2021_event_base_loop: reentrant invocation. Only one event_base_loop can run on each event_base at once.
> >
> > <no further output...>
> >
> > Can anybody confirm this?
> >
> > Best regards,
> > Markus
> >
> > --
> > Markus Wittmann, HPC Services
> > Friedrich-Alexander-Universität Erlangen-Nürnberg
> > Regionales Rechenzentrum Erlangen (RRZE)
> > Martensstrasse 1, 91058 Erlangen, Germany
> > http://www.rrze.fau.de/hpc/
> > <info.tar.bz2><test.c>
> > _______________________________________________
> > users mailing list
> > users_at_[hidden]
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users