Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Deadlocks and warnings from libevent when using MPI_THREAD_MULTIPLE
From: Ralph Castain (rhc_at_[hidden])
Date: 2014-04-25 17:40:29


We don't fully support THREAD_MULTIPLE, and most definitely not when using IB. We are planning on extending that coverage in the 1.9 series.

On Apr 25, 2014, at 2:22 PM, Markus Wittmann <markus.wittmann_at_[hidden]> wrote:

> Hi everyone,
>
> I'm using the current Open MPI 1.8.1 release and observe
> non-deterministic deadlocks and warnings from libevent when using
> MPI_THREAD_MULTIPLE. Open MPI has been configured with
> --enable-mpi-thread-multiple --with-tm --with-verbs (see the attached
> config.log).
>
> Attached is a sample application in which each process spawns a thread
> after MPI_Init_thread has been called. The thread then calls MPI_Recv,
> which blocks until the matching MPI_Send is issued in the main thread
> just before MPI_Finalize. (AFAIK MPICH uses this kind of mechanism to
> implement a progress thread.) Meanwhile the main thread exchanges data
> with its right/left neighbors via MPI_Isend/MPI_Irecv.
>
> I only see this when the MPI processes run on separate nodes, as in
> the following:
>
> $ mpiexec -n 2 -map-by node ./test
> [0] isend/irecv.
> [0] progress thread...
> [0] waitall.
> [warn] opal_libevent2021_event_base_loop: reentrant invocation. Only one event_base_loop can run on each event_base at once.
> [1] isend/irecv.
> [1] progress thread...
> [1] waitall.
> [warn] opal_libevent2021_event_base_loop: reentrant invocation. Only one event_base_loop can run on each event_base at once.
>
> <no further output...>
>
> Can anybody confirm this?
>
> Best regards,
> Markus
>
> --
> Markus Wittmann, HPC Services
> Friedrich-Alexander-Universität Erlangen-Nürnberg
> Regionales Rechenzentrum Erlangen (RRZE)
> Martensstrasse 1, 91058 Erlangen, Germany
> http://www.rrze.fau.de/hpc/
> <info.tar.bz2><test.c>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users