Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Deadlock with mpi_init_thread + mpi_file_set_view
From: Rob Latham (robl_at_[hidden])
Date: 2011-04-01 16:20:17


On Thu, Mar 31, 2011 at 01:03:50PM -0400, fah10_at_[hidden] wrote:
> Hi,
> I've compiled Open MPI 1.4.3 with --enable-mpi-threads, and I always
> get a deadlock when calling mpi_file_set_view.
> The Fortran program that calls these routines has not spawned any
> extra threads at the point where the error occurs.
> The program works fine if I either use mpi_init instead of
> mpi_init_thread (with MPI_THREAD_SERIALIZED), or start the program
> with only one MPI process.
> On abort, I'm getting the backtrace attached below.
>
> Does anyone know how to fix this?
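
A minimal C sketch of the pattern described above, for reference. The
original program was Fortran and is not reproduced here; the file name,
datatypes, and view parameters below are hypothetical placeholders.

#include <mpi.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_File fh;

    /* Requesting MPI_THREAD_SERIALIZED instead of calling plain
     * MPI_Init() is what reportedly triggers the deadlock. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);

    /* Hypothetical file name and access mode. */
    MPI_File_open(MPI_COMM_WORLD, "testfile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* The reported hang occurs here when run with more than one process. */
    MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "native", MPI_INFO_NULL);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}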

Even inside MPICH2, I have given little attention to thread safety in
the MPI-IO routines: each MPI_File* function grabs the big
critical-section lock -- not pretty, but it gets the job done.
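
For illustration only -- this is not MPICH2's actual source. The "big
critical section lock" approach amounts to wrapping every MPI_File*
entry point in one global mutex, so the MPI-IO layer never runs
concurrently even when threads are enabled. A hypothetical wrapper for
a single routine might look like:

#include <pthread.h>
#include <mpi.h>

static pthread_mutex_t big_cs_lock = PTHREAD_MUTEX_INITIALIZER;

/* Hypothetical wrapper: serialize one MPI-IO routine behind a single
 * global lock; the real library would do this for every entry point. */
int file_set_view_serialized(MPI_File fh, MPI_Offset disp,
                             MPI_Datatype etype, MPI_Datatype filetype,
                             const char *datarep, MPI_Info info)
{
    int err;
    pthread_mutex_lock(&big_cs_lock);    /* enter the big critical section */
    err = MPI_File_set_view(fh, disp, etype, filetype, datarep, info);
    pthread_mutex_unlock(&big_cs_lock);  /* leave the big critical section */
    return err;
}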

I don't know how the locking works once that code is ported into Open
MPI. Furthermore, the MPI-IO library inside Open MPI 1.4.3 is pretty
old, and I wonder whether the locking we have added over the years
would help. Can you try Open MPI 1.5.3 and report what happens?

==rob

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA