Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Deadlock with mpi_init_thread + mpi_file_set_view
From: Rob Latham (robl_at_[hidden])
Date: 2011-04-01 16:20:17


On Thu, Mar 31, 2011 at 01:03:50PM -0400, fah10_at_[hidden] wrote:
> Hi,
> I've compiled Open MPI 1.4.3 with --enable-mpi-threads, and I always
> get a deadlock when calling mpi_file_set_view.
> The Fortran program that calls these routines hasn't spawned any
> extra threads at the point where the error occurs.
> The program works fine if I either use mpi_init instead of
> mpi_init_thread(MPI_THREAD_SERIALIZED), or start the program with
> only one MPI process.
> On abort, I get the backtrace attached below.
>
> Does anyone know how to fix this?
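
If I understand the report, the failing sequence is roughly the
following. This is a sketch in C of the scenario described above (the
reporter's actual program is Fortran and is not shown in this thread);
the file name and datatypes here are my assumptions:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_File fh;

        /* the threading level requested in the report */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);

        /* file name chosen only for illustration */
        MPI_File_open(MPI_COMM_WORLD, "test.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* the reported deadlock occurs here with more than one process */
        MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "native",
                          MPI_INFO_NULL);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

Running something like "mpirun -np 2 ./a.out" should match the
multi-process condition under which the hang was reported.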

Even inside MPICH2, I have given little attention to thread safety in
the MPI-IO routines. In MPICH2, each MPI_File* function grabs the big
critical-section lock -- not pretty, but it gets the job done.
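
The pattern is roughly this (an illustration of the coarse-grained
approach, not MPICH2's actual source; the function name is
hypothetical):

    #include <pthread.h>

    /* one global lock serializing every MPI_File_* entry point */
    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    int MPIO_file_op(void)   /* hypothetical MPI-IO entry point */
    {
        int rc;
        pthread_mutex_lock(&big_lock);
        rc = 0;  /* ... the real I/O work happens here ... */
        pthread_mutex_unlock(&big_lock);
        return rc;
    }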

I don't know how the locking works in the copy ported to Open MPI.
Furthermore, the MPI-IO library inside Open MPI 1.4.3 is pretty old,
so I wonder whether the locking we have added over the years would
help. Can you try Open MPI 1.5.3 and report what happens?

==rob

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA