Subject: Re: [OMPI users] Deadlock with mpi_init_thread + mpi_file_set_view
From: Pascal Deveze (Pascal.Deveze_at_[hidden])
Date: 2011-04-04 09:46:00


Why don't you use the command "mpirun" to run your MPI program?
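For example, with the a.out from your gdb session:

   mpirun -np 1 ./a.out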

Pascal

fah10_at_[hidden] wrote:
> Pascal Deveze wrote:
> > Could you check that your program closes all MPI-IO files before
> > calling MPI_Finalize?
>
> Yes, I checked that. All files should be closed. I've also written a
> small test program, which is attached below. The output refers to
> openmpi-1.5.3 with threading support, compiled with gcc.
>
> I also tried Intel Fortran instead of gfortran, as well as a similar
> test program written in C (compiled with gcc or Intel C). The result
> is always the same.
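>
> For reference, the C variant is roughly the following (a minimal
> sketch along the lines of what I compiled, not the exact file):
>
> #include <stdio.h>
> #include <mpi.h>
>
> int main(int argc, char **argv)
> {
>     int iprov;
>     MPI_File fh;
>
>     /* request serialized threading, as in the Fortran version */
>     MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &iprov);
>     if (iprov < MPI_THREAD_SERIALIZED) {
>         fprintf(stderr, "mpi_init_thread\n");
>         MPI_Abort(MPI_COMM_WORLD, 1);
>     }
>
>     /* open and immediately close an MPI-IO file */
>     MPI_File_open(MPI_COMM_WORLD, "test.dat",
>                   MPI_MODE_WRONLY | MPI_MODE_CREATE, MPI_INFO_NULL, &fh);
>     MPI_File_close(&fh);
>
>     MPI_Finalize();   /* aborts here, with the same backtrace */
>     return 0;
> }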
>
>
> Fabian
>
>
> program mpiio
>   use mpi
>   implicit none
>
>   integer(kind=4) :: iprov, fh, ierr
>
>   ! request serialized threading; bail out if it is not provided
>   call mpi_init_thread(MPI_THREAD_SERIALIZED, iprov, ierr)
>   if (iprov < MPI_THREAD_SERIALIZED) stop 'mpi_init_thread'
>
>   ! open and immediately close an MPI-IO file
>   call mpi_file_open(MPI_COMM_WORLD, 'test.dat', &
>        MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)
>
>   call mpi_file_close(fh, ierr)
>
>   ! the abort happens inside this call
>   call mpi_finalize(ierr)
> end program mpiio
>
> > mpif90 -g mpiio.F90
> > gdb ./a.out
> (gdb) r
> Starting program: a.out
> [Thread debugging using libthread_db enabled]
> [New Thread 0xb7fddb70 (LWP 25930)]
> [New Thread 0xb77dcb70 (LWP 25933)]
> opal_mutex_lock(): Resource deadlock avoided
>
> Program received signal SIGABRT, Aborted.
> 0x0012e416 in __kernel_vsyscall ()
>
> (gdb) bt
> #0 0x0012e416 in __kernel_vsyscall ()
> #1 0x0047f941 in raise (sig=6) at
> ../nptl/sysdeps/unix/sysv/linux/raise.c:64
> #2 0x00482e42 in abort () at abort.c:92
> #3 0x00189239 in opal_mutex_lock (type=COMM_ATTR, key=0xbffff0f4,
> predefined=false) at ../opal/threads/mutex_unix.h:106
> #4 ompi_attr_free_keyval (type=COMM_ATTR, key=0xbffff0f4,
> predefined=false) at attribute/attribute.c:649
> #5 0x001c8c3c in PMPI_Keyval_free (keyval=0x0) at pkeyval_free.c:52
> #6 0x006e3e8d in ADIOI_End_call (comm=0x3100e0, keyval=10,
> attribute_val=0x0, extra_state=0x0) at ad_end.c:82
> #7 0x001895c1 in ompi_attr_delete (type=COMM_ATTR, object=0x3100e0,
> attr_hash=0x80cd258, key=10, predefined=true, need_lock=false)
> at attribute/attribute.c:734
> #8 0x0018995b in ompi_attr_delete_all (type=COMM_ATTR,
> object=0x3100e0, attr_hash=0x80cd258) at attribute/attribute.c:1043
> #9 0x001aa6af in ompi_mpi_finalize () at runtime/ompi_mpi_finalize.c:133
> #10 0x001c06c8 in PMPI_Finalize () at pfinalize.c:46
> #11 0x00151c37 in mpi_finalize_f (ierr=0xbffff2c8) at pfinalize_f.c:62
> #12 0x080489fb in mpiio () at mpiio.F90:15
> #13 0x08048a2b in main ()
> #14 0x0046bce7 in __libc_start_main (main=0x8048a00 <main>, argc=1,
> ubp_av=0xbffff3b4, init=0x8048a50 <__libc_csu_init>, fini=0x8048a40
> <__libc_csu_fini>,
> rtld_fini=0x11eb60 <_dl_fini>, stack_end=0xbffff3ac) at
> libc-start.c:226
> #15 0x080488c1 in _start ()
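>
> If I read the trace correctly, ompi_attr_delete_all already holds the
> attribute mutex (frame #7 shows need_lock=false) when the ROMIO cleanup
> in ADIOI_End_call calls PMPI_Keyval_free, which tries to take the same
> mutex again, so the error-checking mutex aborts with "Resource deadlock
> avoided".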