Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Parallel file write in fortran (+mpi)
From: Nicolas Bock (nicolasbock_at_[hidden])
Date: 2010-02-02 16:50:02

Hi Laurence,

I don't know whether it's as bad as a deadly sin, but for us parallel writes
are a huge problem and we get complete garbage in the file. Take a look at:

"Implementing MPI-IO Atomic Mode and Shared File Pointers Using MPI
One-Sided Communication," Robert Latham, Robert Ross, Rajeev Thakur,
International Journal of High Performance Computing Applications, 21,
132 (2007).

They describe an implementation of a mutex-like object in MPI. If you
protect writes to the file with an exclusive lock, you can serialize the
writes and take advantage of NFS's close-to-open cache coherence.
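
For illustration, here is a minimal sketch of one way to serialize the
writes. It is not the paper's RMA-based mutex; it just passes a token
from rank to rank with plain point-to-point messages, so only one rank
has the file open at a time. The file name 'output.dat', the tag, and
the unit number are made up for the example, and it assumes every rank
sees the same file through NFS close-to-open coherence:

program serial_write
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, token
  integer :: status(MPI_STATUS_SIZE)
  integer, parameter :: tag = 99, iu = 10
  ! Hypothetical output file, shared by all ranks over NFS.
  character(len=*), parameter :: fname = 'output.dat'

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! Wait for the token from the previous rank; rank 0 starts at once.
  if (rank > 0) then
     call MPI_Recv(token, 1, MPI_INTEGER, rank-1, tag, &
                   MPI_COMM_WORLD, status, ierr)
  end if

  ! Only one rank is inside this region at a time. Opening, writing,
  ! and closing here lets NFS close-to-open coherence make the data
  ! visible to the next writer.
  open(unit=iu, file=fname, position='append', action='write')
  write(iu, '(a,i0)') 'output from rank ', rank
  close(iu)

  ! Hand the token to the next rank, if there is one.
  if (rank < nprocs-1) then
     token = 1
     call MPI_Send(token, 1, MPI_INTEGER, rank+1, tag, &
                   MPI_COMM_WORLD, ierr)
  end if

  call MPI_Finalize(ierr)
end program serial_write

The obvious downside is that the writes happen strictly in rank order
and everyone else waits, which is exactly the serialization the mutex in
the paper gives you, just without the one-sided machinery.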


On Tue, Feb 2, 2010 at 08:27, Laurence Marks <L-marks_at_[hidden]> wrote:

> I have a question concerning having many processors in an MPI job all
> write to the same file -- not using MPI calls but with standard
> Fortran I/O. I know that this can lead to consistency issues, but it
> can also lead to OS issues with some flavors of NFS.
> At least in Fortran, there is nothing "wrong" with doing this. My
> question is whether this is "One of the Seven Deadly Sins" of MPI
> programming, or just frowned on. (That is, it should be OK even if it
> leads to nonsense files, and not lead to OS issues.) If it is a sin, I
> would appreciate a link to where this is spelt out in some "official"
> document or similar.
> --
> Laurence Marks
> Department of Materials Science and Engineering
> MSE Rm 2036 Cook Hall
> 2220 N Campus Drive
> Northwestern University
> Evanston, IL 60208, USA
> Tel: (847) 491-3996 Fax: (847) 491-7820
> email: L-marks at northwestern dot edu
> Web:
> Chair, Commission on Electron Crystallography of IUCR
> Electron crystallography is the branch of science that uses electron
> scattering and imaging to study the structure of matter.