Subject: Re: [OMPI users] MPIIO and EXT3 file systems
From: Rob Latham (robl_at_[hidden])
Date: 2011-08-29 15:22:06


On Mon, Aug 22, 2011 at 08:38:52AM -0700, Tom Rosmond wrote:
> Yes, we are using collective I/O (mpi_file_write_at_all,
> mpi_file_read_at_all). The swapping between Fortran and MPI-IO is just
> branches in the code at strategic locations. Although the MPI-IO files
> are readable with Fortran direct access, we don't read them that way
> from within the application because the data is organized differently
> in the files.
>
> > Do you use MPI datatypes to describe either a file view or the
> > application data? Such noncontiguous-in-memory and/or
> > noncontiguous-in-file access patterns will also trigger fcntl lock
> > calls. You can use an MPI-IO hint to disable data sieving, at a
> > potentially disastrous performance cost.
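[The hint in question is ROMIO's data-sieving control, passed through an
MPI_Info object at open time; a sketch in C, with a hypothetical file
name:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Info info;
        MPI_File fh;

        MPI_Init(&argc, &argv);

        /* ROMIO-specific hint keys; implementations ignore hints
           they do not recognize, so this is safe to try anywhere. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "romio_ds_read", "disable");
        MPI_Info_set(info, "romio_ds_write", "disable");

        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_RDWR, info, &fh);
        MPI_Info_free(&info);

        /* ... reads and writes as usual ... */

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }
]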
>
> Yes, we use an 'mpi_type_indexed' datatype to describe the data
> organization.
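[An indexed datatype used as a file view looks roughly like the
self-contained C sketch below; the block lengths, displacements, and
file name are invented. Stripped down like this, it is also roughly the
shape of the small reproducer suggested at the end of this message:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Datatype filetype;
        MPI_Status status;
        int blocklens[2] = {2, 3};   /* hypothetical layout */
        int displs[2]    = {0, 8};   /* in units of MPI_DOUBLE */
        double buf[5] = {0};

        MPI_Init(&argc, &argv);

        MPI_Type_indexed(2, blocklens, displs, MPI_DOUBLE, &filetype);
        MPI_Type_commit(&filetype);

        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* The file view exposes the noncontiguous layout to the
           library; this is what triggers data sieving and the
           associated fcntl lock traffic. */
        MPI_File_set_view(fh, 0, MPI_DOUBLE, filetype, "native",
                          MPI_INFO_NULL);

        MPI_File_write_at_all(fh, 0, buf, 5, MPI_DOUBLE, &status);

        MPI_File_close(&fh);
        MPI_Type_free(&filetype);
        MPI_Finalize();
        return 0;
    }
]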
>
> Any thoughts about the XFS vs EXT3 question?

We have machines at the lab with XFS and machines with EXT3: I can't
say I have ever seen an MPI-IO problem we could trace to the specific
file system. The MPI-IO library just makes a bunch of POSIX I/O
calls under the hood: if write(2), open(2), and friends are broken for
XFS or EXT3, those kinds of bugs get a lot of attention :>

At this point the usual course of action is "post a small reproducing
test case". Your first message said this was a big code, so perhaps
that will not be so easy...
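[For what it's worth, a reproducer distilled from the indexed-filetype
sketch above could be compiled once and run with the output file placed
on each file system in turn; the names here are made up:

    mpicc -o repro repro.c
    mpiexec -n 4 ./repro    # from a directory on the XFS mount
    mpiexec -n 4 ./repro    # from a directory on the EXT3 mount
]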

==rob

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA