Subject: Re: [OMPI users] Problem with NFS + PVFS2 + OpenMPI
From: Robert Latham (robl_at_[hidden])
Date: 2008-05-29 16:33:55


On Thu, May 29, 2008 at 04:48:49PM -0300, Davi Vercillo C. Garcia wrote:
> > Oh, I see you want to use ordered i/o in your application. PVFS
> > doesn't support that mode. However, since you know how much data each
> > process wants to write, a combination of MPI_Scan (to compute each
> > processes offset) and MPI_File_write_at_all (to carry out the
> > collective i/o) will give you the same result with likely better
> > performance (and has the nice side effect of working with pvfs).
>
> I don't quite understand this... what do I need to change in my code?

MPI_File_write_ordered has an interesting property (which you probably
know since you use it, but I'll spell it out anyway): writes end up
in the file in rank-order, but are not necessarily carried out in
rank-order.

Once each process knows the offsets and lengths of the writes the
preceding processes will do, it can write its own data. Observe that
rank 0 can write immediately, rank 1 only needs to know how much data
rank 0 will write, and so on.

Rank N can compute its offset by knowing how much data the preceding
N-1 processes want to write. The most efficient way to collect this is
to use MPI_Scan to compute a running sum of the per-process counts:

http://www.mpi-forum.org/docs/mpi-11-html/node84.html#Node84
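
As a rough sketch (not from your code; my_count and the other names
are just placeholders): note that MPI_Scan is an inclusive scan, so
each rank subtracts its own contribution to get its starting offset.

    /* bytes this rank will write; illustrative value */
    long long my_count = nbytes_this_rank;
    long long scan_sum = 0;

    /* inclusive prefix sum: rank i receives count(0) + ... + count(i) */
    MPI_Scan(&my_count, &scan_sum, 1, MPI_LONG_LONG, MPI_SUM,
             MPI_COMM_WORLD);

    /* subtract this rank's own count: rank 0 starts at offset 0,
       rank 1 right after rank 0's data, and so on */
    MPI_Offset my_offset = (MPI_Offset)(scan_sum - my_count);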

Once you've computed these offsets, MPI_File_write_at_all has enough
information to carry out a collective write of the data.
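
Putting the two pieces together, here's a minimal self-contained
sketch of what I mean (the file name, buffer contents, and per-rank
sizes are all made up for illustration):

    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* each rank writes a different amount of data (made-up sizes) */
        long long my_count = 1024 * (rank + 1);
        char *buf = malloc(my_count);
        memset(buf, 'a' + (rank % 26), my_count);

        /* inclusive prefix sum of the per-rank byte counts */
        long long scan_sum = 0;
        MPI_Scan(&my_count, &scan_sum, 1, MPI_LONG_LONG, MPI_SUM,
                 MPI_COMM_WORLD);

        /* subtract this rank's own contribution: rank 0 writes at 0,
           rank 1 right after rank 0's data, and so on */
        MPI_Offset my_offset = (MPI_Offset)(scan_sum - my_count);

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* collective write: same file layout as MPI_File_write_ordered,
           but every rank already knows where its data goes */
        MPI_File_write_at_all(fh, my_offset, buf, (int)my_count,
                              MPI_BYTE, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }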

==rob

-- 
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B