Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Best way to reduce 3D array
From: Rob Latham (robl_at_[hidden])
Date: 2010-04-05 15:09:09

On Tue, Mar 30, 2010 at 11:51:39PM +0100, Ricardo Reis wrote:
> If using the master/slave IO model, would it be better to cycle
> through all the processes, and each one would write its part of the
> array into the file? This file would be opened in "stream" mode...
> like
> do p=0,nprocs-1
>   if(my_rank.eq.p)then
>     openfile (append mode)
>     write_to_file
>     closefile
>   endif
>   call MPI_Barrier(world,ierr)
> enddo

Note that there's no guarantee of the order here, though. Nothing
prevents rank 30 from hitting that loop before rank 2 does. To ensure
order, you could MPI_SEND a token around a ring of MPI processes.
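The token-ring idea can be sketched as follows. This is not MPI code: it is a plain Python simulation in which threads stand in for ranks and per-rank queues stand in for MPI_RECV/MPI_SEND of the token; the function and variable names are made up for illustration. Each rank blocks until it receives the token, does its "write", then forwards the token to the next rank, so the writes happen in strict rank order regardless of when each rank reaches the loop.

```python
import threading
import queue

def ordered_writes(nprocs):
    # One queue per rank stands in for an MPI_RECV of the token.
    mailboxes = [queue.Queue() for _ in range(nprocs)]
    order = []  # records the sequence in which ranks "wrote"

    def rank(r):
        mailboxes[r].get()           # block until the token arrives
        order.append(r)              # "write my part of the file"
        if r + 1 < nprocs:
            mailboxes[r + 1].put(1)  # pass the token to the next rank

    threads = [threading.Thread(target=rank, args=(r,)) for r in range(nprocs)]
    for t in threads:
        t.start()
    mailboxes[0].put(1)              # rank 0 starts with the token
    for t in threads:
        t.join()
    return order

print(ordered_writes(8))  # → [0, 1, 2, 3, 4, 5, 6, 7]
```

Because each rank's "write" happens before it releases the token, the ordering is enforced by the message itself, not by how quickly each rank entered the loop.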

One approach might be to use MPI_SCAN to collect offsets (the amount
of data each process will write) and then do an MPI_FILE_WRITE_AT_ALL.
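The offset arithmetic behind that approach can be shown without MPI. MPI_SCAN computes an inclusive prefix sum across ranks, so subtracting each rank's own byte count gives the exclusive offset at which that rank should issue its MPI_FILE_WRITE_AT_ALL. A minimal sketch, with the function name and the example byte counts being hypothetical:

```python
from itertools import accumulate

def file_offsets(counts):
    # MPI_Scan yields an inclusive prefix sum on each rank;
    # subtracting the rank's own count gives its exclusive file offset.
    inclusive = list(accumulate(counts))
    return [inc - c for inc, c in zip(inclusive, counts)]

counts = [40, 10, 25, 5]          # bytes each rank will write
print(file_offsets(counts))       # → [0, 40, 50, 75]
```

(MPI also provides MPI_EXSCAN, which produces the exclusive sum directly.) Each rank would then pass its offset to MPI_FILE_WRITE_AT_ALL, letting the MPI-IO layer perform all writes collectively.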

If you are stuck with NFS, then yes, send to master.


Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA