
Subject: Re: [OMPI users] a question about [MPI]IO on systems without network filesystem
From: Richard Treumann (treumann_at_[hidden])
Date: 2010-10-19 15:10:03


As Rob mentions, there are three capabilities to consider:

1) The process (or processes) that will do the I/O are members of the file
handle's hidden communicator and the call is collective.

2) The process (or processes) that will do the I/O are members of the
file handle's hidden communicator, but the call is non-collective and made
by a remote rank.

3) The process (or processes) that will do the I/O are not members of the
file handle's hidden communicator at all. The MPI_COMM_SELF mention would
probably be this third case.

Cases 2 and 3 are harder but still implementation options; the standard
neither requires nor prohibits them.
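
To make the membership distinction concrete, here is a minimal sketch in
C; the file names are placeholders, and the comments restate the cases
above rather than any particular implementation's behavior:

/* Membership in the file handle's hidden communicator is determined by
 * the communicator passed to MPI_File_open. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    char fname[32];
    MPI_File fh_world, fh_self;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Cases 1 and 2: every rank is a member of the hidden communicator,
     * so the implementation may pick any of them as the disk I/O agent. */
    MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh_world);
    MPI_File_close(&fh_world);

    /* Case 3: only the calling rank is a member.  A process on the node
     * that actually owns the disk is not, so forwarding the I/O to that
     * process is purely an implementation option. */
    snprintf(fname, sizeof(fname), "private.%d.dat", rank);
    MPI_File_open(MPI_COMM_SELF, fname,
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh_self);
    MPI_File_close(&fh_self);

    MPI_Finalize();
    return 0;
}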

Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

From: Rob Latham <robl_at_[hidden]>
To: Open MPI Users <users_at_[hidden]>
Date: 10/19/2010 02:47 PM
Subject: Re: [OMPI users] a question about [MPI]IO on systems without network filesystem
Sent by: users-bounces_at_[hidden]

On Thu, Sep 30, 2010 at 09:00:31AM -0400, Richard Treumann wrote:
> It is possible for MPI-IO to be implemented in a way that lets a single
> process or the set of processes on a node act as the disk I/O agents for
> the entire job, but someone else will need to tell you if OpenMPI can do
> this. I think OpenMPI is built on the ROMIO MPI-IO implementation, and
> based on my outdated knowledge of ROMIO, I would be a bit surprised if it
> has this option.

SURPRISE!!! ROMIO has been able to do this since about 2002 (it was
my first ROMIO project when I came to Argonne).

Now, if you do independent I/O, or you do I/O on MPI_COMM_SELF, then ROMIO
can't really do anything for you.

But...
- if you use collective I/O
- and you set the "cb_config_list" hint to contain the machine name of the
  one node with a disk (or, if everyone has a disk, pick one to be the
  master)
- and you set the "romio_no_indep_rw" hint to "enable"

then two things will happen. First, ROMIO will enter "deferred open"
mode, meaning only the designated I/O aggregators will open the file.
Second, your collective MPI_File_*_all calls will all go through the
one node you gave in the cb_config_list.
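
For reference, here is a minimal sketch of that recipe in C. The
hostname "ionode" in the cb_config_list, the file name, and the fixed
64-byte records are assumptions for illustration, not anything ROMIO
requires:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    char buf[64];
    MPI_Info info;
    MPI_File fh;
    MPI_Offset offset;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Route all collective I/O through the single node with a disk. */
    MPI_Info_create(&info);
    MPI_Info_set(info, "cb_config_list", "ionode:1");  /* hostname is a placeholder */
    MPI_Info_set(info, "romio_no_indep_rw", "enable"); /* triggers deferred open */

    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* Every rank participates in the collective call; the aggregator on
     * "ionode" is the only process that touches the file system. */
    snprintf(buf, sizeof(buf), "hello from rank %d\n", rank);
    offset = (MPI_Offset)rank * (MPI_Offset)sizeof(buf);
    MPI_File_write_at_all(fh, offset, buf, (int)sizeof(buf), MPI_CHAR,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}

MPI_File_write_all would work the same way after setting a file view;
MPI_File_write_at_all with explicit offsets just keeps the sketch short.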

Try it, and whether or not it works, I'd like to hear.

==rob

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA