Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] IO performance
From: Tom Rosmond (rosmond_at_[hidden])
Date: 2012-02-06 11:39:11


Rob,

Thanks, these are the kinds of suggestions I was looking for. I will try
them, but I will have to twist some arms to get the 1.5 upgrade; I
might just install a private copy for my tests.

T. Rosmond

On Mon, 2012-02-06 at 10:21 -0600, Rob Latham wrote:
> On Fri, Feb 03, 2012 at 10:46:21AM -0800, Tom Rosmond wrote:
> > With all of this, here is my MPI-related question. I recently added an
> > option to use MPI-IO to do the heavy IO lifting in our applications. I
> > would like to know the relative importance of the dedicated MPI
> > network vis-a-vis the GPFS network for typical MPI-IO collective reads
> > and writes. I assume there must be some hand-off of data between the
> > networks during the process, but how is it done, and are there any
> > rules to help understand it? Any insights would be welcome.
>
> There's not really a handoff. MPI-IO on GPFS will call a POSIX read()
> or write() system call after possibly doing some data massaging. That
> system call sends the data over the storage network.
>
> If you've got a fast communication network but a slow storage network,
> then some of the MPI-IO optimizations will need to be adjusted a bit.
> Seems like you'd want to really beef up the "cb_buffer_size".
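>
> A minimal sketch of passing that hint in C (the 64 MB value and the
> file name are just placeholders; tune them for your system):
>
>   #include <mpi.h>
>
>   int main(int argc, char **argv)
>   {
>       MPI_Info info;
>       MPI_File fh;
>
>       MPI_Init(&argc, &argv);
>
>       MPI_Info_create(&info);
>       /* collective buffering buffer size, in bytes, passed as a string */
>       MPI_Info_set(info, "cb_buffer_size", "67108864");
>
>       MPI_File_open(MPI_COMM_WORLD, "output.dat",
>                     MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
>       MPI_Info_free(&info);
>
>       /* ... collective reads/writes against fh go here ... */
>
>       MPI_File_close(&fh);
>       MPI_Finalize();
>       return 0;
>   }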
>
> For GPFS, the big thing MPI-IO can do for you is align writes to
> GPFS block boundaries. See my next point.
>
> > P.S. I am running with Open MPI 1.4.2.
>
> If you upgrade to something in the 1.5 series you will get some nice
> ROMIO optimizations that will help you out with writes to GPFS if
> you set the "striping_unit" hint to the GPFS block size.
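>
> For example (a sketch; the 4 MB value is an assumption, match your
> actual block size):
>
>   /* GPFS block size in bytes, passed as a string */
>   MPI_Info_set(info, "striping_unit", "4194304");
>
> where 'info' is created and passed to MPI_File_open as in the earlier
> sketch. I believe "mmlsfs <device> -B" will report the real block size.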
>
> ==rob
>