
Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] ROMIO Podcast
From: Rob Latham (robl_at_[hidden])
Date: 2012-02-21 12:06:34


On Mon, Feb 20, 2012 at 06:11:53PM -0500, Rayson Ho wrote:
> BTW, since most of the interviewees are opensource project
> maintainers, next time can you ask them how much external contribution
> they get (%), and who are the main external contributors (students?
> HPC labs? Industry?), and how do they handle external contributions
> (need copyright assignment?). And how do they handle testing, and
> performance regression...

external contributions: I wish I had more quantitative numbers for
you. I don't develop on a Lustre system, so we were grateful to the
community for contributing and testing an improved Lustre driver for
ROMIO. Weikuan Yu started work on a Lustre driver while he worked at
Oak Ridge, then Sun/CFS contributed some more improvements. Pascal
Deveze from Bull and Martin Pokorny from NRAO helped carry it over
the finish line, contributing some important bug fixes and some nice
little performance tweaks.

IBM has been a great industry partner, contributing improvements to
all of MPICH2. For BlueGene, IBM contributed a block-aligned
collective I/O implementation and an I/O aggregation strategy that
works better for the BlueGene topology. They also contributed what
we are calling "64-bit MPI_Aint," which works around a problem with
MPI file views on platforms where the address integer is only 32 bits
wide.

Our best academic partner -- and really at this point we should
consider them co-maintainers -- is Northwestern University. I've
worked with Wei-Keng for a decade and am always happy to see a
question, suggestion, or patch from him in my mailbox. Northwestern
has provided us with some great students over the years as well. Avery
Ching and Kenin Coloma did a lot of good work on MPI-IO before the Bay
Area lured them away to industry.

ROMIO's testing and performance regression framework is honestly a
shambles. Part of that is a challenge with the MPI-IO interface
itself. For MPI messaging, once you have exercised the API you have
pretty much covered everything. MPI-IO, though, introduces hints. These
hints are great for tuning but make the testing "surface area" a lot
larger. We are probably going to have a chance to improve things
greatly with some recently funded proposals.

==rob

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA