
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] Open MPI v1.3.3rc1 has escaped
From: Peter Kjellstrom (cap_at_[hidden])
Date: 2009-07-10 10:16:51


On Friday 10 July 2009, Jeff Squyres wrote:
> On Jul 10, 2009, at 6:54 AM, Peter Kjellstrom wrote:
> > On Friday 10 July 2009, Jeff Squyres wrote:
> > > http://www.open-mpi.org/software/ompi/v1.3/
> > >
> > > Please test!
> >
> > Built and ran just like(*) 1.3.2 on my limited tests (that is,
> > worked quite
> > well)
> >
> > OS:CentOS-5.3.x86_64 with its own OFED
> > HW:ConnectX-DDR on a Nehalem dual-quad platform
> > Size:4 nodes
> > Compilers: Intel-11.0-074 (built with C/C++/F90, tested C and F90)
> >
> > (*) It seems to still have the problem reported in:
>
> Thanks for the testing! We haven't updated the default tuned settings
> -- I don't think it'll make the 1.3.3 cutoff. There's a number of
> other important fixes in 1.3.3 that we wanted to get out there. 1.3.4
> will come along presently.

Right, just don't forget about it. With the current settings, performance for
certain message sizes is truly bad. The thread I referenced was about CPMD
performance, so this is not just a synthetic thing. IMHO the default should go
from Bruck to pairwise (without ever going near basic linear).
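For anyone who wants to work around this in the meantime, the tuned collective
component's algorithm choice can be overridden at run time with MCA parameters.
A sketch (the integer-to-algorithm mapping is version-dependent, and `./my_app`
is just a placeholder, so verify the numbering against `ompi_info` on your own
build):

```shell
# List the tuned-collective parameters, including which integer selects
# which alltoall algorithm on this particular Open MPI build:
ompi_info --param coll tuned

# Enable dynamic rules and force a specific alltoall algorithm.
# The "2" below is an assumption about the numbering -- check it against
# the ompi_info output before relying on it:
mpirun --mca coll_tuned_use_dynamic_rules 1 \
       --mca coll_tuned_alltoall_algorithm 2 \
       -np 32 ./my_app
```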

The impact with the current config is that alltoall (maybe other collectives
are bad too, but alltoall is the only one I've tested) performs very poorly in
the range from roughly 100 bytes up to a few Kbytes (at least for nranks >= 32).
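A message-size sweep like the one above is easy to reproduce with a standard
collectives benchmark (this assumes the OSU micro-benchmarks are installed; the
Intel MPI Benchmarks' `IMB-MPI1 Alltoall` test would serve equally well):

```shell
# osu_alltoall sweeps message sizes and reports average alltoall latency
# per size; running on 32+ ranks should expose the problematic
# ~100 B to few-KB range:
mpirun -np 32 ./osu_alltoall
```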

Keep up the good work,
 Peter