
Open MPI User's Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2005-10-17 09:16:39


On Oct 13, 2005, at 1:25 AM, Allan Menezes wrote:

> I have a 16-node cluster of x86 machines with FC3 running on the head
> node. I used a beta version of OSCAR 4.2 to put the cluster
> together. It uses /home/allan as the NFS directory.

Greetings Allan. Sorry for the delay in replying -- we were all at an
Open MPI working meeting last week, and the schedule got a bit hectic.

Your setup sounds fine.

> I tried MPICH2 v1.02p1 and got a benchmark of approximately 26 GFlops.
> With Open MPI 1.0RC3, having set the LD_LIBRARY_PATH in .bashrc and the
> /opt/openmpi/bin path in .bash_profile in the home directory

Two quick notes here:

- Open MPI's mpirun supports the "--prefix" option, which obviates
setting these variables in your .bashrc (although setting them
permanently makes things easier in the long term -- you don't need
to specify --prefix every time). See the FAQ for more details on the
--prefix option:

        http://www.open-mpi.org/faq/?category=running#mpirun-prefix
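
For example (just a sketch -- adjust the paths to wherever you
actually installed Open MPI, and substitute your own hostfile and
HPL binary names):

        # per-invocation, no shell startup file changes required
        mpirun --prefix /opt/openmpi -np 16 --hostfile myhosts ./xhpl

        # ...or permanently, in ~/.bashrc on every node:
        export PATH=/opt/openmpi/bin:$PATH
        export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH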

- OSCAR makes use of environment modules; it ships the setup needed to
differentiate between the different MPI implementations that OSCAR
installs. You can trivially add a modulefile for Open MPI and then
use the "switcher" command to switch easily between all the MPI
implementations on your OSCAR cluster (once we hit 1.0, we
anticipate having an OSCAR package).
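
If you want to try the modules route before there is an official OSCAR
package, a minimal modulefile along these lines is usually all it
takes (untested sketch; the /opt/openmpi install path and the
"openmpi" module name are just placeholders):

        #%Module1.0
        ## modulefile for Open MPI
        prepend-path PATH            /opt/openmpi/bin
        prepend-path LD_LIBRARY_PATH /opt/openmpi/lib

Once that is registered with switcher, you should be able to make it
your default MPI with something like "switcher mpi = openmpi" (see
"switcher --help" for the exact syntax on your OSCAR version).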

> I cannot seem to get performance beyond approximately 9 GFlops. The
> block size for MPICH2 was 120 for best results. For Open MPI with
> N = 22000 I have to use block sizes of 10-11 to get a performance of
> 9 GFlops; otherwise, for larger block sizes (NB), it's worse. I used
> the same N = 22000 for MPICH2 and have a 16-port Netgear Gigabit
> Ethernet switch with Realtek 8169 Gigabit Ethernet cards. Can anyone
> tell me why the performance with Open MPI is so low compared to
> MPICH2 v1.02p1?

There should clearly not be such a wide disparity in performance here;
we don't see this kind of difference in our own internal testing.

Can you send the output of "ompi_info --all"?
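
Something along these lines should capture everything (the output is
fairly long, so feel free to compress it before attaching):

        ompi_info --all > ompi_info.out
        gzip ompi_info.out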

-- 
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/