
Open MPI User's Mailing List Archives


From: Allan Menezes (amenezes007_at_[hidden])
Date: 2005-10-13 00:25:07

   I have a 16-node cluster of x86 machines with FC3 running on the head
node. I used a beta version of OSCAR 4.2 to put the cluster together;
it uses /home/allan as the NFS directory.

I tried MPICH2 1.02p1 and got a benchmark of approximately 26 GFlops
with it. With Open MPI 1.0RC3, having set LD_LIBRARY_PATH in .bashrc
and added the /opt/openmpi/bin path in .bash_profile in my home
directory, I cannot seem to get performance beyond approximately
9 GFlops. The block size that gave the best results with MPICH2 was
120. With Open MPI, for N = 22000, I have to use block sizes (NB) of
10-11 to get 9 GFlops; for larger block sizes it is worse. I used the
same N = 22000 for MPICH2. The nodes are connected through a 16-port
Netgear Gigabit Ethernet switch with Realtek 8169 Gigabit Ethernet
cards. Can anyone tell me why the performance with Open MPI is so low
compared to MPICH2 1.02p1?
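[Editor's note: for readers reproducing this setup, a minimal sketch of the shell environment the message describes. The /opt/openmpi/bin path comes from the post; the /opt/openmpi/lib library directory is an assumption based on the usual Open MPI install layout.]

```shell
# Environment setup as described in the post; add these lines to
# ~/.bashrc (and/or ~/.bash_profile) on the NFS-shared home directory.
# /opt/openmpi/lib is assumed -- adjust if your install prefix differs.
export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH

# Sanity checks: confirm both directories are actually on the paths
echo "$PATH" | grep -q '/opt/openmpi/bin' && echo "PATH ok"
echo "$LD_LIBRARY_PATH" | grep -q '/opt/openmpi/lib' && echo "LD_LIBRARY_PATH ok"
```

Because /home is NFS-mounted on all 16 nodes, editing these files once on the head node makes the settings visible cluster-wide.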
Thanking you in anticipation,
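[Editor's note: the problem size and block size discussed above are set in the HPL input file. A sketch of the relevant HPL.dat lines, assuming the standard Linpack input layout and using the values quoted in the post (N = 22000; NB = 120 for MPICH2 versus 10-11 for Open MPI), would be:]

```
1            # of problems sizes (N)
22000        Ns
1            # of NBs
120          NBs     (the MPICH2-tuned value; the poster found 10-11 best under Open MPI)
```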