users-request@open-mpi.org wrote:

Today's Topics:

   1. Re: Hpl Bench mark and Openmpi rc3 (Jeff Squyres)


----------------------------------------------------------------------

Message: 1
Date: Mon, 17 Oct 2005 10:16:39 -0400
From: Jeff Squyres <jsquyres@open-mpi.org>
Subject: Re: [O-MPI users] Hpl Bench mark and Openmpi rc3
To: Open MPI Users <users@open-mpi.org>
Message-ID: <8557a377fe1f131e23274e10e5f6e250@open-mpi.org>
Content-Type: text/plain; charset=US-ASCII; format=flowed

On Oct 13, 2005, at 1:25 AM, Allan Menezes wrote:

> I have a 16 node cluster of x86 machines with FC3 running on the head
> node. I used a beta version of OSCAR 4.2 for putting together the
> cluster. It uses /home/allan as the NFS directory.

Greetings Allan.  Sorry for the delay in replying -- we were all at an 
Open MPI working meeting last week, and the schedule got a bit hectic.

Your setup sounds fine.

> I tried MPICH2 v1.0.2p1 and got a benchmark of approximately 26 GFlops.
> With Open MPI 1.0rc3, having set the LD_LIBRARY_PATH in .bashrc and the
> /opt/openmpi/bin path in .bash_profile in the home directory

Two quick notes here:

- Open MPI's mpirun supports the "--prefix" option, which obviates the 
need to set these variables in your .bashrc (although setting them 
permanently does make things easier in the long term -- you don't need 
to specify --prefix every time).  See the FAQ for more details on the 
--prefix option:

	http://www.open-mpi.org/faq/?category=running#mpirun-prefix
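
For example (the install path and hostfile name below are only 
placeholders -- adjust them to your setup), an HPL run using --prefix 
would look something like:

	mpirun --prefix /opt/openmpi -np 16 -hostfile myhosts ./xhpl

This tells mpirun where the Open MPI binaries and libraries live on the 
remote nodes, so PATH and LD_LIBRARY_PATH do not have to be set in the 
shell startup files.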

- OSCAR makes use of environment modules; it includes setup to 
differentiate between the different MPI implementations that OSCAR 
installs.  You can trivially add a modulefile for Open MPI and then 
use the "switcher" command to easily switch between all the MPI 
implementations on your OSCAR cluster (once we hit 1.0, we anticipate 
having an OSCAR package).
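
For example, assuming you have added a modulefile for Open MPI named 
something like "openmpi-1.0rc3" (the name here is just an example), 
switching the default MPI would look roughly like:

	switcher mpi --list
	switcher mpi = openmpi-1.0rc3

(then log out and back in so the new default takes effect).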

  
> I cannot seem to get a performance beyond approximately 9 GFlops. The 
> block size for MPICH2 was 120 for best results. For Open MPI, with 
> N = 22000, I have to use block sizes of 10-11 to get a performance of 
> 9 GFlops; otherwise, for larger block sizes (NB), it's worse. I used 
> the same N = 22000 for MPICH2, and I have a 16 port Netgear Gigabit 
> Ethernet switch with Realtek 8169 Gigabit Ethernet cards. Can anyone 
> tell me why the performance with Open MPI is so low compared to 
> MPICH2 v1.0.2p1?

There should clearly not be such a wide disparity in performance here; 
we don't see this kind of difference in our own internal testing.
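
Just to make sure we are comparing apples to apples: for the problem 
size and block sizes you describe, the relevant N/NB lines of HPL.dat 
should look something like the following (other lines omitted):

	1            # of problems sizes (N)
	22000        Ns
	1            # of NBs
	120          NBs

with 120 replaced by 10 or 11 for the Open MPI runs you mention.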

Can you send the output of "ompi_info --all"?

  

Hi Jeff,
   I installed two slightly different versions of Open MPI: one in 
/opt/openmpi (otherwise I would get the gfortran error) and the other in 
/home/allan/openmpi. However, I do not think that is the problem, as the 
path names are specified in the .bashrc and .bash_profile files in the 
/home/allan directory. I also log in as user allan, who is not a 
superuser.

When running Open MPI with HPL, I use the following command line:
a1> mpirun -mca pls_rsh_orted /home/allan/openmpi/bin/orted -hostfile aa -np 16 ./xhpl
from the directory where xhpl resides, such as /homer/open/bench. I use 
the -mca pls_rsh_orted option because otherwise mpirun comes up with an 
error that it cannot find the ORTED daemon on machines a1, a2, etc. That 
is probably a configuration error. However, the command above and the 
setup described work fine, and there are no errors in the HPL.out file, 
except that it is slow.
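
(For what it's worth, a quick way to check whether orted is on the 
default remote PATH of a compute node, say a2, would be something like:

a1> ssh a2 which orted

If that prints nothing, mpirun cannot find the daemon without the 
pls_rsh_orted setting or the --prefix option.)
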
I use an ATLAS BLAS library for building xhpl from hpl.tar.gz. The HPL 
makefile uses the ATLAS libs and the Open MPI mpicc compiler for both 
compilation and linking, and I have zeroed out the MPI macro paths in 
Make.open (that's what I renamed the HPL makefile) for "make arch=open" 
in the hpl directory; the relevant lines are sketched in the P.S. below. 
Please find attached the ompi_info --all output, as requested. Thank you 
very much,
Allan
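
P.S. For reference, the MPI and BLAS sections of my Make.open look 
roughly like the following (the paths shown here are illustrative 
placeholders, not the exact ones on my system):

MPdir        =
MPinc        =
MPlib        =
CC           = /home/allan/openmpi/bin/mpicc
CCFLAGS      = $(HPL_DEFS) -O3
LINKER       = /home/allan/openmpi/bin/mpicc
LAdir        = /usr/local/atlas/lib
LAinc        =
LAlib        = $(LAdir)/libcblas.a $(LAdir)/libatlas.a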