Message: 2
Date: Tue, 18 Oct 2005 08:48:45 -0600
From: "Tim S. Woodall" <twoodall@lanl.gov>
Subject: Re: [O-MPI users] Hpl Bench mark and Openmpi rc3 (Jeff Squyres)
To: Open MPI Users <users@open-mpi.org>
Message-ID: <43550B4D.6080509@lanl.gov>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> Hi Jeff,
>    I installed two slightly different versions of Open MPI: one in
> /opt/openmpi (or else I would get the gfortran error) and the other in
> /home/allan/openmpi.
> However, I do not think that is the problem, as the path names are
> specified in the .bashrc and .bash_profile files of the /home/allan directory.
> I also log in as user allan, who is not a superuser. On running Open
> MPI with HPL I use the following command line:
> a1> mpirun -mca pls_rsh_orted /home/allan/openmpi/bin/orted -hostfile aa 
> -np 16 ./xhpl
> from the directory where xhpl resides, such as /homer/open/bench, and I
> use the -mca option pls_rsh_orted because it otherwise comes up with an
> error that it cannot find the ORTED daemon on machines a1, a2, etc. That
> is probably a configuration error. However, the commands above and the
> setup described work fine and there are no errors in the HPL.out file,
> except that it is slow.
> I use an ATLAS BLAS library for building xhpl from hpl.tar.gz. The make
> file for HPL uses the ATLAS libs and the Open MPI mpicc compiler for
> both compilation and linking, and I have zeroed out the MPI macro paths
> in Make.open (that's what I renamed the HPL makefile) for make arch=open
> in the hpl directory. Please find attached the ompi_info -all file as
> requested. Thank you very much:
> Allan
> 
> 
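Regarding the "cannot find the ORTED daemon" error above: a common alternative to passing -mca pls_rsh_orted is to make the Open MPI bin and lib directories visible to the non-interactive shells that rsh/ssh start on the remote nodes. A minimal sketch, assuming the /home/allan/openmpi install prefix from the message and bash on every node (remotely started bash shells typically read ~/.bashrc rather than ~/.bash_profile, so the lines belong there):

export PATH=/home/allan/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/home/allan/openmpi/lib:$LD_LIBRARY_PATH

With orted reachable through PATH on every machine, the -mca pls_rsh_orted override should no longer be necessary.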
  

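For reference, the HPL build described above normally comes down to a few variables in the renamed Make.open. An illustrative excerpt only, assuming ATLAS is installed under /usr/local/atlas (paths and library names are placeholders; the MPI path macros are left empty because mpicc supplies the MPI include and link flags itself):

ARCH    = open
CC      = /home/allan/openmpi/bin/mpicc
LINKER  = /home/allan/openmpi/bin/mpicc
MPdir   =
MPinc   =
MPlib   =
LAdir   = /usr/local/atlas/lib
LAinc   =
LAlib   = $(LAdir)/libcblas.a $(LAdir)/libatlas.a

The benchmark is then built from the top-level hpl directory with make arch=open, as described in the message above.
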
We've done Linpack runs recently w/ InfiniBand, which result in performance
comparable to MVAPICH, but not w/ the TCP port. Can you try running w/ an
earlier version? Specify this on the command line:

-mca pml teg

I'm interested in seeing if there is any performance difference.

Thanks,
Tim
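
The pml parameter Tim suggests does not have to go on the command line every time. A minimal sketch, assuming Open MPI's per-user MCA parameter file and environment-variable conventions:

# in $HOME/.openmpi/mca-params.conf
pml = teg

or, equivalently, exported in the shell before invoking mpirun:

export OMPI_MCA_pml=teg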


------------------------------


Hi Tim,
  I tried the same cluster (16-node x86) with the switches -mca pml teg and I get good performance of
24.52 GFlops at N=22500 and block size NB=120.
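For context, those figures correspond to a handful of lines in the HPL.dat input file. An illustrative excerpt only (not the complete file), assuming the 16 processes are arranged as a 4 x 4 process grid:

22500        Ns
120          NBs
4            Ps
4            Qs

The remaining HPL.dat lines (problem counts, broadcast and panel-factorization parameters, and so on) are omitted here.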
My command line now looks like:
a1> mpirun -mca pls_rsh_orted /home/allan/openmpi/bin/orted -mca pml teg -hostfile aa -np 16 ./xhpl
hostfile = aa, containing the addresses of the 16 machines.
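A minimal sketch of what such a hostfile might contain, assuming the machines are named a1 through a16 as in the earlier message (one hostname or IP address per line):

a1
a2
a3
...
a16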
I am using a Netgear GS116 16-port Gigabit Ethernet switch with Gnet Realtek Gigabit Ethernet cards.
Why, please, do these command-line switches (-mca pml teg) make such a difference? It's 2.6 times the performance in GFlops compared with what I was getting without them.
I tried version rc3 and not an earlier version.
Thank you very much for your assistance!
Best wishes
Allan