
Open MPI User's Mailing List Archives


From: Brock Palen (brockp_at_[hidden])
Date: 2007-04-18 08:58:22


Look here:

http://www.open-mpi.org/faq/?category=tuning#selecting-components

General idea

mpirun -np 2 --mca btl ^tcp     (to exclude ethernet)

Replace ^tcp with ^openib (or ^mvapi) to exclude infiniband.
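
For example, to force the pingpong run quoted below onto gigabit only
(a sketch; the exact InfiniBand component name, openib vs. mvapi,
depends on how your Open MPI was built):

   # check which BTL components were built in
   ompi_info | grep btl

   # rerun the benchmark with the InfiniBand BTL excluded
   # (use ^mvapi instead if that is the component listed)
   ~/openmpi-1.2/bin/mpirun --mca btl ^openib --bynode -np 2 \
       --hostfile ~/openmpi.hosts.80 ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong

If ompi_info lists neither openib nor mvapi, then no InfiniBand BTL was
built in at all.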

Brock Palen
Center for Advanced Computing
brockp_at_[hidden]
(734)936-1985

On Apr 18, 2007, at 8:44 AM, stephen mulcahy wrote:

> Hi,
>
> I'm currently conducting some testing on a system with gigabit and
> infiniband interconnects. I'm keen to baseline openmpi over both the
> gigabit and infiniband interconnects.
>
> I've compiled it with defaults and run the Intel MPI Benchmarks
> PingPong as follows to get an idea of latency and bandwidth between
> nodes on the given interconnect.
>
> ~/openmpi-1.2/bin/mpirun --bynode -np 2 --hostfile ~/openmpi.hosts.80 \
>     ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong
>
> For some reason, it looks like openmpi is using the infiniband
> interconnect rather than the gigabit ... or the system I'm testing on
> has an amazing latency! :)
>
> #---------------------------------------------------
> # Benchmarking PingPong
> # #processes = 2
> #---------------------------------------------------
> #bytes  #repetitions  t[usec]  Mbytes/sec
>      0          1000     1.63        0.00
>      1          1000     1.54        0.62
>      2          1000     1.55        1.23
>      4          1000     1.54        2.47
>      8          1000     1.56        4.90
>     16          1000     1.86        8.18
>     32          1000     1.94       15.75
>     64          1000     1.92       31.77
>    128          1000     1.99       61.44
>    256          1000     2.25      108.37
>    512          1000     2.70      180.88
>   1024          1000     3.64      267.99
>   2048          1000     5.60      348.89
>
> I read some of the FAQs and noted that OpenMPI prefers the faster
> available interconnect. In an effort to force it to use the gigabit
> interconnect I ran it as follows,
>
> ~/openmpi-1.2/bin/mpirun --mca btl tcp,self --bynode -np 2 \
>     --hostfile ~/openmpi.hosts.80 ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong
>
> and
>
> ~/openmpi-1.2/bin/mpirun --mca btl_tcp_if_include eth0 --mca btl tcp,self \
>     --bynode -np 2 --hostfile ~/openmpi.hosts.80 \
>     ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong
>
> Neither one resulted in a significantly different benchmark.
>
> Am I doing something obviously wrong in how I invoke openmpi here, or
> should I expect this to run over gigabit? Is there an option to mpirun
> that will tell me which interconnect it is actually using?
>
> I took a look at the ompi_info output but couldn't see any indication
> that infiniband support was compiled in, so I'm a little puzzled by
> this, but the results speak for themselves.
>
> Any advice on how to force the use of gigabit would be welcomed (I'll
> use the infiniband interconnect as well, but I'm trying to determine
> the performance to be had from infiniband for our model, so I need to
> run it with both).
>
> Thanks,
>
> -stephen
> --
> Stephen Mulcahy, Applepie Solutions Ltd., Innovation in Business
> Center, GMIT, Dublin Rd, Galway, Ireland. +353.91.751262
> http://www.aplpi.com
> Registered in Ireland (289353) (5 Woodlands Avenue, Renmore, Galway)
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users