Open MPI User's Mailing List Archives

From: stephen mulcahy (smulcahy_at_[hidden])
Date: 2007-04-18 09:48:18


Hi,

Thanks. I'd actually come across that and tried it already, but just to
be sure, here's what I just tried:

[smulcahy_at_foo ~]$ ~/openmpi-1.2/bin/mpirun -v --display-map --mca btl ^openib,mvapi
   --bynode -np 2 --hostfile ~/openmpi.hosts.2only ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong

and here's the output

[foo:31628] Map for job: 1 Generated by mapping mode: bynode
         Starting vpid: 0 Vpid range: 2 Num app_contexts: 1
         Data for app_context: index 0 app: /home/smulcahy/IMB/IMB-MPI1-openmpi
                 Num procs: 2
                 Argv[0]: /home/smulcahy/IMB/IMB-MPI1-openmpi
                 Argv[1]: -npmin
                 Argv[2]: 2
                 Argv[3]: pingpong
                 Env[0]: OMPI_MCA_btl=^openib,mvapi
                 Env[1]: OMPI_MCA_rmaps_base_display_map=1
                 Env[2]: OMPI_MCA_rds_hostfile_path=/home/smulcahy/openmpi.hosts.2only
                 Env[3]: OMPI_MCA_orte_precondition_transports=8a30584db0828119-ebf73f7c6c29abc1
                 Env[4]: OMPI_MCA_rds=proxy
                 Env[5]: OMPI_MCA_ras=proxy
                 Env[6]: OMPI_MCA_rmaps=proxy
                 Env[7]: OMPI_MCA_pls=proxy
                 Env[8]: OMPI_MCA_rmgr=proxy
                 Working dir: /home/smulcahy (user: 0)
                 Num maps: 0
         Num elements in nodes list: 2
         Mapped node:
                 Cell: 0 Nodename: c0-12 Username: NULL
                 Daemon name:
                         Data type: ORTE_PROCESS_NAME Data Value: NULL
                 Oversubscribed: False Num elements in procs list: 1
                 Mapped proc:
                         Proc Name:
                         Data type: ORTE_PROCESS_NAME Data Value: [0,1,0]
                         Proc Rank: 0 Proc PID: 0 App_context index: 0

         Mapped node:
                 Cell: 0 Nodename: c0-13 Username: NULL
                 Daemon name:
                         Data type: ORTE_PROCESS_NAME Data Value: NULL
                 Oversubscribed: False Num elements in procs list: 1
                 Mapped proc:
                         Proc Name:
                         Data type: ORTE_PROCESS_NAME Data Value: [0,1,1]
                         Proc Rank: 1 Proc PID: 0 App_context index: 0
#---------------------------------------------------
# Intel (R) MPI Benchmark Suite V3.0, MPI-1 part
#---------------------------------------------------
# Date : Wed Apr 18 06:45:27 2007
# Machine : x86_64
# System : Linux
# Release : 2.6.9-42.0.2.ELsmp
# Version : #1 SMP Wed Aug 23 13:38:27 BST 2006
# MPI Version : 2.0
# MPI Thread Environment: MPI_THREAD_SINGLE

#
# Minimum message length in bytes: 0
# Maximum message length in bytes: 4194304
#
# MPI_Datatype : MPI_BYTE
# MPI_Datatype for reductions : MPI_FLOAT
# MPI_Op : MPI_SUM
#
#

# List of Benchmarks to run:

# PingPong

#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
        #bytes #repetitions t[usec] Mbytes/sec
             0 1000 1.55 0.00
             1 1000 1.58 0.61
             2 1000 1.57 1.21
             4 1000 1.56 2.44
             8 1000 1.59 4.81
            16 1000 1.83 8.33
            32 1000 1.85 16.52
            64 1000 1.91 31.91
           128 1000 2.01 60.70
           256 1000 2.29 106.82
           512 1000 2.72 179.35
          1024 1000 3.73 261.88
          2048 1000 5.51 354.63
          4096 1000 7.75 504.00
          8192 1000 12.21 639.71
         16384 1000 20.98 744.68
         32768 1000 38.45 812.73
         65536 640 85.59 730.24
        131072 320 161.28 775.06
        262144 160 311.04 803.76
        524288 80 586.65 852.30
       1048576 40 1155.92 865.11
       2097152 20 2258.45 885.56
       4194304 10 4457.09 897.45

The latency and bandwidth still look wrong for a gigabit interconnect:
~1.5 usec latency and ~900 MBytes/sec peak bandwidth are well beyond what
gigabit Ethernet can deliver (roughly 125 MBytes/sec line rate and tens of
microseconds of latency), so it looks like the traffic is still going over
InfiniBand.
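
(As an extra sanity check, it might help to bump up the BTL verbosity so
the run reports which transports it actually selects; I'm assuming the
parameter is called btl_base_verbose in 1.2, so treat this as a sketch
rather than verified syntax:

[smulcahy_at_foo ~]$ ~/openmpi-1.2/bin/mpirun --mca btl ^openib,mvapi \
   --mca btl_base_verbose 30 --bynode -np 2 --hostfile ~/openmpi.hosts.2only \
   ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong

Checking ompi_info | grep btl to see which BTL components were actually
built in might also be worth a look.)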

-stephen

Brock Palen wrote:
> Look here:
>
> http://www.open-mpi.org/faq/?category=tuning#selecting-components
>
> General idea
>
> mpirun -np 2 --mca btl ^tcp (to exclude ethernet); replace ^tcp with
> ^openib (or ^mvapi) to exclude infiniband instead.
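>
> For example (just a sketch; substitute your own MPI binary for ./a.out):
>
>   mpirun -np 2 --mca btl tcp,self ./a.out
>
> which restricts Open MPI to the TCP BTL plus the self (loopback) BTL.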
>
> Brock Palen
> Center for Advanced Computing
> brockp_at_[hidden]
> (734)936-1985
>
>
> On Apr 18, 2007, at 8:44 AM, stephen mulcahy wrote:
>
>> Hi,
>>
>> I'm currently conducting some testing on a system with gigabit and
>> infiniband interconnects. I'm keen to baseline openmpi over both
>> interconnects.
>>
>> I've compiled it with defaults and run the Intel MPI Benchmarks PingPong
>> as follows to get an idea of latency and bandwidth between nodes on the
>> given interconnect.
>>
>> ~/openmpi-1.2/bin/mpirun --bynode -np 2 --hostfile ~/openmpi.hosts.80
>> ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong
>>
>> For some reason, it looks like openmpi is using the infiniband
>> interconnect rather than the gigabit ... or the system I'm testing on
>> has an amazing latency! :)
>>
>> #---------------------------------------------------
>> # Benchmarking PingPong
>> # #processes = 2
>> #---------------------------------------------------
>> #bytes #repetitions t[usec] Mbytes/sec
>> 0 1000 1.63 0.00
>> 1 1000 1.54 0.62
>> 2 1000 1.55 1.23
>> 4 1000 1.54 2.47
>> 8 1000 1.56 4.90
>> 16 1000 1.86 8.18
>> 32 1000 1.94 15.75
>> 64 1000 1.92 31.77
>> 128 1000 1.99 61.44
>> 256 1000 2.25 108.37
>> 512 1000 2.70 180.88
>> 1024 1000 3.64 267.99
>> 2048 1000 5.60 348.89
>>
>> I read some of the FAQs and noted that OpenMPI prefers the faster
>> available interconnect. In an effort to force it to use the gigabit
>> interconnect I ran it as follows,
>>
>> ~/openmpi-1.2/bin/mpirun --mca btl tcp,self --bynode -np 2
>> --hostfile ~/openmpi.hosts.80 ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong
>>
>> and
>>
>> ~/openmpi-1.2/bin/mpirun --mca btl_tcp_if_include eth0 --mca btl tcp,self
>> --bynode -np 2 --hostfile ~/openmpi.hosts.80
>> ~/IMB/IMB-MPI1-openmpi -npmin 2 pingpong
>>
>> Neither one resulted in a significantly different benchmark.
>>
>> Am I doing something obviously wrong in how I invoke openmpi here, or
>> should I expect this to run over gigabit? Is there an option to mpirun
>> that will tell me which interconnect it actually uses?
>>
>> I took a look at the ompi_info output but couldn't see any indication
>> that infiniband support was compiled in, so I'm a little puzzled by
>> this, but the results speak for themselves.
>>
>> Any advice on how to force the use of gigabit would be welcomed (I'll
>> use the infiniband interconnect as well, but I'm trying to determine the
>> performance to be had from infiniband for our model, so I need to run it
>> with both).
>>
>> Thanks,
>>
>> -stephen
>> --
>> Stephen Mulcahy, Applepie Solutions Ltd., Innovation in Business Center,
>> GMIT, Dublin Rd, Galway, Ireland. +353.91.751262 http://www.aplpi.com
>> Registered in Ireland (289353) (5 Woodlands Avenue, Renmore, Galway)

-- 
Stephen Mulcahy, Applepie Solutions Ltd., Innovation in Business Center,
GMIT, Dublin Rd, Galway, Ireland.  +353.91.751262  http://www.aplpi.com
Registered in Ireland, no. 289353 (5 Woodlands Avenue, Renmore, Galway)