
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] How to run Open MPI over TCP (Ethernet)
From: Bibrak Qamar (bibrakc_at_[hidden])
Date: 2014-05-23 00:12:56


Here is the output of ifconfig:

*-bash-3.2$ ssh compute-0-15 /sbin/ifconfig*
eth0 Link encap:Ethernet HWaddr 78:E7:D1:61:C6:F4
          inet addr:10.1.255.239 Bcast:10.1.255.255 Mask:255.255.0.0
          inet6 addr: fe80::7ae7:d1ff:fe61:c6f4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:63715944 errors:0 dropped:0 overruns:0 frame:0
          TX packets:66225083 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:85950530550 (80.0 GiB) TX bytes:88970954416 (82.8 GiB)
          Memory:fbe60000-fbe80000

ib0 Link encap:InfiniBand HWaddr
80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
          inet addr:192.168.1.15 Bcast:192.168.1.255 Mask:255.255.255.0
          inet6 addr: fe80::202:c903:a:6f81/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:85388965 errors:0 dropped:0 overruns:0 frame:0
          TX packets:94530341 errors:0 dropped:72 overruns:0 carrier:0
          collisions:0 txqueuelen:256
          RX bytes:52140667469 (48.5 GiB) TX bytes:72573030755 (67.5 GiB)

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:16436 Metric:1
          RX packets:394785 errors:0 dropped:0 overruns:0 frame:0
          TX packets:394785 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:23757752 (22.6 MiB) TX bytes:23757752 (22.6 MiB)

*-bash-3.2$ ssh compute-0-16 /sbin/ifconfig*
eth0 Link encap:Ethernet HWaddr 78:E7:D1:61:D6:72
          inet addr:10.1.255.238 Bcast:10.1.255.255 Mask:255.255.0.0
          inet6 addr: fe80::7ae7:d1ff:fe61:d672/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:85494220 errors:0 dropped:0 overruns:0 frame:0
          TX packets:84183073 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:90136414384 (83.9 GiB) TX bytes:87205444848 (81.2 GiB)
          Memory:fbe60000-fbe80000

ib0 Link encap:InfiniBand HWaddr
80:00:00:48:FE:80:00:00:00:00:00:00:00:00:00:00:00:00:00:00
          inet addr:192.168.1.16 Bcast:192.168.1.255 Mask:255.255.255.0
          inet6 addr: fe80::202:c903:a:6f91/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:116291959 errors:0 dropped:0 overruns:0 frame:0
          TX packets:130137130 errors:0 dropped:107 overruns:0 carrier:0
          collisions:0 txqueuelen:256
          RX bytes:54348901701 (50.6 GiB) TX bytes:80828495293 (75.2 GiB)

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:16436 Metric:1
          RX packets:394518 errors:0 dropped:0 overruns:0 frame:0
          TX packets:394518 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:21661017 (20.6 MiB) TX bytes:21661017 (20.6 MiB)
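
[Archive note: a likely explanation, not confirmed in this thread, is that both eth0 (10.1.255.x) and ib0 (192.168.1.x, IPoIB) carry IP addresses, and by default Open MPI's TCP BTL uses every available IP interface and stripes traffic across them, which would let a "TCP" run exceed the 1 Gbps Ethernet limit. A sketch of how to restrict the TCP BTL to the Ethernet interface, using the real `btl_tcp_if_include` MCA parameter and the machinefile/binary names from the original post:]

```shell
# Restrict the TCP BTL to eth0 only, so traffic cannot be striped over ib0.
# --map-by ppr:1:node is the non-deprecated replacement for "-N 1".
mpirun -np 2 -machinefile machines --map-by ppr:1:node \
       --mca btl tcp,self --mca btl_tcp_if_include eth0 \
       ./bandwidth.ompi

# Equivalently, exclude the IPoIB and loopback interfaces instead:
# --mca btl_tcp_if_exclude ib0,lo
```

If the large-message bandwidth then drops to ~940 Mbps (1 Gbps minus protocol overhead), interface striping over ib0 was the cause.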

Bibrak Qamar

On Thu, May 22, 2014 at 3:30 PM, Jeff Squyres (jsquyres) <jsquyres_at_[hidden]
> wrote:

> Can you send the output of ifconfig on both compute-0-15.local and
> compute-0-16.local?
>
>
> On May 22, 2014, at 3:30 AM, Bibrak Qamar <bibrakc_at_[hidden]> wrote:
>
> > Hi,
> >
> > I am facing problem in running Open MPI using TCP (on 1G Ethernet). In
> practice the bandwidth must not exceed 1000 Mbps but for some data points
> (for point-to-point ping pong) it exceeds this limit. I checked with MPICH
> it works as desired.
> >
> > Following is the command I use to run my program over TCP. Am I missing
> something?
> >
> > -bash-3.2$ mpirun -np 2 -machinefile machines -N 1 --mca btl tcp,self
> ./bandwidth.ompi
> >
> --------------------------------------------------------------------------
> > The following command line options and corresponding MCA parameter have
> > been deprecated and replaced as follows:
> >
> > Command line options:
> > Deprecated: --npernode, -npernode
> > Replacement: --map-by ppr:N:node
> >
> > Equivalent MCA parameter:
> > Deprecated: rmaps_base_n_pernode, rmaps_ppr_n_pernode
> > Replacement: rmaps_base_mapping_policy=ppr:N:node
> >
> > The deprecated forms *will* disappear in a future version of Open MPI.
> > Please update to the new syntax.
> >
> --------------------------------------------------------------------------
> > Hello, world. I am 1 on compute-0-16.local
> > Hello, world. I am 0 on compute-0-15.local
> > 1 25.66 0.30
> > 2 25.54 0.60
> > 4 25.34 1.20
> > 8 25.27 2.42
> > 16 25.24 4.84
> > 32 25.49 9.58
> > 64 26.44 18.47
> > 128 26.85 36.37
> > 256 29.43 66.37
> > 512 36.02 108.44
> > 1024 42.03 185.86
> > 2048 194.30 80.42
> > 4096 255.21 122.45
> > 8192 258.85 241.45
> > 16384 307.96 405.90
> > 32768 422.78 591.32
> > 65536 790.11 632.83
> > 131072 1054.08 948.70
> > 262144 1618.20 1235.94
> > 524288 3126.65 1279.33
> >
> > -Bibrak
> > _______________________________________________
> > users mailing list
> > users_at_[hidden]
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> --
> Jeff Squyres
> jsquyres_at_[hidden]
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>