Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] TIPC BTL code ready for review
From: Xin He (xin.i.he_at_[hidden])
Date: 2011-09-01 07:05:51


Hi, I found the reason. Besides the direct link between the two PCs,
there is another link that goes through many switches, and the TCP BTL
seems to use this slower link. So I ran the test again with eth0 only.
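
To confirm which path the kernel actually prefers toward the other node,
something like the following can be used; just a sketch, with <peer-ip>
standing in for the other PC's address:

$ ip route get <peer-ip>

The output names the outgoing interface, so it is easy to see whether
traffic would take eth0 or the slower multi-switch path.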

So I built Open MPI with: ./configure --disable-mpi-f90 --disable-mpi-f77
--disable-mpi-cxx --disable-vt --disable-io-romio --prefix=/usr
--with-platform=optimized
And ran with: mpirun -n 6 --mca btl tcp,self --mca btl_tcp_if_include
eth0 -hostfile my_hostfile --bynode ./IMB-MPI1 > tcp_0901
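
To double-check that the TCP BTL really sticks to eth0, raising the BTL
verbosity should show which interfaces the tcp component selects; a rough
sketch (exact output depends on the Open MPI version):

$ mpirun -n 2 --mca btl tcp,self --mca btl_tcp_if_include eth0 \
    --mca btl_base_verbose 100 -hostfile my_hostfile hostname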

The results are attached in the appendix. It seems that TCP has better
performance with smaller messages, while TIPC does better with larger
messages.
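
For comparison, the corresponding TIPC run is the same command line with
the TIPC BTL selected instead; a sketch, assuming the component under
review is selected as "tipc" and needs no interface parameter:

$ mpirun -n 6 --mca btl tipc,self -hostfile my_hostfile --bynode ./IMB-MPI1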

/Xin

On 08/30/2011 05:50 PM, Jeff Squyres wrote:
> On Aug 29, 2011, at 3:51 AM, Xin He wrote:
>
>>> -----
>>> $ mpirun --mca btl tcp,self --bynode -np 2 --mca btl_tcp_if_include eth0 hostname
>>> svbu-mpi008
>>> svbu-mpi009
>>> $ mpirun --mca btl tcp,self --bynode -np 2 --mca btl_tcp_if_include eth0 IMB-MPI1 PingPong
>>> #---------------------------------------------------
>>> # Intel (R) MPI Benchmark Suite V3.2, MPI-1 part
>>> #---------------------------------------------------
>>>
>> Hi, I think these models are reasonably new :)
>> The results I gave you were tested with 2 processes, but on 2 different servers. Do I understand correctly that the result you showed is 2 processes on one machine?
> Nope -- check my output -- I'm running across 2 different servers and through a 1Gb ToR Ethernet switch (it's not a particularly high-performance Ethernet switch, either).
>
> Can you run some native NetPIPE TCP numbers across the same nodes that you ran the TIPC MPI tests over? You should be getting lower latency than what you're seeing.
>
> Do you have jumbo frames enabled, perchance? Are you going through only 1 switch? If you're on a NUMA server, do you have processor affinity enabled, and have the processes located "near" the NIC?
>
>> BTW, I forgot to tell you about SM & TIPC. Unfortunately, TIPC does not beat SM...
> That's probably not surprising; SM is tuned pretty well specifically for MPI communication across shared memory.
>