
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] openmpi over tcp
From: Ralph Castain (rhc_at_[hidden])
Date: 2009-01-29 14:25:56


It is quite likely that you have IPoIB on your system. In that case,
the TCP BTL will pick up that interface and use it.

If there is a specific interface you want to use, try -mca
btl_tcp_if_include eth0 (or whatever that interface is). This tells the
TCP BTL to use only the specified interface, so it will either fail
(if that interface isn't available or doesn't exist) or use only that
one.
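For example, a full invocation might look like the following ("eth0" is
just a placeholder here; substitute the actual GigE interface reported
by ifconfig on your nodes):

```shell
# Restrict the TCP BTL to a single named interface.
# "eth0" is a placeholder -- use your actual GigE interface name.
mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 \
       -np 2 ./osu_latency

# Alternatively, exclude the IPoIB interface instead. Note that when
# you set btl_tcp_if_exclude you replace the default exclusion list,
# so you must re-exclude the loopback interface ("lo") yourself:
# mpirun --mca btl tcp,self --mca btl_tcp_if_exclude lo,ib0 \
#        -np 2 ./osu_latency
```

Either form should keep the traffic off the IPoIB interface; the
if_include variant is the more predictable of the two since it names
exactly one interface.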

On Jan 29, 2009, at 12:20 PM, Daniel De Marco wrote:

> Hi All,
>
> I'm doing some tests on a small cluster with gigabit and infiniband
> interconnects with openmpi and I'm running into the same problem as
> described in the following thread:
> http://www.open-mpi.org/community/lists/users/2007/04/3082.php
>
> Basically even if I run my test with:
> mpirun --mca btl tcp,self --prefix /share/apps/openmpi-1.3/gcc_ifort/
> --machinefile machines -np 2 ./osu_latency
> I seem to be getting infiniband transport:
> # OSU MPI Latency Test v3.1.1
> # Size Latency (us)
> 0 2.41
> 1 2.66
> 2 2.85
> 4 2.85
> 8 2.88
> 16 3.52
> 32 3.61
> 64 3.62
> 128 3.95
> 256 4.19
> 512 4.96
> 1024 6.31
>
> I tried running it with --mca btl ^openib but the result is the same.
> I even tried, as suggested in the thread above, to remove the *openib*
> files from the lib/openmpi directory, but without any change.
>
> I tried with 1.2.8 and with 1.3.0 with the same results.
>
> Is there anything else I can try in order to be able to use gigabit
> transport?
>
> Thanks, Daniel.
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users