Subject: [OMPI users] openmpi over tcp
From: Daniel De Marco (ddm_at_[hidden])
Date: 2009-01-29 14:20:02


Hi All,

I'm doing some tests with Open MPI on a small cluster that has both
gigabit and InfiniBand interconnects, and I'm running into the same
problem described in the following thread:
http://www.open-mpi.org/community/lists/users/2007/04/3082.php

Basically, even when I run my test with:
mpirun --mca btl tcp,self --prefix /share/apps/openmpi-1.3/gcc_ifort/
--machinefile machines -np 2 ./osu_latency
I still seem to be getting the InfiniBand transport:
# OSU MPI Latency Test v3.1.1
# Size Latency (us)
0 2.41
1 2.66
2 2.85
4 2.85
8 2.88
16 3.52
32 3.61
64 3.62
128 3.95
256 4.19
512 4.96
1024 6.31
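
In case it helps, I'm also planning to rerun with the BTL selection
verbosity turned up, so I can see which components actually get used
(I'm assuming btl_base_verbose is the right parameter for that):
mpirun --mca btl tcp,self --mca btl_base_verbose 30 --prefix /share/apps/openmpi-1.3/gcc_ifort/
--machinefile machines -np 2 ./osu_latency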

I tried running it with --mca btl ^openib, but the result is the same.
I even tried, as suggested in the thread above, removing the *openib*
files from the lib/openmpi directory, but that made no difference.
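
For reference, I can also list which BTL components are actually
present in the installation with something like (using the ompi_info
from the same prefix):
ompi_info | grep btl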

I tried both 1.2.8 and 1.3.0, with the same results.
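
One more thing I'm considering, in case the TCP BTL is just picking up
the IPoIB interface instead of the gigabit NIC, is pinning it to a
specific interface (eth0 is only a guess at the name of my GigE
interface):
mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 --prefix /share/apps/openmpi-1.3/gcc_ifort/
--machinefile machines -np 2 ./osu_latency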

Is there anything else I can try in order to get it to use the gigabit
transport?

Thanks, Daniel.