
Open MPI User's Mailing List Archives


From: Alex Tumanov (atumanov_at_[hidden])
Date: 2007-02-02 11:22:20


That really did fix it, George:

# mpirun --prefix $MPIHOME -hostfile ~/testdir/hosts --mca btl tcp,self \
    --mca btl_tcp_if_exclude ib0,ib1 ~/testdir/hello
Hello from Alex' MPI test program
Process 0 on dr11.lsf.platform.com out of 2
Hello from Alex' MPI test program
Process 1 on compute-0-0.local out of 2
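
For reference, the source of ~/testdir/hello wasn't posted, but a minimal
hello-world sketch that would produce output like the above might look
roughly like this (the message text and variable names are assumptions;
compile with mpicc and launch with the mpirun line above):

/* Minimal MPI hello-world sketch, assumed to match the output above. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                       /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);         /* this process' rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);         /* total process count */
    MPI_Get_processor_name(name, &len);           /* hostname of this rank */

    printf("Hello from Alex' MPI test program\n");
    printf("Process %d on %s out of %d\n", rank, name, size);

    MPI_Finalize();                               /* shut down MPI */
    return 0;
}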

It never occurred to me that the headnode would try to communicate
with the slave node over the InfiniBand interfaces... On a separate
note, what industry-standard Open MPI benchmark tests could I run to
perform a real test?

Thanks,
Alex.

On 2/2/07, George Bosilca <bosilca_at_[hidden]> wrote:
> Alex,
>
> You should try to limit the Ethernet devices used by Open MPI during
> the execution. Please add "--mca btl_tcp_if_exclude eth1,ib0,ib1" to
> your mpirun command line and give it a try.
>
> Thanks,
> george.
>