Open MPI User's Mailing List Archives


From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2006-05-12 08:27:27

> -----Original Message-----
> From: users-bounces_at_[hidden]
> [mailto:users-bounces_at_[hidden]] On Behalf Of Gurhan Ozen
> Sent: Thursday, May 11, 2006 4:11 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] Open MPI and OpenIB
> At any rate though, --mca btl ib,self looks like the traffic goes over
> ethernet device .. I couldn't find any documentation on the "self"
> argument of mca, does it mean to explore alternatives if the desired
> btl (in this case ib) doesn't work?

Note that Open MPI still does use TCP for "setup" information; a bunch
of data is passed around via mpirun and MPI_INIT for all the processes
to find each other, etc. Similar control messages get passed around
during MPI_FINALIZE as well.

This is likely the TCP traffic that you are seeing. However, rest
assured that the btl MCA parameter will unequivocally set the network
that MPI traffic will use.
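As a concrete sketch (the hostnames and application name here are hypothetical, and the exact BTL component name depends on how your Open MPI was built -- "openib" is assumed below), the "self" BTL is the loopback path a process uses to send to its own rank, which is why it should always be listed:

```shell
# Restrict MPI point-to-point traffic to InfiniBand. "self" is needed
# so that a process sending to its own rank still has a usable path;
# it does not mean "fall back if openib fails".
mpirun --mca btl openib,self -np 4 ./my_mpi_app

# For comparison, force plain TCP instead:
mpirun --mca btl tcp,self -np 4 ./my_mpi_app
```

If a requested BTL cannot be used at all, Open MPI aborts rather than silently falling back to another transport.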

I've updated the on-line FAQ with regards to the "self" BTL module.

And finally, a man page is available for mpirun in the [not yet
released] Open MPI 1.1. It should be pretty much the same for 1.0. One
notable difference is that I just recently added a -nolocal option (not
yet on the trunk, but likely will be in the not-distant future) that
does not exist in 1.0.
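A sketch of how -nolocal would be used (the option name is from the 1.1 work described above; the hostfile and application names are hypothetical):

```shell
# Launch 4 processes on the hosts listed in myhosts, but skip the node
# mpirun itself was invoked on -- e.g. to keep a cluster head node free.
mpirun -nolocal -np 4 --hostfile myhosts ./my_mpi_app
```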

Jeff Squyres
Server Virtualization Business Unit
Cisco Systems