
Subject: Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-04-06 16:44:23

On Apr 6, 2010, at 4:29 PM, Oliver Geisler wrote:

> > Sorry for the delay -- I just replied on the user list -- I think the first thing to do is to establish baseline networking performance and see if that is out of whack. If the underlying network is bad, then MPI performance will also be bad.
> Could make sense. With kernel 2.6.24 it seems a major change in the
> modules for Intel PCI-Express network cards was introduced.
> Does openmpi use TCP communication, even if all processes are on the
> same local node?

It depends. :-)

The "--mca btl sm,self,tcp" option to mpirun tells Open MPI to use shared memory, tcp, and process-loopback for MPI point-to-point communications. Open MPI computes a reachability / priority map and uses the highest priority plugin that is reachable for each peer MPI process.

Meaning that on each node, if you allow "sm" to be used, "sm" should be used for all on-node communications. If you had only said "--mca btl tcp,self", then you're only allowing Open MPI to use TCP for all non-self MPI point-to-point communications.
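If you want to see which BTL actually gets used for each peer, the BTL framework can print its selection decisions via its verbosity MCA parameter; something like the following should show it (assuming your build exposes btl_base_verbose, and the verbosity level here is arbitrary):

    # print BTL open/selection decisions to stderr
    mpirun --mca btl sm,self,tcp --mca btl_base_verbose 30 -np 8 ./my_mpi_app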

The default -- if you don't specify "--mca btl ..." at all -- is for Open MPI to figure it out automatically and use whatever networks it can find. In your case, I'm guessing that it's pretty much identical to specifying "--mca btl tcp,sm,self".
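You can check which BTL plugins your installation actually has available (and can therefore choose from automatically) with ompi_info:

    # list the BTL components built into / installed with this Open MPI
    ompi_info | grep btl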

Another good raw TCP performance program that network wonks are familiar with is netperf. NetPipe is nice because it allows an apples-to-apples comparison of TCP and MPI (i.e., it's the same benchmark app that can use either TCP or MPI [or several other] transports underneath). But netperf might be a bit more familiar to those outside the HPC community.
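As a rough sketch of the kind of baseline measurements I mean (host names are placeholders, and the exact NetPipe binary names and flags depend on how you built it):

    # raw TCP with netperf: start netserver on node2, then from node1:
    netperf -H node2

    # NetPipe over raw TCP: run NPtcp with no arguments on node2, then from node1:
    NPtcp -h node2

    # NetPipe over MPI: the same benchmark, but going through Open MPI
    mpirun -np 2 -host node1,node2 NPmpi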

Jeff Squyres