Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-04-06 18:09:09


On Apr 6, 2010, at 6:04 PM, Oliver Geisler wrote:

> Using netpipe and comparing tcp and mpi communication I get the
> following results:
>
> TCP is much faster than MPI, by approximately a factor of 12
> e.g. a packet size of 4096 bytes is delivered in
> 97.11 usec with NPtcp and
> 15338.98 usec with NPmpi
> or
> packet size 262kb
> 0.05268801 sec NPtcp
> 0.00254560 sec NPmpi

Well that's not good (for us). :-\

> Further our benchmark started with "--mca btl tcp,self" runs with short
> communication times, even using kernel 2.6.33.1

I'm not sure what this statement means -- can you explain?

> Is there a way to see what type of communication is actually selected?

If "--mca btl tcp,self" is used, then TCP sockets are used for all non-self communication (i.e., communication with peer MPI processes, regardless of location).
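To see which BTL components Open MPI actually selects at run time, you can raise the BTL framework's verbosity. A minimal sketch (the binary name NPmpi and process count are just examples from this thread):

```shell
# Increase BTL verbosity so Open MPI logs which BTL components are
# opened/selected during startup; look for "btl: tcp" / "btl: sm" lines.
mpirun --mca btl tcp,self --mca btl_base_verbose 100 -np 2 ./NPmpi
```

The verbose output goes to stderr during MPI_Init, before the benchmark's own output.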

> Can anybody imagine why shared memory leads to these problems?

I'm not sure I understand this -- if "--mca btl tcp,self" is used, shared memory is not used...?

...re-reading your email, I'm wondering: did you run the NPmpi process with "--mca btl tcp,sm,self" (or with no --mca btl parameter at all)? That might explain some of my confusion above.
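One way to isolate whether the shared-memory BTL is the culprit is an A/B comparison: run the same benchmark once with TCP only, and once allowing the sm BTL for on-node peers. A hedged sketch (binary name and process count are examples, not from the original thread):

```shell
# TCP-only run: all non-self traffic goes over sockets, even on-node.
mpirun --mca btl tcp,self -np 2 ./NPmpi

# Allow the shared-memory (sm) BTL: on-node peers use sm, off-node use TCP.
mpirun --mca btl tcp,sm,self -np 2 ./NPmpi
```

If the latencies differ dramatically between the two runs on the same node, the sm BTL (or the kernel's handling of it) is the place to look.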

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/