From: Jeff Squyres <firstname.lastname@example.org>
To: Open MPI Users <email@example.com>
Cc: Mudassar Majeed <firstname.lastname@example.org>
Sent: Friday, June 1, 2012 4:52 PM
Subject: Re: [OMPI users] Intra-node communication
...and exactly how you measured. You might want to run a well-known benchmark, like NetPIPE or the OSU pt2pt benchmarks.
Note that the *first* send between any given peer pair is likely to be slow because OMPI does a lazy connection scheme (i.e., the connection is made behind the scenes). Subsequent sends are likely faster. Well-known benchmarks do a bunch of warmup sends and then start timing after those are all done.
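To illustrate the warmup-then-time pattern those benchmarks use, here is a minimal ping-pong sketch (hypothetical, not from the original thread; buffer size and iteration counts are arbitrary, and it assumes exactly 2 ranks):

```c
/* Build: mpicc pingpong.c -o pingpong
   Run:   mpirun -np 2 ./pingpong
   Warmup iterations force OMPI's lazy connection setup to complete
   before timing starts, so the first-send cost is excluded. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[1024];
    int peer = 1 - rank;                 /* assumes exactly 2 ranks */
    const int warmup = 100, iters = 1000;

    for (int i = 0; i < warmup + iters; i++) {
        double t0 = 0.0;
        if (i == warmup)                 /* start timing after warmup */
            t0 = MPI_Wtime();
        static double tstart;
        if (i == warmup)
            tstart = t0;

        if (rank == 0) {
            MPI_Ssend(buf, sizeof(buf), MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Ssend(buf, sizeof(buf), MPI_CHAR, peer, 0, MPI_COMM_WORLD);
        }

        if (i == warmup + iters - 1 && rank == 0) {
            double elapsed = MPI_Wtime() - tstart;
            printf("avg round trip: %g us\n", elapsed / iters * 1e6);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Run it once with both ranks on one node and once with the ranks on different nodes; only the post-warmup average is comparable.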
Also ensure that you have shared memory support enabled. It is likely to be enabled by default, but if you're seeing different performance than you expect, that's something to check.
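A quick way to check is via ompi_info and the btl MCA parameter (component names here match OMPI 1.x-era installs; adjust for your version, and the osu_latency binary is just an example benchmark):

```shell
# Confirm the shared-memory BTL ("sm") was built into your install:
ompi_info | grep "btl: sm"

# Force shared memory (plus self) for an on-node run:
mpirun --mca btl self,sm -np 2 ./osu_latency

# Compare against TCP loopback on the same node:
mpirun --mca btl self,tcp -np 2 ./osu_latency
```

If the sm run is not clearly faster than the tcp run on the same node, something is off in the build or the run configuration.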
On Jun 1, 2012, at 10:44 AM, Jingcha Joba wrote:
> This should not happen. Typically, intra-node communication latency is much lower than inter-node latency.
> Can you please tell us how you ran your application?
> Sent from my iPhone
> On Jun 1, 2012, at 7:34 AM, Mudassar Majeed <email@example.com> wrote:
>> Dear MPI people,
>> Can someone tell me why MPI_Ssend takes more time when the two MPI processes are on the same node? The same two processes on different nodes take much less time for the same message exchange. I am using a supercomputing center and this is what happens there. I was writing an algorithm to reduce across-node communication, but I have found that across-node communication is cheaper than communication within a node (with 8 cores on each node).
>> best regards,
>> users mailing list
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/