NetPIPE is a 2-process, point-to-point benchmark.
Run it first with both processes on the same node, and then with the two processes on different nodes, and compare the results.
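The two runs described above can be sketched as follows. This is a launch-command sketch, assuming Open MPI's mpirun and a NetPIPE MPI binary named NPmpi; the host names node01/node02 are placeholders for your cluster's node names.

```shell
# Intra-node: both processes on the same node (shared-memory path).
# Listing the host twice provides two slots on that node.
mpirun -np 2 --host node01,node01 ./NPmpi

# Inter-node: one process per node (network path).
mpirun -np 2 --host node01,node02 ./NPmpi
```

Comparing the latency/bandwidth curves from the two runs shows directly whether intra-node communication is slower than inter-node for your installation.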
On Jun 1, 2012, at 12:10 PM, Mudassar Majeed wrote:
> Dear Jeff,
> Can you suggest a quick guide to help me test intra-node and inter-node communication specifically? I have a submission due today, so I have no time for googling. If the benchmark confirms the behavior, I will adapt my approach accordingly.
> best regards,
> From: Jeff Squyres <jsquyres_at_[hidden]>
> To: Open MPI Users <users_at_[hidden]>
> Cc: Mudassar Majeed <mudassarm30_at_[hidden]>
> Sent: Friday, June 1, 2012 4:52 PM
> Subject: Re: [OMPI users] Intra-node communication
> ...and exactly how you measured. You might want to run a well-known benchmark, like NetPIPE or the OSU pt2pt benchmarks.
> Note that the *first* send between any given peer pair is likely to be slow because OMPI does a lazy connection scheme (i.e., the connection is made behind the scenes). Subsequent sends are likely faster. Well-known benchmarks do a bunch of warmup sends and then start timing after those are all done.
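The warmup-then-time pattern described above can be sketched as follows. This is a minimal illustration, not NetPIPE's or the OSU benchmarks' actual code; the message size and iteration counts are arbitrary choices.

```c
/* Sketch of the warmup-then-time pattern: do untimed exchanges first so
 * that Open MPI's lazy connection setup happens before the clock starts. */
#include <mpi.h>
#include <stdio.h>

enum { WARMUP = 100, ITERS = 1000, MSG = 4096 };

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[MSG];
    int peer = 1 - rank;  /* assumes exactly 2 ranks */

    /* Warmup exchanges: absorb the one-time connection cost. */
    for (int i = 0; i < WARMUP; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
        }
    }

    /* Timed ping-pong: only these iterations count. */
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round trip: %g us\n", (t1 - t0) / ITERS * 1e6);

    MPI_Finalize();
    return 0;
}
```

Without the warmup loop, the first iteration would include connection-setup time and skew the average, which is exactly the effect described above.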
> Also ensure that you have shared memory support enabled. It is likely to be enabled by default, but if you're seeing different performance than you expect, that's something to check.
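Checking for shared-memory support can be sketched as below. These are diagnostic/launch fragments for Open MPI of this era (1.x), where the shared-memory BTL component is named "sm"; the NPmpi binary name is a placeholder.

```shell
# List the BTL components compiled into this Open MPI installation;
# "sm" should appear if shared-memory support is built in.
ompi_info | grep btl

# Explicitly allow shared memory for intra-node traffic
# (self = loopback, sm = shared memory, tcp = network fallback):
mpirun -np 2 --mca btl self,sm,tcp ./NPmpi
```

If performance changes noticeably with the explicit `--mca btl` setting, the default run was likely not using the shared-memory path.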
> On Jun 1, 2012, at 10:44 AM, Jingcha Joba wrote:
> > This should not happen. Intra-node communication latency is typically far lower than inter-node latency.
> > Can you please tell us how you ran your application?
> > Thanks
> > --
> > Sent from my iPhone
> > On Jun 1, 2012, at 7:34 AM, Mudassar Majeed <mudassarm30_at_[hidden]> wrote:
> >> Dear MPI people,
> >> Can someone tell me why MPI_Ssend takes more time when the two MPI processes are on the same node? The same two processes on different nodes take much less time for the same message exchange. I am using a supercomputing center, and this is where it happens. I was writing an algorithm to reduce across-node communication, but I have now found that across-node communication is cheaper than communication within a node (with 8 cores on each node).
> >> best regards,
> >> Mudassar
> >> _______________________________________________
> >> users mailing list
> >> users_at_[hidden]
> >> http://www.open-mpi.org/mailman/listinfo.cgi/users
> Jeff Squyres
> For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/