Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Intra-node communication
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2012-06-01 12:12:12

NetPIPE is a two-process, point-to-point (pt2pt) benchmark.

Run it once with both processes on the same node, and then again with each process on a different node.
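For example, a sketch of the two runs, assuming the NetPIPE MPI binary `NPmpi` is built and that `node01`/`node02` are placeholders for hosts on your cluster:

```shell
# Both processes on one node (intra-node; listing the host twice
# gives it two slots, so both ranks land on node01):
mpirun -np 2 --host node01,node01 NPmpi

# One process on each of two nodes (inter-node; traffic goes over
# the network between node01 and node02):
mpirun -np 2 --host node01,node02 NPmpi
```

Comparing the latency/bandwidth curves from the two runs shows the intra-node vs. inter-node difference directly.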

On Jun 1, 2012, at 12:10 PM, Mudassar Majeed wrote:

> Dear Jeff,
> Can you suggest a quick guide that would help me test inter-node and intra-node communication specifically? I have a submission due today, so I have no time for googling. If the benchmark tells me the right thing, I will adjust accordingly.
> best regards,
> Mudassar
> From: Jeff Squyres <jsquyres_at_[hidden]>
> To: Open MPI Users <users_at_[hidden]>
> Cc: Mudassar Majeed <mudassarm30_at_[hidden]>
> Sent: Friday, June 1, 2012 4:52 PM
> Subject: Re: [OMPI users] Intra-node communication
> ...and exactly how you measured. You might want to run a well-known benchmark, like NetPIPE or the OSU pt2pt benchmarks.
> Note that the *first* send between any given peer pair is likely to be slow because OMPI does a lazy connection scheme (i.e., the connection is made behind the scenes). Subsequent sends are likely faster. Well-known benchmarks do a bunch of warmup sends and then start timing after those are all done.
> Also ensure that you have shared memory support enabled. It is likely to be enabled by default, but if you're seeing different performance than you expect, that's something to check.
> On Jun 1, 2012, at 10:44 AM, Jingcha Joba wrote:
> > This should not happen. Typically, intra-node communication latency is much lower than inter-node latency.
> > Can you please tell us how you ran your application?
> > Thanks
> >
> > --
> > Sent from my iPhone
> >
> > On Jun 1, 2012, at 7:34 AM, Mudassar Majeed <mudassarm30_at_[hidden]> wrote:
> >
> >> Dear MPI people,
> >> Can someone tell me why MPI_Ssend takes more time when the two MPI processes are on the same node? The same two processes on different nodes take much less time for the same message exchange. I am using a supercomputing center and this happens. I was writing an algorithm to reduce across-node communication, but now I have found that across-node communication is cheaper than communication within a node (with 8 cores on each node).
> >>
> >> best regards,
> >>
> >> Mudassar
> >> _______________________________________________
> >> users mailing list
> >> users_at_[hidden]
> >>
> >
> --
> Jeff Squyres
> jsquyres_at_[hidden]
> For corporate legal information go to:
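The shared-memory check Jeff mentions can be done from the command line; a sketch, assuming an Open MPI of that era (with the `sm` BTL component) is installed:

```shell
# List the shared-memory BTL's parameters; if the component was not
# built, ompi_info reports nothing for it.
ompi_info --param btl sm

# Force on-node traffic over shared memory ("self" handles loopback
# to the same rank); the run aborts if the sm BTL is unavailable.
mpirun -np 2 --mca btl self,sm ./your_benchmark
```

`./your_benchmark` is a placeholder for whatever pt2pt test you run; in later Open MPI releases the on-node BTL was renamed, so the component name may differ from `sm`.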

Jeff Squyres
For corporate legal information go to: