Open MPI User's Mailing List Archives

From: Gleb Natapov (glebn_at_[hidden])
Date: 2006-12-04 14:34:24


On Mon, Dec 04, 2006 at 11:57:07PM +0530, Chevchenkovic Chevchenkovic wrote:
> Thanks for that.
>
> Suppose there are multiple interconnects, say ethernet and
> infiniband, and a million bytes of data are to be sent. In this
> case the data will be sent through infiniband (since it is the fast
> path... please correct me here if I am wrong).
With default parameters, yes. But you can tweak Open MPI to split a
message between interconnects.
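
A quick illustration of what such a split could look like (this is only
a toy sketch of the idea, not Open MPI's actual OB1 scheduler; the BTL
names and bandwidth weights below are made up):

    /* Toy sketch: split one large message across several BTLs in
     * proportion to assumed relative bandwidths. Not Open MPI code. */
    #include <stdio.h>
    #include <stddef.h>

    struct btl {
        const char *name;   /* hypothetical interconnect name */
        double      weight; /* assumed relative bandwidth */
    };

    int main(void)
    {
        const size_t msg_len = 1000000;            /* the ~1 MB message from the question */
        struct btl btls[] = { { "openib", 10.0 },  /* made-up weights */
                              { "tcp",     1.0 } };
        const int nbtl = (int)(sizeof(btls) / sizeof(btls[0]));

        double total = 0.0;
        for (int i = 0; i < nbtl; i++)
            total += btls[i].weight;

        size_t offset = 0;
        for (int i = 0; i < nbtl; i++) {
            /* the last BTL takes the remainder so the chunks sum to msg_len */
            size_t chunk = (i == nbtl - 1)
                         ? msg_len - offset
                         : (size_t)(msg_len * (btls[i].weight / total));
            printf("%-6s carries bytes [%zu, %zu)\n",
                   btls[i].name, offset, offset + chunk);
            offset += chunk;
        }
        return 0;
    }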

>
> If there are multiple such sends, do you mean to say that each send
> will go through different BTLs in an RR manner if they are connected
> to the same port?
One message can be split between multiple BTLs.

>
> -chev
>
>
> On 12/4/06, Gleb Natapov <glebn_at_[hidden]> wrote:
> > On Mon, Dec 04, 2006 at 10:53:26PM +0530, Chevchenkovic Chevchenkovic wrote:
> > > Hi,
> > > It is not clear from the code you mentioned in
> > > ompi/mca/pml/ob1/ where exactly the selection of the BTL bound to a
> > > particular LID occurs. Could you please specify the file/function name
> > > for the same?
> > There is no such code there. OB1 knows nothing about LIDs. It does RR
> > over all available interconnects; it can do RR between ethernet, IB
> > and Myrinet, for instance. The BTL presents each LID as a different
> > virtual HCA to OB1, and OB1 does round-robin between them without even
> > knowing that they are the same port of the same HCA.
> >
> > Can you explain what you are trying to achieve?
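
To picture the round-robin described above, here is a small sketch of
the idea (my own illustration, not code from ompi/mca/pml/ob1/; the
endpoint labels are made up). Fragments are dealt out round-robin over
a list of endpoints, two of which are really two LIDs of the same
physical port, and the scheduler only ever sees an endpoint index:

    /* Sketch of the idea only -- not code from ompi/mca/pml/ob1/.
     * Fragments are handed out round-robin over "virtual" endpoints.
     * Two of the endpoints below are really two LIDs of the same
     * physical port, but the scheduler has no way to know that. */
    #include <stdio.h>

    struct endpoint {
        const char *label;  /* made-up labels, for illustration only */
    };

    int main(void)
    {
        struct endpoint eps[] = {
            { "openib port 1, LID 0x12" },  /* same physical port ...  */
            { "openib port 1, LID 0x13" },  /* ... but a different LID */
            { "tcp eth0" },
        };
        const int neps   = (int)(sizeof(eps) / sizeof(eps[0]));
        const int nfrags = 8;               /* pretend the message has 8 fragments */

        for (int frag = 0; frag < nfrags; frag++) {
            /* plain round-robin: only an endpoint index is visible here */
            printf("fragment %d -> %s\n", frag, eps[frag % neps].label);
        }
        return 0;
    }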
> >
> > > -chev
> > >
> > >
> > > On 12/4/06, Gleb Natapov <glebn_at_[hidden]> wrote:
> > > > On Mon, Dec 04, 2006 at 01:07:08AM +0530, Chevchenkovic Chevchenkovic wrote:
> > > > > Also, could you please tell me which part of the Open MPI code needs to
> > > > > be touched so that I can make some modifications to it to incorporate
> > > > > changes regarding LID selection...
> > > > >
> > > > It depends on what you want to do. The part that does RR over all
> > > > available LIDs is in the OB1 PML (ompi/mca/pml/ob1/), but the code isn't
> > > > aware of the fact that it is doing RR over different LIDs rather than
> > > > different NICs (yet?).
> > > >
> > > > The code that controls what LIDs will be used is in
> > > > ompi/mca/btl/openib/btl_openib_component.c.
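
For context on what "controls what LIDs will be used" means: with LID
Mask Control (LMC) set to L, an IB port is assigned 2^L consecutive
LIDs starting at its base LID, and each of them can be presented upward
as a separate path. A small sketch of just that enumeration (the base
LID and LMC values are made up; this is not code from
btl_openib_component.c):

    /* Sketch of the LMC idea, not code from btl_openib_component.c.
     * With LID Mask Control (LMC) set to L, an IB port is assigned 2^L
     * consecutive LIDs starting at its base LID; each of them can be
     * exposed upward as if it were a separate path/virtual HCA. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint16_t base_lid = 0x12;  /* made-up base LID for the example */
        const unsigned lmc      = 2;     /* made-up LMC value: 2^2 = 4 LIDs  */

        for (unsigned i = 0; i < (1u << lmc); i++) {
            /* each of these LIDs would back one "virtual" path for the PML */
            printf("path %u uses LID 0x%x\n", i, base_lid + i);
        }
        return 0;
    }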
> > > >
> > > > > On 12/4/06, Chevchenkovic Chevchenkovic <chevchenkovic_at_[hidden]> wrote:
> > > > > > Is it possible to control the LID on which the sends and recvs are
> > > > > > posted, on either end?
> > > > > >
> > > > > > On 12/3/06, Gleb Natapov <glebn_at_[hidden]> wrote:
> > > > > > > On Sun, Dec 03, 2006 at 07:03:33PM +0530, Chevchenkovic Chevchenkovic wrote:
> > > > > > > > Hi,
> > > > > > > > I had this query. I hope some expert replies to it.
> > > > > > > > I have 2 nodes connected point-to-point with an infiniband cable.
> > > > > > > > There are multiple LIDs for each of the end nodes' ports.
> > > > > > > > When I issue an MPI_Send, are the sends posted on different LIDs
> > > > > > > > on each of the end nodes, or are they posted on the same LID?
> > > > > > > > Awaiting your reply,
> > > > > > > It depends on which version of Open MPI you are using. If you are
> > > > > > > using the trunk or the v1.2 beta, then all available LIDs are used
> > > > > > > in RR fashion. The earlier versions don't support LMC.
> > > > > > >
> > > > > > > --
> > > > > > > Gleb.
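
For completeness, the user-level code the question describes is just an
ordinary two-rank exchange; all of the LID/BTL selection discussed above
happens underneath calls like these. A minimal, self-contained example
(illustrative only):

    /* Minimal two-rank example of the scenario in the question: one
     * ~1 MB MPI_Send between two nodes. The LID/BTL selection discussed
     * above happens entirely below this level of the API. */
    #include <mpi.h>
    #include <stdio.h>

    static char buf[1000000];   /* ~1 MB payload, kept off the stack */

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0)
                fprintf(stderr, "run this with at least 2 ranks\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        if (rank == 0) {
            MPI_Send(buf, (int)sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, (int)sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }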
> > > > > >
> > > >
> > > > --
> > > > Gleb.
> > > >
> >
> > --
> > Gleb.
> >

--
			Gleb.