
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] OMPI Coll Framework and RDMA
From: Jingcha Joba (pukkimonkey_at_[hidden])
Date: 2013-06-07 14:11:07


Interesting.

I would like to understand more about how QPs are implemented in Open MPI, for
example the heuristics behind creating multiple QPs between two MPI processes.

Is there any whitepaper / reference / manual that I can refer to for that?
Or can you point me to the source code region for this?

Thanks,
--Joba
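
[Archive editor's note: Pasha's reply below points out that the number of
connections a collective needs depends on the algorithm chosen, ranging from
O(log(n)) to O(n). A minimal sketch of such a decision heuristic is shown
here; it is purely illustrative, with invented thresholds and function names,
and is not Open MPI's actual tuned-collective selection logic.]

```python
import math

def connections_for_allgatherv(n_procs, msg_bytes, eager_limit=4096):
    """Illustrative heuristic: pick a collective algorithm and return
    how many peer connections (e.g., QPs) one process ends up using.
    The eager_limit threshold is made up for illustration only."""
    if msg_bytes <= eager_limit and n_procs > 4:
        # Tree / recursive-doubling style: each process talks to
        # only O(log n) peers.
        algo = "recursive_doubling"
        conns = math.ceil(math.log2(n_procs))
    else:
        # Linear gather: the collecting process opens a connection
        # to every other rank, i.e., n-1 connections.
        algo = "linear"
        conns = n_procs - 1
    return algo, conns

# Example: small messages across 16 ranks favor the log-scaling algorithm.
print(connections_for_allgatherv(16, 1024))   # ('recursive_doubling', 4)
print(connections_for_allgatherv(16, 65536))  # ('linear', 15)
```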

On Fri, Jun 7, 2013 at 8:09 AM, Shamis, Pavel <shamisp_at_[hidden]> wrote:

>
> Does that mean that, if there is an AllGatherV and assuming every process
> belongs to the default communicator, there will be n-1 Queue Pairs between
> the collecting process and the other processes?
> (n = total number of MPI processes.)
>
> The answer depends on multiple parameters, such as the number of processes,
> message size, etc. Some algorithms require O(log(n)) connections, others
> O(n).
> Also, at the OpenIB BTL level we create multiple QPs per rank, not just one.
> To make things even more complicated :-) there are multiple types of QPs,
> such as RC and XRC.
>
> -Pasha
>
>
>
> --
> Joba
>
> On Thu, Jun 6, 2013 at 3:37 PM, Shamis, Pavel <shamisp_at_[hidden]> wrote:
> The default implementation of collectives is based on the PML (p2p layer),
> which is implemented on top of the BTL.
> Consequently it leverages RDMA capabilities to some extent.
>
> Pavel (Pasha) Shamis
> ---
> Computer Science Research Group
> Computer Science and Math Division
> Oak Ridge National Laboratory
>
> On Jun 6, 2013, at 1:59 PM, Jingcha Joba <pukkimonkey_at_[hidden]> wrote:
>
> Hi,
>
> I have a quick question.
>
> Is there an openib (in btl framework) equivalent in coll framework?
>
> I have an MPI application with gatherv and scatterv. I am wondering if I
> can leverage RDMA capabilities of the underlying Infiniband fabric.
>
>
> Thanks,
> --
> Joba
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>