Open MPI Development Mailing List Archives

From: Galen Shipman (gshipman_at_[hidden])
Date: 2007-07-12 12:38:16

On Jul 12, 2007, at 10:29 AM, Don Kerr wrote:

> Through mca parameters one can select the use of shared receive queues
> in the openib btl, other than having fewer queues I am wondering what
> are the benefits of using this option. Can anyone elaborate on using
> them vs. the default?
In the trunk the number of queue pairs is the same regardless of
whether you use SRQ or non-SRQ, henceforth called PP (per-peer).

The difference is that PP receive resources scale with the number of
active QP connections, while SRQ receive resources do not. So the real
difference is the memory footprint of the receive resources: SRQ's is
potentially much smaller. This comes at a cost, however. SRQ has no
flow control, because we cannot reserve resources for a particular
peer, so if all the shared receive resources are consumed while some
peer is still transmitting messages, that peer can get an RNR
(receiver not ready) NAK. This carries a performance penalty, as an
RNR NAK stalls the IB pipeline. With PP, we can guarantee that
resources are available to the peer and thereby avoid RNR. (There is a
bug in the trunk right now where we sometimes get an RNR NAK even with
PP, but this is being worked on.)
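To make the footprint difference concrete, here is a back-of-the-envelope sketch. The peer count, buffer count, and buffer size are made-up illustrative numbers, not actual openib BTL defaults:

```shell
# Hypothetical numbers for illustration only (not real openib BTL defaults):
#   1024 connected peers, 256 receive buffers per queue, 4 KiB per buffer.
peers=1024
bufs_per_queue=256
buf_size=4096

# PP: one dedicated set of receive buffers per peer -> scales with peer count.
pp_bytes=$((peers * bufs_per_queue * buf_size))

# SRQ: one shared pool serving all peers -> independent of peer count.
srq_bytes=$((bufs_per_queue * buf_size))

echo "PP  footprint: $pp_bytes bytes"   # 1 GiB at 1024 peers
echo "SRQ footprint: $srq_bytes bytes"  # 1 MiB regardless of peer count
```

With these (invented) numbers the PP footprint is 1024x the SRQ footprint, which is why SRQ becomes attractive at scale despite the RNR risk.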

I have been working on a modification to the openib BTL which allows
the user to specify SRQ and PP QPs arbitrarily. That is, we can use a
mix of PP and SRQ QPs, each with its own receive buffer sizes. This is
coming into the trunk very soon, perhaps tomorrow, but we need to
verify the branch with some additional testing first.
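As a sketch of what specifying mixed QPs might look like from the command line. The parameter name and the P/S list syntax below are my assumption of how the branch could expose this, not a committed interface:

```shell
# Hypothetical mpirun invocation; the MCA parameter name and value syntax
# are assumptions, not a committed interface. Each colon-separated entry
# describes one QP: "P" = per-peer, "S" = shared, followed by the receive
# buffer size in bytes and the number of buffers for that QP.
mpirun -np 64 \
    --mca btl openib,self \
    --mca btl_openib_receive_queues "P,128,256:S,2048,1024:S,65536,256" \
    ./my_mpi_app
```

The idea is that small messages land on a flow-controlled per-peer QP while larger messages use shared queues, so the footprint savings of SRQ apply where the buffers are biggest.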

I hope this helps. I have a paper at EuroPVM/MPI that discusses much
of this; I will send you a copy off-list.

- Galen
