
Open MPI Development Mailing List Archives

From: Don Kerr (Don.Kerr_at_[hidden])
Date: 2007-07-12 12:54:24


Interesting. So with SRQs there is no flow control; I am guessing the
btl sets some reasonable default but is essentially relying on the user
to adjust other parameters so that the buffers are not overrun.
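(For concreteness, I assume that means something like

    mpirun --mca btl_openib_use_srq 1 ...

plus whatever btl_openib_*rd* parameters size the receive pool. I am
going from memory on the 1.2-era parameter names, so treat that as a
guess; "ompi_info --param btl openib" will list the exact knobs for a
given build.)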

And yes, Galen, I would like to read your paper.
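For anyone who wants to see the mechanics Jeff describes below spelled
out, here is a minimal verbs-level sketch. This is emphatically not the
btl's actual code: the buffer counts and helper names are made up, and
memory registration, connection setup, and error handling are all
omitted.

#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

enum { NUM_BUFS = 256, BUF_SIZE = 4096 };  /* illustrative sizes */

/* One SRQ for the whole job, sized as a single pool rather than
 * per peer. */
static struct ibv_srq *create_shared_rq(struct ibv_pd *pd)
{
    struct ibv_srq_init_attr init;
    memset(&init, 0, sizeof(init));
    init.attr.max_wr  = NUM_BUFS;   /* depth of the shared pool */
    init.attr.max_sge = 1;
    return ibv_create_srq(pd, &init);
}

/* Post the single pool of receive buffers.  Every QP created with
 * its .srq field pointing at this SRQ consumes from the same pool,
 * so the receiver keeps no per-peer count and therefore has no
 * per-peer flow control; overrun shows up as an RNR. */
static int post_pool(struct ibv_srq *srq, struct ibv_mr *mr, char *pool)
{
    int i;
    for (i = 0; i < NUM_BUFS; i++) {
        struct ibv_sge sge;
        struct ibv_recv_wr wr, *bad;
        memset(&wr, 0, sizeof(wr));
        sge.addr   = (uintptr_t)(pool + (size_t)i * BUF_SIZE);
        sge.length = BUF_SIZE;
        sge.lkey   = mr->lkey;
        wr.wr_id   = (uint64_t)i;
        wr.sg_list = &sge;
        wr.num_sge = 1;
        if (ibv_post_srq_recv(srq, &wr, &bad) != 0)
            return -1;
    }
    return 0;
}

If I have that right, the only "flow control" is making NUM_BUFS big
enough, which matches what Jeff says below.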

Jeff Squyres wrote:

>There are a few benefits:
>
>- Remember that with an SRQ you post one big pool of buffers instead of
>num_peers individual sets of receive buffers. With per-peer queues, if
>you post M buffers for each of N peers, each peer -- due to flow
>control -- can only have M outstanding sends at a time. With the shared
>pool a single peer has more than M buffers to receive into, so apps
>sending lots of small messages can get better utilization of buffer
>space.
>
>- You can also post fewer than M*N buffers by playing the statistics
>of your app: if you know that it will never actually have M*N messages
>outstanding at any given time, you can post fewer receive buffers.
>
>- At the same time, there's a problem with flow control (meaning that
>there is none): how can a sender know when it has overflowed the
>receiver (other than an RNR)? So it's not necessarily as safe.
>
>- So if you want to simply eliminate the flow control, choose M high
>enough (or just a total number of receive buffers to post to the SRQ)
>that you won't ever run out of resources and you should see some
>speedup from lack of flow control. This obviously mainly helps apps
>with lots of small messages; it may not help in many other cases.
>
>
>On Jul 12, 2007, at 12:29 PM, Don Kerr wrote:
>
>
>
>>Through MCA parameters one can select the use of shared receive queues
>>in the openib btl. Other than having fewer queues, I am wondering what
>>the benefits of using this option are. Can anyone elaborate on using
>>them vs. the default?