If you use only non-blocking communications, I don't see how the behavior you describe can happen in Open MPI's code. There might be something else going on there.
On Apr 27, 2013, at 00:14 , Stephan Wolf <wolfst_at_[hidden]> wrote:
> I have encountered really bad performance when all the nodes send data
> to all the other nodes. I use Isend and Irecv with multiple
> outstanding sends per node. I debugged the behavior and came to the
> following conclusion: It seems that one sender locks out all other
> senders for one receiver. This sender releases the receiver only when
> there are no more sends posted, or a node with a lower rank wants to
> send to this node (deadlock prevention). As a consequence, node 0
> sends all its data to all nodes while all the others wait, then
> node 1 sends all its data, and so on.
> What is the rationale behind this behaviour and can I change it by
> some MCA parameter?