Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Strange "All-to-All" behavior
From: Number Cruncher (number.cruncher_at_[hidden])
Date: 2013-04-30 17:12:19

This sounds a bit like the alltoallv algorithm change I complained
about when 1.6.1 was released.

Original post:
Everything waits for "rank 0" observation:

Does switching to the older algorithm help?
mpiexec --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_alltoallv_algorithm 1
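
(For anyone trying this: the flags above go on the normal mpiexec command
line. The application name and process count below are only placeholders,
not taken from Stephan's setup.)

mpiexec -n 16 --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_alltoallv_algorithm 1 ./your_app

The same parameters can also be set through the environment, e.g.
OMPI_MCA_coll_tuned_use_dynamic_rules=1 and
OMPI_MCA_coll_tuned_alltoallv_algorithm=1, if editing the launch command
is inconvenient.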


On 26/04/2013 23:14, Stephan Wolf wrote:
> Hi,
> I have encountered really bad performance when all the nodes send data
> to all the other nodes. I use Isend and Irecv with multiple
> outstanding sends per node. I debugged the behavior and came to the
> following conclusion: It seems that one sender locks out all other
> senders for one receiver. This sender releases the receiver only when
> there are no more sends posted or a node with a lower rank wants to
> send to this node (deadlock prevention). As a consequence, node 0
> sends all its data to all nodes, while all others are waiting, then
> node 1 sends all its data, and so on.
> What is the rationale behind this behaviour and can I change it by
> some MCA parameter?
> Thanks
> Stephan
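
For reference, here is a minimal sketch (not Stephan's actual code) of the
kind of nonblocking exchange described above: every rank posts an Irecv
from, and an Isend to, every other rank, then waits on all requests.
Message size, tag and buffer handling are placeholders.

/* alltoall_sketch.c - minimal nonblocking all-to-all exchange sketch.
 * Build:  mpicc -O2 alltoall_sketch.c -o alltoall_sketch
 * Run (optionally forcing the older alltoallv algorithm as above):
 *   mpiexec -n 16 --mca coll_tuned_use_dynamic_rules 1 \
 *                 --mca coll_tuned_alltoallv_algorithm 1 ./alltoall_sketch
 */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK (1 << 20)            /* 1 MiB per peer; placeholder size */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *sendbuf = malloc((size_t)size * CHUNK);
    char *recvbuf = malloc((size_t)size * CHUNK);
    MPI_Request *reqs = malloc(2 * (size_t)size * sizeof(MPI_Request));
    int nreq = 0;

    memset(sendbuf, rank & 0xff, (size_t)size * CHUNK);  /* dummy payload */

    /* Post all receives first, then all sends, one per peer. */
    for (int peer = 0; peer < size; ++peer) {
        if (peer == rank) continue;
        MPI_Irecv(recvbuf + (size_t)peer * CHUNK, CHUNK, MPI_CHAR,
                  peer, 0, MPI_COMM_WORLD, &reqs[nreq++]);
    }
    for (int peer = 0; peer < size; ++peer) {
        if (peer == rank) continue;
        MPI_Isend(sendbuf + (size_t)peer * CHUNK, CHUNK, MPI_CHAR,
                  peer, 0, MPI_COMM_WORLD, &reqs[nreq++]);
    }

    /* All requests outstanding at once; how they are progressed and
     * ordered on the wire is up to the MPI library. */
    MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);

    free(sendbuf);
    free(recvbuf);
    free(reqs);
    MPI_Finalize();
    return 0;
}

Posting the receives before the sends is just the usual way of avoiding
unexpected-message overhead; it does not by itself change the
sender-serialisation behaviour described in the original mail.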