I've looked in more detail at the current two MPI_Alltoallv algorithms
and wanted to raise a couple of ideas.
Firstly, the new default "pairwise" algorithm.
* There is no optimisation for sparse/empty messages, compared to the old
basic "linear" algorithm.
* The attached "pairwise-nop" patch adds this optimisation, and on the
test case I first described in this thread (thousands of small, sparse
all-to-alls), it cuts runtime by approximately 30%
* I think the upper bound on the loop counter for the pairwise exchange
is off by one. As the comment notes, it starts "from 1 since local
exhange [sic] is done"; but in the final iteration (step == size), the
sendto/recvfrom both reduce to rank, and self-exchange is already
handled in earlier code
The pairwise algorithm still kills performance on my gigabit ethernet
network. My message transmission time must be small compared to the
latency, and the comm_size forced-synchronisation steps introduce a
minimum delay of (single_link_latency * comm_size), i.e. latency scales
linearly with comm_size. The linear algorithm doesn't wait for each
exchange, so its minimum latency is just a single transmit/receive.
Which brings me to the second idea. The problem with the existing
implementation of the linear algorithm is that the irecv/isend pattern
is identical on all processes: every process starts by waiting for
process 0 to send to everyone, and every process finishes by waiting
for rank (size-1) to send to everyone.
It seems preferable to at least post the send/recv requests in the same
order as the pairwise algorithm. The attached "linear-alltoallv" patch
implements this and, on my test case, shows a modest 5% improvement.
I was wondering if it would address the concerns which led to the switch
of default algorithm.