Open MPI User's Mailing List Archives


Subject: [OMPI users] MPI_Alltoallv and unknown data send sizes
From: Daniel Spångberg (daniels_at_[hidden])
Date: 2008-09-10 08:46:19

Dear all,

First some background; the real question is at the end of this (longish) message.

I have a problem where I need to exchange data between all processes. The
data is unevenly distributed and I thought at first I could use
MPI_Alltoallv to transfer the data. However, in my case, the receivers do
not know how many data items the senders will send, but it is relatively
easy to set up so the receiver can figure out the maximum number of items
the sender will send, so I set the recvcounts to the maximum possible, and
the sendcounts to the actual number of elements (smaller than recvcounts).

The MPI Forum description of the routine reads:

MPI_ALLTOALLV(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts,
rdispls, recvtype, comm)
IN sendbuf starting address of send buffer (choice)
IN sendcounts integer array equal to the group size specifying the number
of elements to send to each processor
IN sdispls integer array (of length group size). Entry j specifies the
displacement (relative to sendbuf) from which to take the outgoing data
destined for process j
IN sendtype data type of send buffer elements (handle)
OUT recvbuf address of receive buffer (choice)
IN recvcounts integer array equal to the group size specifying the number
of elements that can be received from each processor
IN rdispls integer array (of length group size). Entry i specifies the
displacement (relative to recvbuf) at which to place the incoming data
from process i
IN recvtype data type of receive buffer elements (handle)
IN comm communicator (handle)

In particular, the wording for recvcounts is "the number of elements that
can be received from each processor", which does not say that this must
exactly match the number of elements sent.

It also mentions that it should behave like a set of independent
MPI_Send/MPI_Recv calls. In the point-to-point case, the amount of data
sent does not need to exactly match the receive count.

I, unfortunately, missed the following:

The type signature associated with sendcounts[j], sendtypes[j] at process
i must be equal to the type signature associated with recvcounts[i],
recvtypes[i] at process j. This implies that the amount of data sent must
be equal to the amount of data received, pairwise between every pair of
processes. Distinct type maps between sender and receiver are still
allowed.
And the Open MPI man page says:
        When a pair of processes exchanges data, each may pass different
        element count and datatype arguments so long as the sender
        specifies the same amount of data to send (in bytes) as the
        receiver expects to receive.

I did test my program with differing send/recv counts, and while it
sometimes works, sometimes it does not. Even if it always worked I would
not be comfortable relying on it.

The question is: if there is no way of determining, on the receiving end,
the length of the data sent by the sender, I see two options: either
always transmit the maximum amount of data using MPI_Alltoall(v), or cook
up my own routine based on point-to-point calls (MPI_Sendrecv is probably
the best option). Am I missing something?

Daniel Spångberg
Materials Chemistry
Uppsala University