MPI_Alltoallw(3) man page (version 1.2.9)

NAME

       MPI_Alltoallw  -  All  processes  send  data of different types to, and
       receive data of different types from, all processes

SYNTAX


C Syntax

       #include <mpi.h>
       int MPI_Alltoallw(void *sendbuf, int *sendcounts,
            int *sdispls, MPI_Datatype *sendtypes,
            void *recvbuf, int *recvcounts,
            int *rdispls, MPI_Datatype *recvtypes, MPI_Comm comm)

Fortran Syntax

       INCLUDE 'mpif.h'
       MPI_ALLTOALLW(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPES,
            RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPES, COMM, IERROR)

            <type>    SENDBUF(*), RECVBUF(*)
            INTEGER   SENDCOUNTS(*), SDISPLS(*), SENDTYPES(*)
            INTEGER   RECVCOUNTS(*), RDISPLS(*), RECVTYPES(*)
            INTEGER   COMM, IERROR

C++ Syntax

       #include <mpi.h>
       void MPI::Comm::Alltoallw(const void* sendbuf,
            const int sendcounts[], const int sdispls[],
            const MPI::Datatype sendtypes[], void* recvbuf,
            const int recvcounts[], const int rdispls[],
            const MPI::Datatype recvtypes[])

INPUT PARAMETERS

       sendbuf     Starting address of send buffer.

       sendcounts  Integer array, where entry i specifies the number  of  ele-
                   ments to send to rank i.

       sdispls     Integer array, where entry i specifies the displacement (in
                   bytes, offset from sendbuf) from which to send data to rank
                   i.

       sendtypes   Datatype array, where entry i specifies the datatype to use
                   when sending data to rank i.

       recvcounts  Integer array, where entry j specifies the number  of  ele-
                   ments to receive from rank j.

       rdispls     Integer array, where entry j specifies the displacement (in
                   bytes, offset from recvbuf)  to  which  data  from  rank  j
                   should be written.

       recvtypes   Datatype array, where entry j specifies the datatype to use
                   when receiving data from rank j.

       comm        Communicator over which data is to be exchanged.

OUTPUT PARAMETERS

       recvbuf     Address of receive buffer.

       IERROR      Fortran only: Error status (integer).

DESCRIPTION

       MPI_Alltoallw is a generalized collective operation in which  all  pro-
       cesses  send data to and receive data from all other processes. It adds
       flexibility to MPI_Alltoallv  by  allowing  the  user  to  specify  the
       datatype  of  individual  data  blocks (in addition to displacement and
       element count). Its operation can be thought of in the  following  way,
       where each process performs 2n (n being the number of processes in com-
       municator comm) independent  point-to-point  communications  (including
       communication with itself).

            MPI_Comm_size(comm, &n);
            for (i = 0; i < n; i++)
                MPI_Send((char *) sendbuf + sdispls[i], sendcounts[i],
                    sendtypes[i], i, ..., comm);
            for (i = 0; i < n; i++)
                MPI_Recv((char *) recvbuf + rdispls[i], recvcounts[i],
                    recvtypes[i], i, ..., comm);

       Process j sends the k-th block of its local sendbuf to process k, which
       places the data in the j-th block of its local recvbuf.

       When a pair of processes exchanges data, each may pass  different  ele-
       ment  count  and datatype arguments so long as the sender specifies the
       same amount of data to send (in  bytes)  as  the  receiver  expects  to
       receive.

       Note  that  process  i may send a different amount of data to process j
       than it receives from process j. Also, a process may send entirely dif-
       ferent amounts and types of data to different processes in the communi-
       cator.
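
       The following self-contained program is a minimal sketch (not part of
       the original text; all variable names are local to the example) of a
       typical call: every rank sends one distinct int to every rank, includ-
       ing itself.  Because MPI_Alltoallw takes displacements in bytes, the
       offsets are scaled by sizeof(int).

            #include <mpi.h>
            #include <stdlib.h>

            int main(int argc, char *argv[])
            {
                int i, n, rank;

                MPI_Init(&argc, &argv);
                MPI_Comm_size(MPI_COMM_WORLD, &n);
                MPI_Comm_rank(MPI_COMM_WORLD, &rank);

                int *sendbuf = malloc(n * sizeof(int));
                int *recvbuf = malloc(n * sizeof(int));
                int *sendcounts = malloc(n * sizeof(int));
                int *recvcounts = malloc(n * sizeof(int));
                int *sdispls = malloc(n * sizeof(int));
                int *rdispls = malloc(n * sizeof(int));
                MPI_Datatype *sendtypes = malloc(n * sizeof(MPI_Datatype));
                MPI_Datatype *recvtypes = malloc(n * sizeof(MPI_Datatype));

                for (i = 0; i < n; i++) {
                    /* one int for each peer; displacements are in bytes */
                    sendbuf[i] = rank * 100 + i;
                    sendcounts[i] = recvcounts[i] = 1;
                    sdispls[i] = rdispls[i] = i * (int) sizeof(int);
                    sendtypes[i] = recvtypes[i] = MPI_INT;
                }

                MPI_Alltoallw(sendbuf, sendcounts, sdispls, sendtypes,
                              recvbuf, recvcounts, rdispls, recvtypes,
                              MPI_COMM_WORLD);

                /* recvbuf[j] now holds the value rank j sent to this rank */

                MPI_Finalize();
                return 0;
            }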

WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR

       When the communicator is an inter-communicator, the exchange occurs in
       both directions at once.  Data is sent from all the members of the
       first group to all the members of the second group, and at the same
       time data is sent from all the members of the second group to all the
       members of the first.  The operation exhibits a symmetric, full-duplex
       behavior.

       MPI_Alltoallw is not a rooted operation, so there is no root argument.
       On an inter-communicator, the count, displacement, and datatype arrays
       are indexed by the ranks of the remote group: entry i describes the
       data exchanged with process i of the other group.

       When the communicator is an intra-communicator, these groups are the
       same, and the exchange takes place among all the processes of the com-
       municator, as described above.
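
       As an illustration of the inter-communicator case, the following
       sketch (not part of the original text; it assumes at least two pro-
       cesses, and all variable names are local to the example) builds an
       inter-communicator from two halves of MPI_COMM_WORLD and exchanges one
       int with every process of the remote group:

            #include <mpi.h>
            #include <stdlib.h>

            int main(int argc, char *argv[])
            {
                int i, world_rank, world_size, remote_n;
                MPI_Comm local, inter;

                MPI_Init(&argc, &argv);
                MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
                MPI_Comm_size(MPI_COMM_WORLD, &world_size);

                /* lower half of the ranks forms group 0, upper half group 1 */
                int color = (world_rank < world_size / 2) ? 0 : 1;
                MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &local);

                /* local leader is rank 0 of each half; the remote leader is
                   the world rank of the other half's leader */
                int remote_leader = (color == 0) ? world_size / 2 : 0;
                MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, remote_leader,
                                     99, &inter);

                /* argument arrays are sized and indexed by the remote group */
                MPI_Comm_remote_size(inter, &remote_n);

                int *sendbuf = malloc(remote_n * sizeof(int));
                int *recvbuf = malloc(remote_n * sizeof(int));
                int *counts  = malloc(remote_n * sizeof(int));
                int *displs  = malloc(remote_n * sizeof(int));
                MPI_Datatype *types = malloc(remote_n * sizeof(MPI_Datatype));

                for (i = 0; i < remote_n; i++) {
                    sendbuf[i] = world_rank;            /* one int per peer */
                    counts[i]  = 1;
                    displs[i]  = i * (int) sizeof(int); /* byte offsets */
                    types[i]   = MPI_INT;
                }

                MPI_Alltoallw(sendbuf, counts, displs, types,
                              recvbuf, counts, displs, types, inter);

                /* recvbuf[j] now holds the world rank of remote process j */

                MPI_Finalize();
                return 0;
            }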

NOTES

       The MPI_IN_PLACE option is not available for  any  form  of  all-to-all
       communication.

       The  specification of counts, types, and displacements should not cause
       any location to be written more than once.

       All arguments on all processes are significant. The comm argument, in
       particular, must describe the same communicator on all processes.

ERRORS

       Almost all MPI routines return an error value; C routines as the value
       of the function and Fortran routines in the last argument. C++ func-
       tions do not return errors. If the default error handler is set to
       MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism
       will be used to throw an MPI::Exception object.

       Before  the  error  value is returned, the current MPI error handler is
       called. By default, this error handler aborts the MPI job,  except  for
       I/O   function   errors.   The   error  handler  may  be  changed  with
       MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN
       may  be  used  to cause error values to be returned. Note that MPI does
       not guarantee that an MPI program can continue past an error.
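
       For example (a sketch, not part of the original text, assuming the
       buffers and argument arrays have been set up as in the DESCRIPTION
       section above), the predefined MPI_ERRORS_RETURN handler can be
       selected on the communicator so that the return code of the call can
       be checked:

            MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
            int err = MPI_Alltoallw(sendbuf, sendcounts, sdispls, sendtypes,
                                    recvbuf, recvcounts, rdispls, recvtypes,
                                    MPI_COMM_WORLD);
            if (err != MPI_SUCCESS) {
                /* report the failure; MPI does not guarantee that the
                   program can continue past this point */
            }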

SEE ALSO

       MPI_Alltoall
       MPI_Alltoallv

Open MPI 1.2                    September 2006         MPI_Alltoallw(3OpenMPI)
