Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] send and receive buffer the same on root
From: Richard Treumann (treumann_at_[hidden])
Date: 2010-09-16 13:36:23


You are depending on luck. The MPI standard allows an implementation to
assume that send and receive buffers are distinct unless MPI_IN_PLACE is
used. Any MPI implementation may have more than one algorithm for a given
collective communication operation, and the policy for switching between
algorithms is not documented.

It is entirely possible that something like going from 32 to 64 processes
or changing interconnects will cause a different algorithm to be used.
Applying a patch could also cause the algorithm to be changed.

In theory, one algorithm could let you get away with the violation while
another trips on it, and a change you do not even realize you made could
cause bad answers to show up. Perhaps some algorithm uses space in the
receive buffer as scratch.

Standards compliant code is safer.


Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

Tom Rosmond <rosmond_at_[hidden]>
09/16/2010 12:05 PM
[OMPI users] send and receive buffer the same on root
I am working with a Fortran 90 code with many MPI calls like this:

call mpi_gatherv(x,nsize(rank+1),

'x' is allocated on root to be large enough to hold the results of the
gather, the other arrays and parameters are defined correctly, and the code
runs as it should. However, I am concerned that having the same send
and receive buffer on root is a violation of the MPI standard. Am I
correct? I am aware of the MPI_IN_PLACE feature that can be used in
this situation, by passing it as the send buffer on root.
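
For reference, a compliant root-side call might look like the sketch below. This is only an illustration, not code from the application: the names x, nsize, displs, the MPI_REAL8 datatype, and the assembled argument list are all assumptions filled in around the truncated fragment above, and error checking is omitted.

```fortran
! Sketch: on root, MPI_IN_PLACE replaces the send buffer, and the
! send count/type arguments are ignored; root's own contribution is
! assumed to already sit in its slot of the receive buffer 'x'.
if (rank == root) then
   call mpi_gatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,           &
                    x, nsize, displs, MPI_REAL8,                  &
                    root, MPI_COMM_WORLD, ierr)
else
   ! Non-root ranks pass their send buffer as usual; the receive
   ! arguments are ignored on non-root ranks.
   call mpi_gatherv(x, nsize(rank+1), MPI_REAL8,                  &
                    x, nsize, displs, MPI_REAL8,                  &
                    root, MPI_COMM_WORLD, ierr)
end if
```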

The fact that the code as written seems to work on most systems we run on
(some with Open MPI, some with proprietary MPIs) suggests that, in spite
of the standard, implementations allow it. Is this correct, or are we
just lucky?

T. Rosmond
