
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI_Brecv vs multiple MPI_Irecv
From: Richard Treumann (treumann_at_[hidden])
Date: 2008-08-27 14:52:17


Hi Robert

Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363

users-bounces_at_[hidden] wrote on 08/27/2008 11:55:58 AM:

<< snip >>
>
> However from an application point of view I see an odd result here.
> On the sender side I can use a buffered send to "queue" messages for
> delivery and decide how many messages my buffer should contain
> before the MPI_Bsend blocks if it's running out of space.
The BSEND should not be assumed to block if the attached buffer is full.
It may instead raise an error, which is fatal by default. The user is
expected to ensure he does not stack up more BSEND data than his attached
buffer can hold. MPI provides guidelines for doing this, and it is not
hard to get right.
>
> On the receiving side I have no control over the number of messages
> that MPI can buffer. This is basically left to the implementation
> details as you very well described. Shouldn't the user be allowed to
> specify a memory space to buffer messages on the receiving side,
> just like on the send?
On the receive side, the buffer space could be filled by messages from
multiple sources, so managing it in the application can be very complex,
and a semantic that says the MPI job can fail if the buffer overflows is
pretty nasty when it is so hard for applications to prevent overflow.

So, the MPI Forum chose to require that the MPI_SEND only ship data
eagerly if it "knows" there is space in the libmpi managed buffer at
the destination. If the MPI_SEND side cannot "know" there is space at
the destination, the SEND is required to block until a matching
receive is posted.

The MPI Forum concluded there are so many options in how an MPI
implementation might do this efficiently that there is no reasonable,
portable way to let the user control it. Also, an amount of buffer a user
might decide to set aside on one platform may be most of the memory on
another, so the application will not port well to the second platform.
User interest in having control was recognized as MPI was developed, but
the Forum decided it would cause more potential for harm than it was
worth.

Many MPI implementations have a default buffer and some nonstandard way to
specify how much buffer is to be provided if you do not like the default.
Typically there is an environment variable.
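With Open MPI, for example, these nonstandard knobs are MCA parameters rather than a single environment variable; the exact parameter names vary by version and transport, so check what your build supports. A sketch (`./my_app` is a placeholder):

```shell
# List buffering-related parameters for the TCP transport
ompi_info --param btl tcp

# Raise the eager-send threshold for the TCP BTL
# (parameter name may differ across Open MPI versions/transports)
mpirun --mca btl_tcp_eager_limit 65536 ./my_app
```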
>
<< snip >>