Can you use non-blocking send/recv? It sounds like this will give you
the completion semantics you want.
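To make that suggestion concrete, here is a hedged sketch (not from the original thread) of the pattern: the master posts one MPI_Isend per slave and polls each request with MPI_Test, so it learns which buffer is free without ever blocking on a single one. Buffer size, tag, rank layout, and the termination condition are all assumptions for illustration.

```c
#include <mpi.h>

#define NSLAVES  2
#define TASK_LEN 1024
#define TAG_TASK 0

void master_loop(int ntasks)
{
    double      buf[NSLAVES][TASK_LEN];
    MPI_Request req[NSLAVES];
    int i, sent = 0;

    /* Post the first task for each slave (slaves assumed at ranks 1..NSLAVES). */
    for (i = 0; i < NSLAVES; i++) {
        /* ... fill buf[i] with task data ... */
        MPI_Isend(buf[i], TASK_LEN, MPI_DOUBLE, i + 1, TAG_TASK,
                  MPI_COMM_WORLD, &req[i]);
        sent++;
    }

    while (sent < ntasks) {
        for (i = 0; i < NSLAVES && sent < ntasks; i++) {
            int done = 0;
            /* MPI_Test returns immediately; `done` tells us whether
             * buf[i] may be reused for the next task.  If this buffer
             * is still in flight, we simply move on to the other one. */
            MPI_Test(&req[i], &done, MPI_STATUS_IGNORE);
            if (done) {
                /* ... refill buf[i] with the next task ... */
                MPI_Isend(buf[i], TASK_LEN, MPI_DOUBLE, i + 1, TAG_TASK,
                          MPI_COMM_WORLD, &req[i]);
                sent++;
            }
        }
    }

    /* Drain the last in-flight sends before returning. */
    MPI_Waitall(NSLAVES, req, MPI_STATUSES_IGNORE);
}
```

The key property is that MPI_Test gives per-request, non-blocking completion information, which is exactly what the two-buffer scheme below needs and what MPI_Win_complete cannot provide.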
On 2/24/11 6:07 AM, Toon Knapen wrote:
> In that case, I have a small question concerning design:
> Suppose task-based parallelism where one node (master) distributes
> work/tasks to 2 other nodes (slaves) by means of an MPI_Put. The master
> allocates 2 buffers locally in which it will store all necessary data
> that is needed by the slave to perform the task. So I do an MPI_Put on
> each of my 2 buffers to send each buffer to a specific slave. Now I need
> to know when I can reuse one of my buffers to already store the next
> task (that I will MPI_Put later on). The only way to know this is to
> call MPI_Win_complete. But since this call is blocking, if this buffer
> is not yet ready to be reused, I cannot check (in the same thread)
> whether the other buffer is already available to me again.
> I would very much appreciate input on how to solve this issue!
> thanks in advance,
> On Tue, Feb 22, 2011 at 7:21 PM, Barrett, Brian W
> <bwbarre_at_[hidden]> wrote:
> On Feb 18, 2011, at 8:59 AM, Toon Knapen wrote:
> > (Probably this issue has been discussed at length before but
> unfortunately I did not find any threads (on this site or anywhere
> else) on this topic, if you are able to provide me with links to
> earlier discussions on this topic, please do not hesitate)
> > Is there an alternative to MPI_Win_complete that does not
> 'enforce completion of preceding RMA calls at the origin' (as stated
> on page 353 of the MPI-2.2 standard)?
> > I would like to know if I can reuse the buffer I gave to MPI_Put
> without blocking on it; if the MPI library is still using it, I want
> to be able to continue (and use another buffer).
> There is not. MPI_Win_complete is the only way to finish an
> MPI_Win_start epoch, and it always blocks until local completion
> of all messages started during the epoch.
> Brian W. Barrett
> Dept. 1423: Scalable System Software
> Sandia National Laboratories
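For reference, a hedged sketch of the access-epoch pattern Brian describes (window creation, group setup, and buffer layout are assumed to exist elsewhere; this is an illustration of the semantics, not a complete program):

```c
#include <mpi.h>

/* One task transfer via an MPI_Win_start access epoch.  The names
 * `slave_group`, `win`, `buf`, `len`, and `slave_rank` are assumptions
 * standing in for the setup described in the thread. */
void put_task(MPI_Win win, MPI_Group slave_group,
              double *buf, int len, int slave_rank)
{
    MPI_Win_start(slave_group, 0, win);        /* open the access epoch */

    MPI_Put(buf, len, MPI_DOUBLE,              /* origin buffer         */
            slave_rank, 0, len, MPI_DOUBLE,    /* target rank/disp      */
            win);

    /* The only way to close an epoch opened with MPI_Win_start.  It
     * blocks until every put issued in the epoch is locally complete,
     * so `buf` cannot be reused, or even tested for reuse, any
     * earlier.  This is the blocking behavior the question is about. */
    MPI_Win_complete(win);
}
```

Because MPI-2.2 offers no non-blocking or testable counterpart to MPI_Win_complete, the non-blocking MPI_Isend/MPI_Test approach suggested above is the usual way to get per-buffer completion information.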
> users mailing list
> users_at_[hidden]