This page is part of a frozen web archive of this mailing list.
You can still navigate around this archive, but know that no new mails
have been added to it since July of 2016.
On Feb 18, 2011, at 8:59 AM, Toon Knapen wrote:
> (This issue has probably been discussed at length before, but unfortunately I did not find any threads on this topic, either on this site or elsewhere. If you can provide links to earlier discussions, please do not hesitate.)
> Is there an alternative to MPI_Win_complete that does not 'enforce completion of preceding RMA calls at the origin' (as stated on page 353 of the MPI-2.2 standard)?
> I would like to know whether I can reuse the buffer I passed to MPI_Put without blocking on it: if the MPI library is still using that buffer, I want to be able to continue and use another buffer.
There is not. MPI_Win_complete is the only way to finish an MPI_Win_start epoch, and it always blocks until all messages started during the epoch have completed locally.
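For reference, a minimal sketch of the general active target pattern being discussed, assuming two ranks (rank 0 as the origin, rank 1 exposing the window); buffer names and sizes here are illustrative, not from the original thread:

```c
/* Sketch: MPI_Win_start / MPI_Put / MPI_Win_complete.
 * Run with: mpirun -np 2 ./a.out
 * Rank 1 exposes a window; rank 0 puts data into it. */
#include <mpi.h>
#include <stdio.h>

#define N 4  /* illustrative buffer size */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int winbuf[N] = {0};
    MPI_Win win;
    MPI_Win_create(winbuf, N * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Group world_group, peer_group;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    if (rank == 0) {
        int target = 1;
        MPI_Group_incl(world_group, 1, &target, &peer_group);

        int srcbuf[N] = {1, 2, 3, 4};
        MPI_Win_start(peer_group, 0, win);   /* open access epoch */
        MPI_Put(srcbuf, N, MPI_INT,
                target, 0, N, MPI_INT, win);
        /* MPI_Win_complete blocks until local completion of the
         * MPI_Put; there is no non-blocking way to end the epoch,
         * so srcbuf may only be reused after this call returns. */
        MPI_Win_complete(win);
    } else if (rank == 1) {
        int origin = 0;
        MPI_Group_incl(world_group, 1, &origin, &peer_group);
        MPI_Win_post(peer_group, 0, win);    /* open exposure epoch */
        MPI_Win_wait(win);                   /* data has arrived */
        printf("rank 1 received: %d %d %d %d\n",
               winbuf[0], winbuf[1], winbuf[2], winbuf[3]);
    }

    MPI_Group_free(&peer_group);
    MPI_Group_free(&world_group);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Until MPI_Win_complete returns, the origin buffer passed to MPI_Put is still owned by the library; the only way to overlap further work, as the question hints, is to use another buffer (and typically another epoch) rather than to end the current epoch without blocking.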
Brian W. Barrett
Dept. 1423: Scalable System Software
Sandia National Laboratories