Subject: Re: [OMPI users] nonblock alternative to MPI_Win_complete
From: James Dinan (dinan_at_[hidden])
Date: 2011-02-24 08:05:50


Hi Toon,

Can you use non-blocking send/recv? It sounds like that would give you
the per-buffer completion semantics you want.
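
For example, here is a rough sketch of that scheme with two buffers,
using MPI_Isend plus MPI_Waitany; the ranks, tag, and buffer size are
made up for illustration, and each slave is assumed to post a matching
receive:

/* Master side: distribute tasks from two buffers with non-blocking
 * sends, reusing whichever buffer completes first. */
#include <mpi.h>

#define BUF_SIZE 1024   /* illustrative task size */

int main(int argc, char **argv)
{
    double buf[2][BUF_SIZE];
    MPI_Request req[2];
    int which;

    MPI_Init(&argc, &argv);

    /* Fill and send the first task to each slave (ranks 1 and 2). */
    for (int i = 0; i < 2; i++) {
        /* ... fill buf[i] with task data ... */
        MPI_Isend(buf[i], BUF_SIZE, MPI_DOUBLE, i + 1, 0,
                  MPI_COMM_WORLD, &req[i]);
    }

    /* Returns as soon as *either* send completes locally; 'which'
     * names the buffer that may be refilled with the next task.
     * MPI_Testany is the non-blocking variant of this check. */
    MPI_Waitany(2, req, &which, MPI_STATUS_IGNORE);
    /* ... refill buf[which] and MPI_Isend the next task ... */

    MPI_Finalize();
    return 0;
}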

Best,
  ~Jim.

On 2/24/11 6:07 AM, Toon Knapen wrote:
> In that case, I have a small question concerning design:
> Suppose task-based parallelism where one node (master) distributes
> work/tasks to 2 other nodes (slaves) by means of an MPI_Put. The master
> allocates 2 buffers locally in which it will store all necessary data
> that is needed by the slave to perform the task. So I do an MPI_Put on
> each of my 2 buffers to send each buffer to a specific slave. Now I
> need to know when I can reuse one of my buffers to store the next task
> (which I will MPI_Put later on). The only way to know this is to call
> MPI_Win_complete. But since that call is blocking, if one buffer is not
> yet ready to be reused I cannot even check (in the same thread) whether
> the other buffer is already available to me again.
> I would very much appreciate input on how to solve such an issue!
> thanks in advance,
> toon
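
For reference, the pattern described above looks roughly like this (a
sketch only: window creation is omitted, and 'slave_group', 'win', 'n',
and the target ranks are illustrative). It shows why completion can
only be observed for the epoch as a whole, never per buffer:

MPI_Win_start(slave_group, 0, win);   /* open the access epoch       */
MPI_Put(buf[0], n, MPI_DOUBLE, 1, 0, n, MPI_DOUBLE, win); /* slave 1 */
MPI_Put(buf[1], n, MPI_DOUBLE, 2, 0, n, MPI_DOUBLE, win); /* slave 2 */
MPI_Win_complete(win);                /* blocks until BOTH puts are
                                         locally complete; only then
                                         may either buffer be reused  */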
> On Tue, Feb 22, 2011 at 7:21 PM, Barrett, Brian W
> <bwbarre_at_[hidden]> wrote:
>
> On Feb 18, 2011, at 8:59 AM, Toon Knapen wrote:
>
> > (This issue has probably been discussed at length before, but
> > unfortunately I did not find any threads on this topic, on this site
> > or anywhere else. If you can provide links to earlier discussions,
> > please do not hesitate.)
> >
> > Is there an alternative to MPI_Win_complete that does not 'enforce
> > completion of preceding RMA calls at the origin' (as stated on page
> > 353 of the MPI-2.2 standard)?
> >
> > I would like to know if I can reuse the buffer I gave to MPI_Put
> > without blocking on it; if the MPI library is still using it, I want
> > to be able to continue (and use another buffer).
>
>
> There is not. MPI_Win_complete is the only way to finish an
> MPI_Win_start epoch, and it always blocks until all RMA operations
> started during the epoch have completed locally.
>
> Brian
>
> --
> Brian W. Barrett
> Dept. 1423: Scalable System Software
> Sandia National Laboratories