Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] nonblocking send/receive question
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-10-12 13:12:06


The code you showed was incorrect -- you were waiting on an uninitialized variable. Perhaps that code was only a snippet...?
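
For reference, here's a minimal sketch of the corrected pattern, using the sendData / receiveData buffers, DATA_SIZE, and tag 19 from your earlier mail (the request and status variables are declared locally here just for illustration). The point is that each rank only waits on a request that it actually posted:

    if (myrank == 0) {
        MPI_Request request;
        MPI_Status  status;
        /* Post the non-blocking receive, then wait on the request it filled in */
        MPI_Irecv(receiveData, DATA_SIZE, MPI_DOUBLE, 1, 19, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    } else if (myrank == 1) {
        MPI_Request request;
        MPI_Status  status;
        /* Non-blocking send; waiting is valid here because MPI_Isend filled in the request */
        MPI_Isend(sendData, DATA_SIZE, MPI_DOUBLE, 0, 19, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }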

On Oct 12, 2010, at 8:00 AM, Ed Peddycoart wrote:

> Actually, that wasn't the problem. My code is working now with no changes to it. Not sure what the problem was, but it wasn't the call to MPI_Send blocking.
> Ed
>
>
> From: users-bounces_at_[hidden] on behalf of Jeff Squyres
> Sent: Tue 10/12/2010 6:52 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] nonblocking send/receive question
>
> On Oct 11, 2010, at 1:29 PM, Bowen Zhou wrote:
>
> > Try MPI_Isend?
>
> 'zactly correct.
>
> You currently have an MPI_Wait on the sender side for no reason -- the request is only filled in on the receiver. So you're waiting on an uninitialized variable on the sender.
>
> MPI_Send is a "blocking" send. MPI_Isend is a non-blocking send.
> MPI_Recv is a blocking receive. MPI_Irecv is a non-blocking receive.
>
> MPI_Send is more-or-less equivalent to MPI_Isend immediately followed by an MPI_Wait on the corresponding request. Ditto with MPI_Recv / MPI_Irecv.
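>
> To make that concrete, here's a rough sketch of that equivalence for your send (same buffer, count, and tag as in your snippet; req and stat are declared locally for the example):
>
>     MPI_Request req;
>     MPI_Status  stat;
>     /* Behaves more-or-less like MPI_Send( sendData, DATA_SIZE, MPI_DOUBLE, 0, 19, MPI_COMM_WORLD ) */
>     MPI_Isend( sendData, DATA_SIZE, MPI_DOUBLE, 0, 19, MPI_COMM_WORLD, &req );
>     MPI_Wait( &req, &stat );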
>
>
> >
> >> I have a GLUT application I am trying to add MPI to. In the display callback, for rank >= 1, I want to send data to the rank 0 process. I am not concerned at this point about sending data from the rank 0 process back to the rank >= 1 processes, so my data flows in one direction only. I would like to do this with non-blocking send/receive, but I am not having much success.
> >> Within my display callback I do the following:
> >> if( myrank == 0 ) {
> >>     MPI_Irecv( receiveData, DATA_SIZE, MPI_DOUBLE, 1, 19, MPI_COMM_WORLD, &request );
> >>     MPI_Wait( &request, &status );
> >> }
> >> else if( myrank == 1 ) {
> >>     /* Post a receive, send a message, then wait */
> >>     MPI_Send( sendData, DATA_SIZE, MPI_DOUBLE, 0, 19, MPI_COMM_WORLD );
> >>     MPI_Wait( &request, &status );
> >> }
> >> But it appears that the app is still blocking after the MPI_Send.... (I have various debug prints in the actual code; this is stripped down for ease of reading.) A sample app that I have that does this works... Is doing this from the GLUT display callback causing the problem? Any suggestions would be greatly appreciated.
> >> Thanks,
> >> Ed
>
>
> --
> Jeff Squyres
> jsquyres_at_[hidden]
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/