Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] MPI Blocking Routines and Memory Leaks
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-03-25 14:20:49


On Mar 25, 2009, at 9:55 AM, Simon Köstlin wrote:

> I'm new to MPI and I have a question about blocking routines like
> the Send and Wait functions. I wrote a parallel program that uses
> both the blocking Send and the non-blocking Isend function. My
> question: if I send something with the blocking Send function, it
> should block the process until the other process has received the
> message. I think that works so far. But I'm also sending a message
> to the process itself, and my program doesn't block. So does MPI
> not block if I send a message to the same process from which I'm
> sending, even though it is a blocking routine? The same happens if
> I send with a non-blocking Isend and do a request.Wait() on the
> send request after each send operation: it doesn't block when I
> send the message to the process itself. I'm wondering about this
> because the matching Recv only happens after all messages have been
> sent. It's fine that it works, because for simplicity I need to
> send a message to the process itself. I'm just wondering why it works.

Eugene gave pretty good answers here. The only thing I have to add is
that if you need a guarantee that the receiver has *started* to
receive the message before the sender returns, you can use MPI_SSEND
(synchronous send). It does not guarantee that the receiver has fully
received all the data -- it only guarantees that the receiver has
posted a matching MPI receive and begun receiving.
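
As an illustration (this sketch is not from the original thread), here
is a minimal C self-send: a matching non-blocking receive is posted
first so that the synchronous MPI_Ssend can complete. With plain
MPI_Send, a small message to yourself usually returns right away
because it is buffered; MPI_Ssend would deadlock here without the
pre-posted MPI_Irecv.

/* Minimal sketch: sending a message to yourself with MPI_Ssend.
 * Post the receive before the synchronous send, otherwise the
 * send to self cannot complete. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, sendval = 42, recvval = 0;
    MPI_Request rreq;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Post the receive from ourselves first. */
    MPI_Irecv(&recvval, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &rreq);

    /* MPI_Ssend returns only after the matching receive has been
       posted and started -- not necessarily after all data arrived. */
    MPI_Ssend(&sendval, 1, MPI_INT, rank, 0, MPI_COMM_WORLD);

    MPI_Wait(&rreq, MPI_STATUS_IGNORE);
    printf("rank %d received %d from itself\n", rank, recvval);

    MPI_Finalize();
    return 0;
}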

> Another question I have is about a memory leak. I got a heavy memory
> leak when I did not do a request.Wait() on the send request before
> the next Isend and did not wait until the last Isend operation
> completed. But all messages arrived whether I did the request.Wait()
> or not. Now I'm doing a request.Wait() before each Isend, and my
> memory usage isn't increasing much, but it still grows a bit.

I'm not quite sure what you're saying here. For every non-blocking
send/receive, you must do a corresponding wait or test (assuming the
test indicates that the request completed). So if you're re-using the
same request handle for the next ISEND, you're effectively orphaning
the previous request, and MPI may never completely free the resources
associated with it (because you didn't call a test or wait on it).

So you don't have to wait for all pending non-blocking actions before
you invoke your next non-blocking communication; you just need a
distinct request for each pending action.
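
Here's a minimal sketch of that pattern (illustrative only; the
function name and message count are made up, and the destination rank
is assumed to post matching receives): one MPI_Request per outstanding
MPI_Isend, all completed with a single MPI_Waitall, instead of reusing
one request handle and orphaning the old requests.

/* Minimal sketch: keep one request per pending MPI_Isend and
 * complete every one of them so the library can release (or
 * recycle) the resources behind each request. */
#include <mpi.h>

#define NMSG 100

void send_many(int dest, MPI_Comm comm)
{
    int bufs[NMSG];
    MPI_Request reqs[NMSG];   /* one request per pending send */

    for (int i = 0; i < NMSG; i++) {
        bufs[i] = i;
        MPI_Isend(&bufs[i], 1, MPI_INT, dest, /* tag */ i, comm,
                  &reqs[i]);
    }

    /* Every request is eventually completed; dest is assumed to
       post NMSG matching receives. */
    MPI_Waitall(NMSG, reqs, MPI_STATUSES_IGNORE);
}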

> Do I have to do something else for the blocking Send function? And
> is there a function in MPI to clean up its buffers in a running
> application without calling Finalize?

No. All MPI applications must call MPI_Finalize.

Open MPI caches some of its internal data structures via freelists,
such that if you do a non-blocking send and then test/wait to complete
it, we don't necessarily free everything -- we put it on a list so
that we can reuse it for your *next* non-blocking communication. Make
sense?

-- 
Jeff Squyres
Cisco Systems