
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Memory leak in my code
From: Mark Allan (Mark_Allan_1978_at_[hidden])
Date: 2009-02-26 13:21:29


Eugene / Dorian,

Thanks for the advice. I hadn't appreciated that the first send must be
explicitly completed with an MPI call. I had assumed that once the second
receive completed, the first send must also have completed and all would be
fine. In any case, I'm now using MPI_Probe to eliminate the first send
entirely (thanks, Eugene).
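[For reference, one way to explicitly complete both sends is to keep both
requests and wait on them together. This is a sketch, not code from the
thread, and the function name is made up:]

```cpp
#include <mpi.h>

// Hypothetical sketch: a sized send that completes BOTH nonblocking sends.
// Waiting on each request releases the resources MPI allocates per Isend;
// the original nonBlockingSend returned only the second request, so the
// first was never completed and its resources accumulated every iteration.
void sizedSend(int *t, int size, const int tag, const int destinationRank)
{
    MPI_Request reqs[2];
    MPI_Isend(&size, 1, MPI_INT, destinationRank, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(t, size, MPI_INT, destinationRank, tag, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  // complete (and free) both requests
}
```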

I pass the recvData pointer by reference to nonBlockingRecv because I want
to control deallocation elsewhere. I think this achieves the same thing
Dorian points out below.
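[The MPI_Probe approach mentioned above might look roughly like the
following. This is a sketch under the same int-buffer protocol, not code
from the thread, and probeRecv is a hypothetical name:]

```cpp
#include <mpi.h>
#include <cstdlib>

// Hypothetical sketch: probe for the incoming message, read its length
// from the status with MPI_Get_count, allocate, then receive. The
// separate size message (and its easy-to-leak extra request) disappears.
int *probeRecv(int &size, const int tag, const int senderRank)
{
    MPI_Status s;
    MPI_Probe(senderRank, tag, MPI_COMM_WORLD, &s);  // block until a message is pending
    MPI_Get_count(&s, MPI_INT, &size);               // element count of that message
    int *t = (int *) malloc(size * sizeof(int));
    MPI_Recv(t, size, MPI_INT, senderRank, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    return t;
}
```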

Thanks,

Mark.

-----Original Message-----
From: doriankrause_at_[hidden] [mailto:doriankrause_at_[hidden]]
Sent: 26 February 2009 17:40
To: mark_allan_1978_at_[hidden]; Open MPI Users
Subject: Re: [OMPI users] Memory leak in my code

Mark Allan wrote:
> Dear all,
>
> With this simple code I find I am getting a memory leak when I run on 2
> processors. Can anyone advise why?
>

I suspect the prototype of nonBlockingRecv is actually

MPI_Request nonBlockingRecv(int **t, int &size, const int tag, const int senderRank)

and in this case you need to use

(*t) = malloc(...)

inside the function.

Additionally you should pass the recvData pointer by reference, i.e.

MPI_Request r = nonBlockingRecv(&recvData,size,rank,0);

It's strange, though, that the free(recvData) did not fail...

Dorian
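
[The pointer-to-pointer variant Dorian describes might look like this --
a sketch for illustration, not code from the thread:]

```cpp
#include <mpi.h>
#include <cstdlib>

// Hypothetical sketch of the int** variant. The double pointer lets the
// function hand a freshly malloc'd buffer back to the caller; writing
// "t = malloc(...)" to a plain int* parameter would only change the
// function's local copy of the pointer.
MPI_Request nonBlockingRecv(int **t, int &size, const int tag,
                            const int senderRank)
{
    MPI_Status s1;
    MPI_Recv(&size, 1, MPI_INT, senderRank, 0, MPI_COMM_WORLD, &s1);
    *t = (int *) malloc(size * sizeof(int));  // assign through the pointer
    MPI_Request request;
    MPI_Irecv(*t, size, MPI_INT, senderRank, tag, MPI_COMM_WORLD, &request);
    return request;
}

// Call site: pass the address of the caller's pointer.
//   int *recvData; int size;
//   MPI_Request r = nonBlockingRecv(&recvData, size, rank, 0);
```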

> I'm fairly new to MPI (I have only done very simple things in the past).
> I'm trying to do a non-blocking send/recv (from any proc to any proc), but
> the receiving processor doesn't know how much data it is going to be sent,
> hence the blocking recv of the size in order to allocate the buffer. Is
> there a better way of doing this?
>
> Thanks,
>
> Mark
>
> #include <mpi.h>
>
> MPI_Request
> nonBlockingSend(int *t, int size, const int tag, const int destinationRank)
> {
>     MPI_Request request1;
>     MPI_Isend(&size,1,MPI_INT,destinationRank,0,MPI_COMM_WORLD,&request1);
>     MPI_Request request;
>     MPI_Isend(t,size,MPI_INT,destinationRank,tag,MPI_COMM_WORLD,&request);
>     return request;
> }
>
> MPI_Request
> nonBlockingRecv(int *&t, int &size, const int tag, const int senderRank)
> {
>     MPI_Status s1;
>     MPI_Recv(&size,1,MPI_INT,senderRank,0,MPI_COMM_WORLD,&s1);
>     t = (int *) malloc(size*sizeof(int));
>     MPI_Request request;
>     MPI_Irecv(t,size,MPI_INT,senderRank,tag,MPI_COMM_WORLD,&request);
>     return request;
> }
>
> void
> communicationComplete(MPI_Request &r)
> {
>     MPI_Status status;
>     MPI_Wait(&r,&status);
> }
>
> void
> barrier()
> {
>     MPI_Barrier(MPI_COMM_WORLD);
> }
>
> int main(int argc, char *argv[])
> {
>     MPI_Init(&argc,&argv);
>
>     int numProcs,rank;
>     MPI_Comm_size(MPI_COMM_WORLD,&numProcs);
>     MPI_Comm_rank(MPI_COMM_WORLD,&rank);
>
>     int numIts = 10000000;
>     int bufSize = 10;
>
>     // Set up send buffers
>     int *sendData = (int *) malloc(bufSize*sizeof(int));
>     for(int i=0;i<bufSize;i++)
>         sendData[i] = i;
>
>     // Perform sends and recvs
>     for(int i=0;i<numIts;i++)
>     {
>         if(rank==0)
>         {
>             for(int proc = 1; proc<numProcs;proc++)
>             {
>                 MPI_Request r = nonBlockingSend(sendData,bufSize,proc,proc);
>                 communicationComplete(r);
>             }
>         }
>         else
>         {
>             int *recvData;
>             int size;
>             MPI_Request r = nonBlockingRecv(recvData,size,rank,0);
>             communicationComplete(r);
>             free(recvData);
>         }
>         barrier();
>     }
>     free(sendData);
>
>     MPI_Finalize();
>
>     return 1;
> }
> ------------------------------------------------------------------------
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
