Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] memory leak in alltoallw
From: George Bosilca (bosilca_at_[hidden])
Date: 2008-08-17 18:10:14


Dave,

Thanks for your report. As you discovered, we had a memory leak in
MPI_Alltoallw. A very small one, but it was there. Basically, we
didn't release two internal arrays of data-types, used to convert the
Fortran data-types (as supplied by the user) to their C versions
(as required by the implementation of the alltoallw function).
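
To make that concrete: because the Fortran datatype handles are plain
integers, the Fortran binding has to build temporary C arrays of
MPI_Datatype (via MPI_Type_f2c) before calling the C-level alltoallw,
and those temporaries have to be released afterwards. Here is a minimal
sketch of that pattern, not the actual Open MPI source; the wrapper name
and argument handling are illustrative assumptions, and only MPI_Comm_f2c,
MPI_Type_f2c and MPI_Alltoallw are the real MPI API. The leak was
equivalent to dropping the free() calls for the two datatype arrays.

   #include <stdlib.h>
   #include <mpi.h>

   /* Hypothetical wrapper name; the real Open MPI binding differs. */
   void alltoallw_fortran_sketch(void *sendbuf, MPI_Fint *sendcounts,
                                 MPI_Fint *sdispls, MPI_Fint *sendtypes,
                                 void *recvbuf, MPI_Fint *recvcounts,
                                 MPI_Fint *rdispls, MPI_Fint *recvtypes,
                                 MPI_Fint *comm_f, MPI_Fint *ierr)
   {
       MPI_Comm comm = MPI_Comm_f2c(*comm_f);
       int size, i, rc;
       MPI_Comm_size(comm, &size);

       /* Temporary C-side arrays, one entry per rank. */
       MPI_Datatype *c_sendtypes = malloc(size * sizeof(MPI_Datatype));
       MPI_Datatype *c_recvtypes = malloc(size * sizeof(MPI_Datatype));
       int *c_scounts = malloc(size * sizeof(int));
       int *c_rcounts = malloc(size * sizeof(int));
       int *c_sdispls = malloc(size * sizeof(int));
       int *c_rdispls = malloc(size * sizeof(int));

       for (i = 0; i < size; i++) {
           /* Convert each Fortran handle/integer to its C counterpart. */
           c_sendtypes[i] = MPI_Type_f2c(sendtypes[i]);
           c_recvtypes[i] = MPI_Type_f2c(recvtypes[i]);
           c_scounts[i] = (int)sendcounts[i];
           c_rcounts[i] = (int)recvcounts[i];
           c_sdispls[i] = (int)sdispls[i];
           c_rdispls[i] = (int)rdispls[i];
       }

       rc = MPI_Alltoallw(sendbuf, c_scounts, c_sdispls, c_sendtypes,
                          recvbuf, c_rcounts, c_rdispls, c_recvtypes, comm);

       /* Releasing the temporary datatype arrays is the part that was
          missing; without it every call leaks a little memory, which is
          exactly the slow growth seen in the reproducer below. */
       free(c_sendtypes);
       free(c_recvtypes);
       free(c_scounts);
       free(c_rcounts);
       free(c_sdispls);
       free(c_rdispls);

       *ierr = (MPI_Fint)rc;
   }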

The good news is that this should not be a problem anymore. Commit
19314 fixes it on the trunk, while commit 19315 fixes it for the
upcoming 1.3.

   Thanks again for your report.
     george.

On Aug 7, 2008, at 1:21 AM, Dave Grote wrote:

>
> Hi,
> I've been enhancing my code and have started using the nice routine
> alltoallw. The code works fine except that there seems to be a
> memory leak in alltoallw. I've eliminated all other possible causes
> and have reduced the code down to a bare minimum. I've included
> Fortran source code which reproduces the problem. This code just
> keeps calling alltoallw, but with all of the send and receive counts
> set to zero, so it shouldn't be doing anything. And yet I can watch
> the memory continue to grow. As a sanity check, I changed the code
> to call alltoallv instead, and there is no memory leak. If it helps,
> I am using Open MPI on an AMD system running Chaos Linux. I tried
> the latest nightly build of version 1.3 from Aug 5. I run four
> processes on one quad-core node, so it should be using shared-memory
> communication.
> Thanks!
> Dave
>
> program testalltoallw
> real(kind=8):: phi(-3:3200+3)
> real(kind=8):: phi2(-3:3200+3)
> integer(4):: izproc,ii
> integer(4):: nzprocs
> integer(4):: zrecvtypes(0:3),zsendtypes(0:3)
> integer(4):: zsendcounts(0:3),zrecvcounts(0:3)
> integer(4):: zdispls(0:3)
> integer(4):: mpierror
> include "mpif.h"
> phi = 0.
>
> call MPI_INIT(mpierror)
> call MPI_COMM_SIZE(MPI_COMM_WORLD,nzprocs,mpierror)
> call MPI_COMM_RANK(MPI_COMM_WORLD,izproc,mpierror)
>
> zsendcounts=0
> zrecvcounts=0
> zdispls=0
> zsendtypes=MPI_DOUBLE_PRECISION
> zrecvtypes=MPI_DOUBLE_PRECISION
>
> do ii=1,1000000000
> if (mod(ii,1000000_4) == 0) print*,"loop ",ii,izproc
>
> call MPI_ALLTOALLW(phi,zsendcounts,zdispls,zsendtypes, &
>                    phi2,zrecvcounts,zdispls,zrecvtypes, &
>                    MPI_COMM_WORLD,mpierror)
>
> enddo
> call MPI_FINALIZE(mpierror)
> end
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users


