Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] memory leak in alltoallw
From: Dave Grote (dpgrote_at_[hidden])
Date: 2008-08-18 16:47:49

Great! Thanks for the fix.

Tim Mattox wrote:
The fix for this bug is in the 1.2 branch as of r19360, and will be in the
upcoming 1.2.7 release.

On Sun, Aug 17, 2008 at 6:10 PM, George Bosilca <> wrote:

Thanks for your report. As you discovered, we had a memory leak in
MPI_Alltoallw. A very small one, but it was there. Basically, we didn't
release two internal arrays of datatypes, used to convert from the Fortran
datatypes (as supplied by the user) to their C versions (as required by the
implementation of the alltoallw function).

The good news is that this should not be a problem anymore. Commit 19314
fixes this on the trunk, while commit 19315 fixes it for the upcoming 1.3.
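
(For illustration only: a minimal C sketch of the pattern described above,
not the actual Open MPI source. The wrapper name and layout are
hypothetical; the point is the two temporary MPI_Datatype arrays that must
be released after the C call.)

    /* Hypothetical sketch of a Fortran-binding wrapper for MPI_Alltoallw.
     * The caller's Fortran datatype handles (MPI_Fint) are converted into
     * two temporary C MPI_Datatype arrays; forgetting the two free() calls
     * at the end is exactly the kind of small per-call leak described
     * above. */
    #include <stdlib.h>
    #include <mpi.h>

    void alltoallw_f_sketch(void *sendbuf, int *sendcounts, int *sdispls,
                            MPI_Fint *sendtypes_f,
                            void *recvbuf, int *recvcounts, int *rdispls,
                            MPI_Fint *recvtypes_f, MPI_Comm comm)
    {
        int i, size;
        MPI_Comm_size(comm, &size);

        /* Internal arrays holding the converted C datatype handles. */
        MPI_Datatype *sendtypes = malloc(size * sizeof(MPI_Datatype));
        MPI_Datatype *recvtypes = malloc(size * sizeof(MPI_Datatype));
        for (i = 0; i < size; i++) {
            sendtypes[i] = MPI_Type_f2c(sendtypes_f[i]);
            recvtypes[i] = MPI_Type_f2c(recvtypes_f[i]);
        }

        MPI_Alltoallw(sendbuf, sendcounts, sdispls, sendtypes,
                      recvbuf, recvcounts, rdispls, recvtypes, comm);

        /* The fix: release the temporaries on every call. */
        free(sendtypes);
        free(recvtypes);
    }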

 Thanks again for your report.

On Aug 7, 2008, at 1:21 AM, Dave Grote wrote:

I've been enhancing my code and have started using the nice routine
alltoallw. The code works fine except that there seems to be a memory leak
in alltoallw. I've eliminated all other possible causes and have reduced the
code down to a bare minimum. I've included Fortran source code which
reproduces the problem. This code just keeps calling alltoallw, but with all
of the send and receive counts set to zero, so it shouldn't be doing
anything. And yet I can watch the memory continue to grow. As a sanity
check, I changed the code to call alltoallv instead, and there is no memory
leak. If it helps, I am using Open MPI on an AMD system running Chaos Linux.
I tried the latest nightly build of version 1.3 from Aug 5. I run four
processes on one quad-core node, so it should be using shared memory.

   program testalltoallw
   real(kind=8):: phi(-3:3200+3)
   real(kind=8):: phi2(-3:3200+3)
   integer(4):: izproc,ii
   integer(4):: nzprocs
   integer(4):: zrecvtypes(0:3),zsendtypes(0:3)
   integer(4):: zsendcounts(0:3),zrecvcounts(0:3)
   integer(4):: zdispls(0:3)
   integer(4):: mpierror
   include "mpif.h"
   phi = 0.

   call MPI_INIT(mpierror)
   call MPI_COMM_SIZE(MPI_COMM_WORLD,nzprocs,mpierror)
   call MPI_COMM_RANK(MPI_COMM_WORLD,izproc,mpierror)

!  Reconstructed initialization (the archive truncates the original
!  program here): all counts and displacements are zero, per the text
!  above, so no data should move. The datatypes must still be valid
!  handles even with zero counts.
   zsendcounts = 0
   zrecvcounts = 0
   zdispls = 0
   zsendtypes = MPI_DOUBLE_PRECISION
   zrecvtypes = MPI_DOUBLE_PRECISION

   do ii=1,1000000000
     if (mod(ii,1000000_4) == 0) print*,"loop ",ii,izproc

     call MPI_ALLTOALLW(phi,zsendcounts,zdispls,zsendtypes,
  &                     phi2,zrecvcounts,zdispls,zrecvtypes,
  &                     MPI_COMM_WORLD,mpierror)
   enddo

   call MPI_FINALIZE(mpierror)
   end
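
(For comparison, a hypothetical C analogue of the reproducer above. The C
binding takes MPI_Datatype arrays directly, so the internal Fortran-to-C
conversion arrays mentioned in the reply are never allocated; by that
explanation, this variant should not exhibit the leak.)

    /* Hypothetical C version of the zero-count alltoallw loop.
     * Intended to be run with 4 ranks, like the Fortran test. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int counts[4] = {0, 0, 0, 0};   /* all counts zero: no data moves */
        int displs[4] = {0, 0, 0, 0};
        MPI_Datatype types[4] = {MPI_DOUBLE, MPI_DOUBLE,
                                 MPI_DOUBLE, MPI_DOUBLE};
        double phi[8], phi2[8];
        int i, rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (i = 1; i <= 1000000000; i++) {
            if (i % 1000000 == 0) printf("loop %d %d\n", i, rank);
            MPI_Alltoallw(phi, counts, displs, types,
                          phi2, counts, displs, types, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }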

