
Open MPI User's Mailing List Archives


Subject: [OMPI users] memory leak in alltoallw
From: Dave Grote (dpgrote_at_[hidden])
Date: 2008-08-06 19:21:32


Hi,
  I've been enhancing my code and have started using the nice routine
alltoallw. The code works fine, except that there seems to be a memory
leak in alltoallw. I've eliminated all other possible causes and reduced
the code down to a bare minimum. I've included Fortran source code that
reproduces the problem. The code just keeps calling alltoallw, but with
all of the send and receive counts set to zero, so it shouldn't be doing
anything, and yet I can watch the memory usage continue to grow. As a
sanity check, I changed the code to call alltoallv instead, and there is
no memory leak. If it helps, I am using Open MPI on an AMD system running
Chaos Linux; I tried the latest nightly build of version 1.3 from Aug 5.
I run four processes on one quad-core node, so it should be using
shared-memory communication.
   Thanks!
      Dave

      program testalltoallw
      real(kind=8):: phi(-3:3200+3)
      real(kind=8):: phi2(-3:3200+3)
      integer(4):: izproc,ii
      integer(4):: nzprocs
      integer(4):: zrecvtypes(0:3),zsendtypes(0:3)
      integer(4):: zsendcounts(0:3),zrecvcounts(0:3)
      integer(4):: zdispls(0:3)
      integer(4):: mpierror
      include "mpif.h"
      phi = 0.

      call MPI_INIT(mpierror)
      call MPI_COMM_SIZE(MPI_COMM_WORLD,nzprocs,mpierror)
      call MPI_COMM_RANK(MPI_COMM_WORLD,izproc,mpierror)

c     All counts are zero, so each call should transfer no data at all.
      zsendcounts=0
      zrecvcounts=0
      zdispls=0
      zsendtypes=MPI_DOUBLE_PRECISION
      zrecvtypes=MPI_DOUBLE_PRECISION

c     Call alltoallw repeatedly; memory usage grows with each call.
      do ii=1,1000000000
        if (mod(ii,1000000_4) == 0) print*,"loop ",ii,izproc

        call MPI_ALLTOALLW(phi,zsendcounts,zdispls,zsendtypes,
     & phi2,zrecvcounts,zdispls,zrecvtypes,
     & MPI_COMM_WORLD,mpierror)

      enddo
      call MPI_FINALIZE(mpierror)
      end
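
For reference, the alltoallv sanity check mentioned above amounts to replacing the
call in the loop with something like the following (a sketch: alltoallv takes a
single send datatype and a single receive datatype instead of per-rank type arrays,
so the zsendtypes/zrecvtypes arrays are not needed). With this substitution and the
same zeroed counts, no memory growth is observed:

```fortran
        call MPI_ALLTOALLV(phi,zsendcounts,zdispls,
     & MPI_DOUBLE_PRECISION,
     & phi2,zrecvcounts,zdispls,
     & MPI_DOUBLE_PRECISION,
     & MPI_COMM_WORLD,mpierror)
```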