Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] OMPI_LIST_GROW keeps allocating memory
From: Max Staufer (max.staufer_at_[hidden])
Date: 2013-09-09 03:59:01


I am still working on a small example that shows the problem.
Our problematic call is part of a fairly extensive framework, so it's
not easy to post that part, but see below.

As you can see, the subroutine is recursive: whether it calls itself
again depends on the outcome computed here.
The MPI_ALLREDUCE of dum(3) is the part that causes the ompi_free_list
to grow.

Is there an MCA parameter to limit the growth of the ompi_free_list?
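
Something along these lines is what I have in mind (a sketch only: the
parameter names below are taken from the sm BTL as an example, and I
have not verified that they govern the free list that is growing here;
./solver stands in for our application):

-----------
# discover which free-list parameters this build actually exposes
ompi_info --all | grep free_list

# hypothetical example: cap the sm BTL free list at 1024 elements
mpirun --mca btl_sm_free_list_max 1024 -np 4 ./solver
-----------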

Max

-----------
RECURSIVE SUBROUTINE setup(l,n,listrank)
!
!
     USE dagmgpar_mem
     IMPLICIT NONE
     INTEGER :: l,n
     INTEGER, OPTIONAL :: listrank(n+1:*)
     INTEGER :: nc,ierr,i,j,k,nz
     LOGICAL :: slcoarse
     INTEGER, POINTER, DIMENSION(:) :: jap
     REAL(kind(0.0d0)), POINTER, DIMENSION(:) :: ap
     LOGICAL, SAVE :: slowcoarse
     REAL(kind(0.0d0)) :: fw,eta,dum(3),dumsend(3)
#ifdef WITHOUTINPLACE
     REAL(kind(0.0d0)) :: dumbuffer(3)
#endif
     CHARACTER(len=13) :: prtint
     REAL (kind(0.0d0)) :: fff(1)
     !
     nn(l)=n
     ! local problem size at this level: number of rows and nonzeros
     nlc(1)=n
     IF (n > 0) THEN
        nlc(2)=dt(l)%ia(n+1)-dt(l)%ia(1)
     ELSE
        nlc(2)=0
     END IF
     ngl=nlc
     IF (l==2) slowcoarse=.FALSE.
     slcoarse = 2*nlcp(1) < 3*nlc(1) .AND. 2*nlcp(2) < 3*nlc(2)
     ! decide whether this is the coarsest level; dumsend(3) carries
     ! the decision (negative = stop coarsening) into the reduction below
     IF( l == nstep+1 .OR. l == maxlev &
          .OR. ( ngl(1) <= maxcoarset) &
          .OR. ( nglp(1) < 2*ngl(1) .AND. nglp(2) < 2*ngl(2) &
                             .AND. ngl(1) <= maxcoarseslowt ) &
          .OR. ( slowcoarse .AND. slcoarse ) &
          .OR. nglp(1) == ngl(1) ) THEN
          nlev=l
          dumsend(3)=-1.0d0
     ELSE
          dumsend(3)=dble(NPROC)
     END IF
     dumsend(1:2)=dble(nlc)
#ifdef WITHOUTINPLACE
     ! copy the send data so that send and receive buffers differ
     dumbuffer = dumsend
     CALL MPI_ALLREDUCE(dumbuffer,dum,3,MPI_DOUBLE_PRECISION, &
          MPI_SUM,ICOMM,ierr)
#else
     ! the reduction after which ompi_free_list keeps growing
     CALL MPI_ALLREDUCE(dumsend,dum,3,MPI_DOUBLE_PRECISION, &
          MPI_SUM,ICOMM,ierr)
#endif
     ngl=dum(1:2)
     ! a non-positive dum(3) means at least one rank decided to stop
     IF (dum(3) .LE. 0.0d0) nlev=l
     slowcoarse=slcoarse

...
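
In the meantime, here is roughly what the small reproducer will look
like. This is a sketch only: it is untested, the names (repro,
reduce_level, LMAX) are made up, and I have not yet confirmed that this
standalone version reproduces the growth.

-----------
! Hypothetical minimal reproducer: one MPI_ALLREDUCE per recursion
! level, mimicking the calling pattern of setup() above.
PROGRAM repro
     IMPLICIT NONE
     INCLUDE 'mpif.h'
     INTEGER, PARAMETER :: LMAX = 10000
     INTEGER :: ierr
     CALL MPI_INIT(ierr)
     CALL reduce_level(1)
     CALL MPI_FINALIZE(ierr)
CONTAINS
     RECURSIVE SUBROUTINE reduce_level(l)
          INTEGER, INTENT(IN) :: l
          INTEGER :: ierr
          REAL(kind(0.0d0)) :: dumsend(3), dum(3)
          dumsend = dble(l)
          ! the call whose repetition is suspected to drive the
          ! free list growth
          CALL MPI_ALLREDUCE(dumsend, dum, 3, MPI_DOUBLE_PRECISION, &
               MPI_SUM, MPI_COMM_WORLD, ierr)
          ! recurse, as setup() does, until the stopping level
          IF (l < LMAX .AND. dum(3) > 0.0d0) CALL reduce_level(l+1)
     END SUBROUTINE reduce_level
END PROGRAM repro
-----------
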
> Yes, the number of elements each free list is allowed to allocate can be bounded. However, we need to know which free list we should act upon.
>
> What exactly do you mean by "MPI_ALLREDUCE is called in a recursive way"? You mean inside a loop, right?
>
> George.
>
>
> On Sep 8, 2013, at 21:36, Max Staufer <max.staufer_at_[hidden]> wrote:
>
>> I will post a small example for testing.
>>
>> It is interesting to note, though, that this happens only
>>
>> when MPI_ALLREDUCE is called in a recursive kind of way.
>>
>> Is there a possibility to limit the OMPI_free_list growth via an --mca parameter?
>>
>>
>>
>>
>>
>>
>>
>> Date: Sun, 08 Sep 2013 14:51:44 +0200
>> From: Max Staufer <max.staufer_at_[hidden]>
>> To: users_at_[hidden]
>> Subject: [OMPI users] OMPI_LIST_GROW keeps allocating memory
>> Message-ID: <522C72E0.9000301_at_[hidden]>
>> Content-Type: text/plain; charset=ISO-8859-15
>>
>> Hi All,
>>
>> Using Open MPI 1.4.5 (or 1.6.5, for that matter), I came across an
>> interesting thing:
>>
>> when an MPI function is called from a recursively called subroutine
>> (Fortran interface),
>> the MPI_ALLREDUCE function allocates memory in OMPI_LIST_GROW.
>>
>> It does this indefinitely; in our case Open MPI allocated 100 GB.
>>
>> Is there a method to limit this behaviour?
>>
>> thanks
>>
>> Max
>>