
Open MPI User's Mailing List Archives


Subject: [OMPI users] programming question
From: amjad ali (amjad11_at_[hidden])
Date: 2009-08-13 15:21:25


Hi, all,

I am parallelizing a 2D CFD code in Fortran + Open MPI. Suppose the grid
(all triangles) is partitioned among 8 processes using METIS. Each process
has a different number of neighboring processes. Suppose each process has n
elements/faces whose data it needs to send to the corresponding neighboring
processes, and m elements/faces for which it needs to receive data from the
corresponding neighboring processes. The values of n and m differ from
process to process. Another aim is to hide the communication behind
computation. For this I do the following on each process:

DO j = 1, n
   CALL MPI_ISEND(send_data, num, type, dest(j), tag, MPI_COMM_WORLD, &
                  ireq(j), ierr)
ENDDO

DO k = 1, m
   CALL MPI_RECV(recv_data, num, type, source(k), tag, MPI_COMM_WORLD, &
                 status, ierr)
ENDDO

This solves my problem, but it leaks memory: RAM fills up after a few
thousand iterations. What is the remedy? How should I tackle this?
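I suspect (though I am not sure) that the leak comes from the MPI_ISEND
requests: they are never completed with MPI_WAIT/MPI_WAITALL or released
with MPI_REQUEST_FREE, so the request objects pile up every iteration.
A minimal sketch of the change I have in mind, using the same names as
above:

DO j = 1, n
   CALL MPI_ISEND(send_data, num, type, dest(j), tag, MPI_COMM_WORLD, &
                  ireq(j), ierr)
ENDDO

DO k = 1, m
   CALL MPI_RECV(recv_data, num, type, source(k), tag, MPI_COMM_WORLD, &
                 status, ierr)
ENDDO

! Added: complete all n send requests so MPI can free them each iteration.
! MPI_STATUSES_IGNORE is used since the send statuses are not needed.
CALL MPI_WAITALL(n, ireq, MPI_STATUSES_IGNORE, ierr)

Is that the correct way to stop the memory growth?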

In another CFD code I removed this memory-filling problem with the
following (in that code n = m):

DO j = 1, n
   CALL MPI_ISEND(send_data, num, type, dest(j), tag, MPI_COMM_WORLD, &
                  ireq(j), ierr)
ENDDO

CALL MPI_WAITALL(n, ireq, MPI_STATUSES_IGNORE, ierr)

DO k = 1, n
   CALL MPI_RECV(recv_data, num, type, source(k), tag, MPI_COMM_WORLD, &
                 status, ierr)
ENDDO

But this is not working in the current code, and the previous code was not
giving correct results with a large number of processes.
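Waiting on the sends before posting any receive also seems to defeat the
goal of overlapping communication with computation. What I believe the
standard overlap pattern looks like (a sketch based on my reading; the
request arrays rreq/sreq and the per-neighbor buffer slices recv_data(1,k)
and send_data(1,j) are my own assumptions about the data layout):

! Post all nonblocking receives first, so matching sends find them.
DO k = 1, m
   CALL MPI_IRECV(recv_data(1,k), num, type, source(k), tag, &
                  MPI_COMM_WORLD, rreq(k), ierr)
ENDDO

! Then post all nonblocking sends.
DO j = 1, n
   CALL MPI_ISEND(send_data(1,j), num, type, dest(j), tag, &
                  MPI_COMM_WORLD, sreq(j), ierr)
ENDDO

! ... interior computation that needs no halo data goes here ...

! Finally complete everything; this also frees the request objects.
CALL MPI_WAITALL(m, rreq, MPI_STATUSES_IGNORE, ierr)
CALL MPI_WAITALL(n, sreq, MPI_STATUSES_IGNORE, ierr)

Would this both fix the memory growth and actually hide the communication
behind the computation?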

Please suggest a solution.

Thanks a lot for your kind attention.

With best regards,

Amjad Ali.