I need help with an MPI program that solves the 2D heat equation. I have rewritten the original code in my own way,

in order to understand the parallelization better.

I will initially make it work with 4 processes (nproc=4) on a 2D domain of 100 points, that is, 10 along the x axis (size_x=10)

and 10 along the y axis (size_y=10). The 2D domain is divided into four subdomains (x_domains=2 and y_domains=2).

The array of values ("x0" in the code) has dimension 8 * 8, so each process works on a 4 * 4 block.

The current process rank is stored in "me". I compute the coordinates of the subdomain interval for each worker:

xdeb(me) < x < xfin(me) and ydeb(me) < y < yfin(me) for process "me". I have checked this and it is correct.
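For reference, the arithmetic behind these bounds is essentially the following (this is only a sketch, not my exact code; I assume a Cartesian communicator comm2d and that coords(1)/coords(2) are the x/y indices of the subdomain — the actual orientation depends on how MPI_CART_CREATE was called):

```fortran
! Sketch: subdomain bounds from the process's Cartesian coordinates.
integer :: coords(2), nx_loc, ny_loc

call MPI_CART_COORDS(comm2d, me, 2, coords, infompi)

! Interior points per subdomain: (size_x-2)/x_domains = 8/2 = 4 here,
! since size_x=10 includes the two physical boundary points.
nx_loc = (size_x - 2) / x_domains
ny_loc = (size_y - 2) / y_domains

xdeb(me) = coords(1)*nx_loc + 1        ! first interior column owned by me
xfin(me) = xdeb(me) + nx_loc - 1       ! last interior column owned by me
ydeb(me) = coords(2)*ny_loc + 1
yfin(me) = ydeb(me) + ny_loc - 1
```

For me=0 this gives a 4 * 4 block, consistent with the values listed below (whether ydeb(0)=1 or 5 depends on the row ordering of the Cartesian grid).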

The initial parameters, such as x_domains, y_domains, size_x, size_y, the maximum number of steps, and the tolerance, are stored in the param file.

There are three main files: the main program explicitPar.f90 does the initialization and, in the main loop, calls the explicitUtil solving routine in explUtil.f90 and exchanges boundary values with the neighbors of the current process via the updateBound routine in updateBound.f90.

Everything seems OK except the updateBound routine: I have a problem with the row and column indices in the communication with the

North, South, West, and East neighbors.

For example, I have :

------------------------------------------------------------------------------------------------------------

! Send my boundary to North and receive from South
CALL MPI_SENDRECV(x(ydeb(me),xdeb(me)), 1, row_type, neighBor(N), flag, &
                  x(yfin(me),xdeb(me)), 1, row_type, neighBor(S), flag, &
                  comm2d, status, infompi)

------------------------------------------------------------------------------------------------------------

For 4 processes and me=0, I have:

xdeb(0)=1
xfin(0)=4
ydeb(0)=5
yfin(0)=8

So I send to the North neighbor my upper row, indexed by x(ydeb(me),xdeb(me)). But I should have ghost cells for this

communication, so that each worker can compute the next values of the "x0" array. Each worker actually needs the

boundary values coming from its neighbors (around its 4 * 4 block), so I think each worker has to work on a 6 * 6 array:

the 4 * 4 block plus one ghost cell on each edge.
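To make the ghost-cell idea concrete, here is a hedged sketch of what I think updateBound should do if each worker stores its 4 * 4 block inside a 6 * 6 local array x(0:5,0:5), with rows/columns 0 and 5 as the ghost layer (or the fixed physical boundary on the edges of the global domain). The names neighBor, row_type, column_type, flag, comm2d are the ones from my code; the local indexing, the column_type, and the N/S orientation are my assumptions:

```fortran
! Sketch: halo exchange on a local array with one ghost layer per side.
! Interior cells are x(1:4,1:4); x(0,:), x(5,:), x(:,0), x(:,5) are ghosts.
! On physical boundaries neighBor(...) can be MPI_PROC_NULL, so the same
! four calls work for every process without special cases.
double precision :: x(0:5,0:5)

! Send my first interior row to North, receive South's row into my
! lower ghost row (assuming row 1 faces North in this orientation).
call MPI_SENDRECV(x(1,1), 1, row_type, neighBor(N), flag, &
                  x(5,1), 1, row_type, neighBor(S), flag, &
                  comm2d, status, infompi)
! Send my last interior row to South, receive into my upper ghost row.
call MPI_SENDRECV(x(4,1), 1, row_type, neighBor(S), flag, &
                  x(0,1), 1, row_type, neighBor(N), flag, &
                  comm2d, status, infompi)
! Same idea East/West with a strided column type.
call MPI_SENDRECV(x(1,4), 1, column_type, neighBor(E), flag, &
                  x(1,0), 1, column_type, neighBor(W), flag, &
                  comm2d, status, infompi)
call MPI_SENDRECV(x(1,1), 1, column_type, neighBor(W), flag, &
                  x(1,5), 1, column_type, neighBor(E), flag, &
                  comm2d, status, infompi)
```

The key difference from my current call is that the send buffer is always an interior row/column and the receive buffer is always a ghost row/column, never the same index range.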

You can compile this code by adapting the makefile: "$ make explicitPar",

and execute it, in my case, with: "$ mpirun -n 4 explicitPar".

Could anyone see what's wrong with my code? Should I use a 12 * 12 size for the global array? That would give each worker a 6 * 6 array: its 4 * 4 block plus one extra cell on each side for the ghost cells (or for the constant boundary condition on the physical edges), i.e. a total of 8+2+2 = 12 for size_x and size_y.

Note that the values on the edges of the domain remain equal to 10, as expected.

Any help would be really appreciated.

Thanks in advance.