Hi Derek
Typically in the domain decomposition codes we have here
(atmosphere, oceans, climate)
there is an overlap across the boundaries of subdomains.
Unless your computation is so "embarrassingly parallel" that
each process can operate from start to end totally independently
of the others, you should expect such an overlap.
(You didn't say what computation you want to do, though.)
The width of the overlap depends on the computation being done.
For instance, in a two-point stencil finite difference PDE solver
the overlap may have width one, but broader FD stencils
need wider overlaps.
The redundant calculation of overlap points on neighboring subdomains
in general cannot be avoided.
Neither can the exchange of overlap data across
neighboring subdomain processes.
However, **full overlap slices** are exchanged after each computational
step (in our case here, a time step).
It is not a point-by-point exchange as you suggested.
Overlap exchange does limit the usefulness/efficiency
of using too many subdomains (e.g. if your overlap-to-useful-data
ratio gets close to 100%).
However, it is not as detrimental as you imagined based on your
point-by-point exchange conjecture.
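As a sketch of what that per-time-step face exchange could look like
(illustrative names only; this assumes a contiguous local array decomposed
in 1-D slabs along x, with one extra halo plane on each side, so each
x-plane of ny*nz doubles is a single contiguous buffer):

```c
#include <mpi.h>

/* Exchange one-plane-wide halo faces with the left and right
   neighbors.  u holds nx+2 local x-planes: u[0] and u[nx+1]
   are the halo planes; u[1]..u[nx] are interior.
   left/right may be MPI_PROC_NULL at the domain edges,
   which turns that half of the exchange into a no-op. */
void exchange_halos(double *u, int nx, int ny, int nz,
                    int left, int right, MPI_Comm comm)
{
    int plane = ny * nz;  /* doubles per x-plane (contiguous) */

    /* send rightmost interior plane right, receive left halo */
    MPI_Sendrecv(u + nx * plane, plane, MPI_DOUBLE, right, 0,
                 u,              plane, MPI_DOUBLE, left,  0,
                 comm, MPI_STATUS_IGNORE);

    /* send leftmost interior plane left, receive right halo */
    MPI_Sendrecv(u + plane,            plane, MPI_DOUBLE, left,  1,
                 u + (nx + 1) * plane, plane, MPI_DOUBLE, right, 1,
                 comm, MPI_STATUS_IGNORE);
}
```

Note that each call moves a whole face (ny*nz values) at once,
which is exactly the "full slice, not point-by-point" exchange
described above.  For a 2-D decomposition of the XY plane, the
y-direction faces are strided rather than contiguous, and you would
describe them with an MPI derived datatype (e.g. MPI_Type_vector)
instead of a raw count.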
If your domain is 100x100x100 and you split it into subdomain slices
across 5 processes, with a 1-point overlap (on each side)
you will have roughly a 2x5/100 = 10% waste due to overlap calculations
(plus the MPI communication cost/time),
but your problem is still solved in (almost) 1/5 of the time
it would take in serial mode.
Since your array seems to fit nicely in Cartesian coordinates,
you could use the MPI functions that create and explore
the Cartesian domain topology.
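A minimal sketch of those topology calls (for a 2-D process grid over
the XY plane; the dimension count and variable names are illustrative):

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs;
    int dims[2]    = {0, 0};  /* 0 = let MPI choose */
    int periods[2] = {0, 0};  /* non-periodic boundaries */

    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 2, dims);   /* balanced 2-D factorization */

    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods,
                    1 /* allow rank reorder */, &cart);

    /* Neighbor ranks come for free -- no hand-rolled arithmetic.
       Off a non-periodic edge they are MPI_PROC_NULL, so using
       them in MPI_Sendrecv is a harmless no-op. */
    int left, right, down, up;
    MPI_Cart_shift(cart, 0, 1, &left, &right);  /* neighbors along x */
    MPI_Cart_shift(cart, 1, 1, &down, &up);     /* neighbors along y */

    MPI_Finalize();
    return 0;
}
```

The payoff is that MPI_Cart_shift replaces the neighbor-rank
bookkeeping you said you are doing by hand, and handles the
domain edges for you.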
For details, see Chapter 6, Section 6.5 of "MPI: The Complete Reference,
Volume 1, The MPI Core, 2nd ed.,
by M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra,
MIT Press, 1998."
Also, this tutorial from Indiana University solves the 2D diffusion
equation (first serial, then parallel with MPI) and may help.
Unfortunately they don't use the MPI Cartesian functions, though:
http://rc.uits.iu.edu/hpa/mpi_tutorial/s2_diffusion_math_limited.html
I believe there are other examples on the web;
check the LLNL site:
https://computing.llnl.gov/tutorials/mpi/
The book
"Parallel Programming with MPI, by Peter Pacheco,
Morgan Kaufmann, 1997" has worked-out examples also.
An abridged version is available here:
http://www.cs.usfca.edu/~peter/ppmpi/
I hope this helps,
Gus Correa

Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA

Cole, Derek E wrote:
> Hi all. I am relatively new to MPI, and so this may be covered somewhere
> else, but I can’t seem to find any links to tutorials mentioning any
> specifics, so perhaps someone here can help.
>
>
>
> In C, I have a 3D array that I have dynamically allocated and access
> like Array[x][y][z]. I was hoping to calculate a subsection for each
> processor to work on, of size nx in the x dimension, ny in the y
> dimension, and the full Z dimension. Starting at Array[sx][sy][0] and
> going to Array[ex][ey][z] where ey-sy=ny.
>
>
>
> What is the best way to do this? I am able to calculate the neighboring
> processors and assign a subsection of the XY dimensions to each
> processor, however I am having problems with sharing the border
> information of the arrays with the other processors. I don’t really want
> to have to do a MPI_Send for each of the 0..Z slices’s border
> information. I’d kind of like to process all of the Z, then share the
> full “face” of the border information with the neighbor processor. For
> example, if process 1 was the right neighbor of process zero, I’d want
> process zero to send Subarray[0..nx][ny][0..Z] (the rightmost face) to
> processor 1’s leftmost face... assuming the XY plane was your screen,
> and the Z dimension extended into the screen.
>
>
>
> If anyone has any information that talks about how to use the MPI data
> types, or some other method, or wants to talk about how this might be
> done, I’m all ears.
>
>
>
> I know it is hard to talk about without pictures, so if you all like, I
> can post a picture explaining what I want to do. Thanks!
>
>
>
> Derek
>
>
>
>
> 
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.openmpi.org/mailman/listinfo.cgi/users
