Assume your data is discontiguous in memory and making it contiguous is not practical (e.g. there is no way to make the cells of a row and the cells of a column both contiguous). You have three options:
1) Use many small, contiguous messages.
2) Allocate scratch space and pack/unpack.
3) Use a derived datatype.
If you decide to use option 2, then the time your program spends in allocate/pack/send/free on one side, and in allocate/recv/unpack/free on the other, needs to be counted in the cost. Just comparing a contiguous vs. a discontiguous message time does not help make a good decision.
Whether 2 or 3 is faster depends a lot on how the MPI implementation does its datatype processing. If the implementation can move data directly from discontiguous memory into the send-side adapter, and from the receive-side adapter into discontiguous memory, derived datatypes may be faster and will conserve memory. If the implementation just mallocs a scratch buffer and uses the datatype to guide an internal pack/unpack subroutine, there is a pretty good chance your hand-crafted pack or unpack, along with contiguous messaging, will be more efficient.
I mention option 1 for completeness, and because if a very good put/get were available, it might even be the best choice. It is probably not the best choice in any current MPI implementation, but there may be exceptions.
Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
From: Terry Frankcombe <email@example.com>
To: Open MPI Users <firstname.lastname@example.org>
Date: 05/06/2010 12:25 AM
Subject: Re: [OMPI users] Fortran derived types