Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Fortran derived types
From: Paul Kapinos (kapinos_at_[hidden])
Date: 2010-05-06 06:07:34


> In general, even in your serial Fortran code, you're already
> taking a performance hit using a derived type.

That is not generally true. The right statement is: "it depends".

Yes, derived data types, object orientation and so on can sometimes
lead to a performance hit; but current compilers can usually optimise
such overhead away.

E.g. consider (especially p.19).
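
As a minimal sketch of the kind of code I mean (type and variable
names are invented for illustration): a stride-1 loop over a derived
type component is just as easy to vectorise as the plain-array loop,
since the component is contiguous in memory:

    program dt_perf
    implicit none
    type :: field
       real :: u(1000)
    end type field
    type(field) :: a
    real :: b(1000)
    integer :: i

    a%u = 1.0
    b   = 1.0

    ! component access is contiguous and stride-1, so a vectorising
    ! compiler treats this loop like the plain-array loop below
    do i = 1, 1000
       a%u(i) = 2.0 * a%u(i)
    end do

    do i = 1, 1000
       b(i) = 2.0 * b(i)
    end do

    print *, sum(a%u), sum(b)  ! keep the loops from being optimised away
    end program dt_perf

Comparing the generated assembly (e.g. with -O2 -S) typically shows
the same vectorised loop body for both versions.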

So, I would not recommend disturbing a working program just to take it
back to the good old F77 style. And let us not start a flame war about
"assembler is faster but OO is easier"! :-)

Best wishes

> -----Original Message-----
> From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On Behalf Of Prentice Bisbal
> Sent: Wednesday, May 05, 2010 11:51 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Fortran derived types
> Vedran Coralic wrote:
>> Hello,
>> In my Fortran 90 code I use several custom defined derived types.
>> Amongst them is a vector of arrays, i.e. v(:)%f(:,:,:). I am wondering
>> what the proper way of sending this data structure from one processor
>> to another is. Is the best way to just restructure the data by copying
>> it into a vector and sending that or is there a simpler way possible
>> by defining an MPI derived type that can handle it?
>> I looked into the latter myself but so far, I have only found the
>> solution for a scalar Fortran derived type and the methodology that
>> was suggested in that case did not seem naturally extensible to the vector case.
> It depends on how your data is distributed in memory. If the arrays
> are evenly spaced, as they would be in a multidimensional array, the
> derived datatypes will work fine. If you can't guarantee the spacing
> between the arrays that make up the vector, then using
> MPI_Pack/MPI_Unpack (the Fortran bindings use the same names) is the
> best way to go.
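
For the evenly-spaced case, a minimal sketch of what that could look
like (my assumptions, not from the thread: f is a fixed-size,
non-allocatable component, so successive v(i)%f lie at a constant byte
stride):

    program hvector_demo
    use mpi
    implicit none
    type :: cell
       real :: f(4,4,4)    ! fixed size => constant stride across v
    end type cell
    type(cell) :: v(100)
    integer :: newtype, ierr
    integer(kind=MPI_ADDRESS_KIND) :: a1, a2

    call MPI_Init(ierr)
    ! measure the actual stride instead of guessing about padding
    call MPI_Get_address(v(1)%f, a1, ierr)
    call MPI_Get_address(v(2)%f, a2, ierr)
    ! 100 blocks of 64 reals, spaced (a2 - a1) bytes apart
    call MPI_Type_create_hvector(100, 64, a2 - a1, MPI_REAL, newtype, ierr)
    call MPI_Type_commit(newtype, ierr)
    ! the whole v(:)%f then goes in a single call, e.g.
    !   call MPI_Send(v(1)%f, 1, newtype, dest, tag, MPI_COMM_WORLD, ierr)
    call MPI_Type_free(newtype, ierr)
    call MPI_Finalize(ierr)
    end program hvector_demo

With allocatable f components this stride assumption does not hold,
which is exactly the case where Pack/Unpack (below) comes in.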
> I'm not an expert MPI programmer, but earlier this year I wrote a
> small program that built a dynamically allocated array of dynamically
> allocated arrays. After doing some research into this same problem,
> packing/unpacking looked like the only way to go.
> Using Pack/Unpack is easy, but there is a performance hit, since the
> data must be copied into the packed buffer before sending and copied
> out of the buffer after the receive.
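
To make those copies explicit, a pack/unpack sketch for the
allocatable case (shapes, counts and ranks are made up; error handling
omitted; run with at least 2 ranks):

    program pack_demo
    use mpi
    implicit none
    type :: cell
       real, allocatable :: f(:,:,:)
    end type cell
    type(cell) :: v(4)
    character, allocatable :: buf(:)
    integer :: ierr, rank, pos, bufsize, i

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    do i = 1, 4
       allocate(v(i)%f(2,2,2))
       if (rank == 0) v(i)%f = real(i)
    end do

    ! one buffer large enough for all four 2x2x2 blocks
    call MPI_Pack_size(4*8, MPI_REAL, MPI_COMM_WORLD, bufsize, ierr)
    allocate(buf(bufsize))

    if (rank == 0) then
       pos = 0
       do i = 1, 4          ! copy #1: each f into the buffer
          call MPI_Pack(v(i)%f, 8, MPI_REAL, buf, bufsize, pos, &
                        MPI_COMM_WORLD, ierr)
       end do
       call MPI_Send(buf, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD, ierr)
    else if (rank == 1) then
       call MPI_Recv(buf, bufsize, MPI_PACKED, 0, 0, MPI_COMM_WORLD, &
                     MPI_STATUS_IGNORE, ierr)
       pos = 0
       do i = 1, 4          ! copy #2: buffer back into each f
          call MPI_Unpack(buf, bufsize, pos, v(i)%f, 8, MPI_REAL, &
                          MPI_COMM_WORLD, ierr)
       end do
    end if
    call MPI_Finalize(ierr)
    end program pack_demo

The two marked loops are precisely the extra copies responsible for
the performance hit mentioned above.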
> --
> Prentice
> _______________________________________________
> users mailing list
> users_at_[hidden]

Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241/80-24915