Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Fortran derived types
From: Cole, Derek E (derek.e.cole_at_[hidden])
Date: 2010-05-05 13:05:51


In general, even in your serial Fortran code, you are already taking a performance hit by using a derived type. Is it really necessary? It might be easier to restructure your Fortran code into more memory-friendly data structures; then the MPI part will be easier, and the serial code will have the added benefit of running faster, too.
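To make that concrete, here is a minimal sketch of the kind of restructuring I mean, assuming every v(i)%f has the same shape. All of the type, variable, and size names below are placeholders, not anything from your actual code:

    ! Minimal, hypothetical sketch (all names assumed).  The "before"
    ! layout, shown only in comments, is a vector of derived types whose
    ! allocatable components are scattered in memory:
    !
    !    type :: field_t
    !       real(8), allocatable :: f(:,:,:)
    !    end type field_t
    !    type(field_t), allocatable :: v(:)
    !
    ! The "after" layout keeps everything in one contiguous 4-D array:
    program restructure_sketch
       use mpi
       implicit none
       integer, parameter :: nx = 4, ny = 4, nz = 4, nvec = 10
       real(8), allocatable :: f_all(:,:,:,:)
       integer :: ierr

       call MPI_Init(ierr)
       allocate(f_all(nx, ny, nz, nvec))   ! replaces v(1:nvec)%f(:,:,:)
       f_all = 0.0d0

       ! A contiguous block like this can be sent with one plain call,
       ! e.g. (dest and tag are placeholders):
       ! call MPI_Send(f_all, nx*ny*nz*nvec, MPI_DOUBLE_PRECISION, &
       !               dest, tag, MPI_COMM_WORLD, ierr)

       call MPI_Finalize(ierr)
    end program restructure_sketch

With the data in one contiguous array there is no need for a custom MPI datatype or for packing at all.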

Derek

-----Original Message-----
From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On Behalf Of Prentice Bisbal
Sent: Wednesday, May 05, 2010 11:51 AM
To: Open MPI Users
Subject: Re: [OMPI users] Fortran derived types

Vedran Coralic wrote:
> Hello,
>
> In my Fortran 90 code I use several custom defined derived types.
> Amongst them is a vector of arrays, i.e. v(:)%f(:,:,:). I am wondering
> what the proper way of sending this data structure from one processor
> to another is. Is the best way to just restructure the data by copying
> it into a vector and sending that or is there a simpler way possible
> by defining an MPI derived type that can handle it?
>
> I looked into the latter myself but so far, I have only found the
> solution for a scalar Fortran derived type and the methodology that
> was suggested in that case did not seem naturally extensible to the vector case.
>

It depends on how your data is laid out in memory. If the arrays are evenly spaced, as they would be in a multidimensional array, an MPI derived datatype will work fine. If you can't guarantee the spacing between the arrays that make up the vector, then MPI_Pack/MPI_Unpack (the Fortran routines have the same names) is the best way to go.
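For the evenly-spaced case, one common approach is a derived datatype whose extent is resized to the stride between consecutive vector elements. This is only a rough sketch under the assumption that f has a fixed shape inside the type, so the stride from v(i) to v(i+1) is constant; the type and variable names are made up for illustration:

    program ddt_sketch
       use mpi
       implicit none
       integer, parameter :: nx = 4, ny = 4, nz = 4, nvec = 10
       type :: cell_t
          real(8) :: f(nx, ny, nz)    ! fixed shape -> regular memory layout
          integer :: other_stuff      ! anything else in the type
       end type cell_t
       type(cell_t) :: v(nvec)
       integer :: blocktype, celltype, ierr
       integer(kind=MPI_ADDRESS_KIND) :: a1, a2, lb, extent

       call MPI_Init(ierr)

       ! One f(:,:,:) block is nx*ny*nz contiguous doubles.
       call MPI_Type_contiguous(nx*ny*nz, MPI_DOUBLE_PRECISION, blocktype, ierr)

       ! Resize it so its extent equals the true stride from v(1) to v(2);
       ! "nvec elements of celltype" then walks the whole vector.
       call MPI_Get_address(v(1), a1, ierr)
       call MPI_Get_address(v(2), a2, ierr)
       extent = a2 - a1
       lb = 0_MPI_ADDRESS_KIND
       call MPI_Type_create_resized(blocktype, lb, extent, celltype, ierr)
       call MPI_Type_commit(celltype, ierr)

       ! Sending all nvec blocks in one shot would then look like
       ! (dest and tag are placeholders):
       ! call MPI_Send(v(1)%f, nvec, celltype, dest, tag, MPI_COMM_WORLD, ierr)

       call MPI_Type_free(celltype, ierr)
       call MPI_Type_free(blocktype, ierr)
       call MPI_Finalize(ierr)
    end program ddt_sketch

If f is allocatable inside the type, the blocks are not at a fixed stride and this approach breaks down, which is exactly when pack/unpack becomes attractive.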

I'm not an expert MPI programmer, but I wrote a small program earlier this year that used a dynamically allocated array of dynamically allocated arrays. After doing some research into this same problem, it looked like packing/unpacking was the only way to go.

Using Pack/Unpack is easy, but there is a performance hit since the data needs to be copied into the packed buffer before sending, and then copied out of the buffer after the receive.
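For what it's worth, a rough sketch of the pack/unpack route for a v(:)%f(:,:,:)-style structure might look like the following. The type name, shapes, and rank assignments are assumptions on my part; MPI_Pack, MPI_Unpack, and MPI_Pack_size keep the same names in Fortran. It needs at least two ranks to run:

    program pack_sketch
       use mpi
       implicit none
       type :: field_t
          real(8), allocatable :: f(:,:,:)
       end type field_t
       integer, parameter :: nvec = 2, nx = 4, ny = 4, nz = 4
       type(field_t) :: v(nvec)
       character, allocatable :: buf(:)
       integer :: bufbytes, total, pos, i, ierr, rank

       call MPI_Init(ierr)
       call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

       do i = 1, nvec
          allocate(v(i)%f(nx, ny, nz))
          v(i)%f = 0.0d0
          if (rank == 0) v(i)%f = real(i, 8)
       end do

       ! Ask MPI how much buffer space the payload needs (an upper bound).
       total = 0
       do i = 1, nvec
          call MPI_Pack_size(size(v(i)%f), MPI_DOUBLE_PRECISION, &
                             MPI_COMM_WORLD, bufbytes, ierr)
          total = total + bufbytes
       end do
       allocate(buf(total))

       if (rank == 0) then
          ! Copy each v(i)%f into one packed buffer, then send the buffer.
          pos = 0
          do i = 1, nvec
             call MPI_Pack(v(i)%f, size(v(i)%f), MPI_DOUBLE_PRECISION, &
                           buf, total, pos, MPI_COMM_WORLD, ierr)
          end do
          call MPI_Send(buf, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD, ierr)
       else if (rank == 1) then
          ! Receive the packed buffer and copy the pieces back out.
          call MPI_Recv(buf, total, MPI_PACKED, 0, 0, MPI_COMM_WORLD, &
                        MPI_STATUS_IGNORE, ierr)
          pos = 0
          do i = 1, nvec
             call MPI_Unpack(buf, total, pos, v(i)%f, size(v(i)%f), &
                             MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
          end do
       end if

       call MPI_Finalize(ierr)
    end program pack_sketch

The extra copies into and out of buf are exactly the performance cost mentioned above.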

--
Prentice
_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users