Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Partitioning problem set data
From: Richard Treumann (treumann_at_[hidden])
Date: 2010-07-21 12:08:57


The MPI Standard (in my opinion) should have avoided the word "buffer". To
me, a "buffer" is something you set aside as scratch space between the
application data structures and the communication calls.

In MPI, the communication is done directly from/to the application's data
structures and there are no "buffers" needed. The point of MPI_Datatypes
is their ability to describe the layout of the data in the application
data structure so an MPI_Scatter(), for example, can operate directly.

This removes any need to allocate contiguous scratch buffers, pack before
sending, and unpack after receiving.
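
For example, a minimal sketch (the array size here, and its even
divisibility by the process count, are made-up assumptions):

    /* The "sendbuf"/"recvbuf" arguments to MPI_Scatter are just
     * pointers into the application's own arrays -- no separate
     * scratch space is allocated or copied through. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        double full[1024];   /* application data on the root      */
        double part[1024];   /* each rank's piece, also app data  */
        int chunk = 1024 / nprocs;   /* assumes an even split     */

        if (rank == 0)
            for (int i = 0; i < 1024; i++)
                full[i] = (double)i;

        /* Communication goes directly between "full" and "part". */
        MPI_Scatter(full, chunk, MPI_DOUBLE,
                    part, chunk, MPI_DOUBLE,
                    0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }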


Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363




From:
Alexandru Blidaru <alexsb92_at_[hidden]>
To:
Open MPI Users <users_at_[hidden]>
Date:
07/21/2010 11:19 AM
Subject:
Re: [OMPI users] Partitioning problem set data
Sent by:
users-bounces_at_[hidden]



Hey Bill,

I took a look at the documentation for MPI_Scatter(), but I noticed that
you need buffers to use it. My supervisor wasn't really happy with using
buffers, and for that reason the code I am writing uses only blocking
routines, which will make my life a bit harder since I have to avoid what
I believe is called a deadlock. I know it might not make sense given the
way MPI works, but is there any Scatter-like function that does not use
buffers?

NB: I haven't looked through that book yet, so I am not sure whether it
provides any non-buffer examples.

Alex

On Wed, Jul 21, 2010 at 10:48 AM, Bill Rankin <Bill.Rankin_at_[hidden]> wrote:
Depending on the datatype and its order in memory, the “Block,*” and
“*,Block” (which we used to call “slabs” in 3D) may be implemented by a
simple scatter/gather call in MPI. The “Block,Block” distribution is a
little more complex, but if you take advantage of MPI’s derived datatypes,
you may be able to reference an arbitrary 3D sub-space as a single data
entity and then use gather/scatter with that.
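
For example, something along these lines (a sketch only; the global and
local extents are made-up values, and a full distribution would also need
resized extents or MPI_Scatterv to address each piece):

    /* Describe one 3D sub-block as a single datatype with
     * MPI_Type_create_subarray. */
    #include <mpi.h>

    void make_block_type(MPI_Datatype *block)
    {
        int gsizes[3] = {64, 64, 64}; /* full 3D array dimensions    */
        int lsizes[3] = {16, 16, 16}; /* one (Block,Block,Block) piece */
        int starts[3] = { 0,  0,  0}; /* offset of this piece        */

        MPI_Type_create_subarray(3, gsizes, lsizes, starts,
                                 MPI_ORDER_C, MPI_DOUBLE, block);
        MPI_Type_commit(block);
    }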
 
I recommend that you look through some of the examples in “MPI – The
Complete Reference (Vol. 1)” by Snir et al. for the use of MPI_Gather()
and MPI_Scatter(), as well as the section on user-defined datatypes.
Section 5.2 of “Using MPI” by Gropp, Lusk and Skjellum has example code
for an N-Body problem which you may find useful.
 
Hope this helps.
 
-bill
 
 
From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On
Behalf Of Alexandru Blidaru
Sent: Tuesday, July 20, 2010 10:54 AM
To: Open MPI Users
Subject: Re: [OMPI users] Partitioning problem set data
 
If there is an existing implementation of the *,Block or Block,* methods
that splits the array and sends the individual pieces to the proper
nodes, could you point me to it, please?
On Tue, Jul 20, 2010 at 9:52 AM, Alexandru Blidaru <alexsb92_at_[hidden]>
wrote:
Hi,
 
I have a 3D array, which I need to split into n equal parts, so that each
part would run on a different node. I found the picture in the attachment
from this website (
https://computing.llnl.gov/tutorials/parallel_comp/#DesignPartitioning) on
the different ways to partition data. I am interested in the block
methods, as the cyclic methods wouldn't really work for me at all.
Obviously the *, BLOCK and the BLOCK, * methods would be really easy to
implement for 3D arrays, assuming the 2D picture is viewed as looking at
the array from the top. My question is whether there are better ways to
do it from a performance standpoint.
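
For instance, the way I picture the BLOCK, * case (a rough sketch; the
dimensions are made up and assumed divisible by the number of ranks): in
C row-major order a slab along the first dimension is contiguous, so a
plain MPI_Scatter should do it:

    #include <mpi.h>

    #define NX 32
    #define NY 32
    #define NZ 32

    /* Scatter contiguous slabs of a 3D array along the first axis. */
    void scatter_slabs(double full[NX][NY][NZ], int nprocs,
                       double *slab, MPI_Comm comm)
    {
        int count = (NX / nprocs) * NY * NZ; /* elements per slab */
        MPI_Scatter(full, count, MPI_DOUBLE,
                    slab, count, MPI_DOUBLE,
                    0, comm);
    }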
 
Thanks for your replies,
Alex
 

_______________________________________________
users mailing list
users_at_[hidden]
http://www.open-mpi.org/mailman/listinfo.cgi/users