Open MPI User's Mailing List Archives

From: Graham E Fagg (fagg_at_[hidden])
Date: 2006-03-06 16:44:25


Hi David,
  yep, they do (reduce the values to a single location), and in a tree
topology it would look something like the following:

proc           3          4          5          6
local values   30         40         50         60
partial sums   -          -          -          -

proc           1                     2
local values   10                    20
partial sums   10+30+40 (80)         20+50+60 (130)

proc           0
local values   0
partial sums   0+80+130 = 210

result at root (0): 210
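
A minimal sketch (not part of the original exchange) that reproduces the
numbers above: run on 7 ranks, each rank contributes rank*10, and
MPI_REDUCE with MPI_SUM delivers 210 to rank 0. The program and variable
names are just placeholders.

      program reduce_sketch
c     Sketch only: with 7 ranks the local values are 0,10,...,60 and
c     the root ends up with 0+10+...+60 = 210 in "total".
      include 'mpif.h'
      integer ierr, myid, localval, total
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      localval = myid * 10
      total = 0
c     every rank calls MPI_REDUCE; only the root receives the sum
      call MPI_REDUCE(localval, total, 1, MPI_INTEGER, MPI_SUM, 0,
     &                MPI_COMM_WORLD, ierr)
      if (myid .eq. 0) print *, 'sum at root = ', total
      call MPI_FINALIZE(ierr)
      end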

With MPI_IN_PLACE, the root's own contribution (the 0 above) is taken
from its receive buffer rather than a separate send buffer, since the
send buffer argument at the root is MPI_IN_PLACE itself.

The MPI_IN_PLACE option is more important for allreduce, where it
applies on every process rather than just the root, so it can save a
lot of local data movement.
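
For example, with MPI_ALLREDUCE every rank passes MPI_IN_PLACE and keeps
its contribution in the receive buffer, so no separate send buffer (and
no local copy into one) is needed on any process. A rough sketch, again
not from the original exchange and with placeholder names:

      program allreduce_inplace
c     Sketch only: "val" holds the local value before the call and the
c     global sum after it, on every rank, with no separate send buffer.
      include 'mpif.h'
      integer ierr, myid, val
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      val = myid * 10
      call MPI_ALLREDUCE(MPI_IN_PLACE, val, 1, MPI_INTEGER, MPI_SUM,
     &                   MPI_COMM_WORLD, ierr)
      print *, 'rank ', myid, ' has sum ', val
      call MPI_FINALIZE(ierr)
      end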

I suggest that you look on the web for an MPI primer or tutorial to gain
more understanding.

G.

On Mon, 6 Mar 2006, Xiaoning (David) Yang wrote:

> I'm not quite sure how collective computation calls work. For example, for
> an MPI_REDUCE with MPI_SUM, do all the processes collect values from all the
> processes, calculate the sum, and put the result in recvbuf on the root?
> Sounds strange.
>
> David
>
> ***** Correspondence *****
>
>
>
>> From: Jeff Squyres <jsquyres_at_[hidden]>
>> Reply-To: Open MPI Users <users_at_[hidden]>
>> Date: Mon, 6 Mar 2006 13:22:23 -0500
>> To: Open MPI Users <users_at_[hidden]>
>> Subject: Re: [OMPI users] MPI_IN_PLACE
>>
>> Generally, yes. There are some corner cases where we have to
>> allocate additional buffers, but that's the main/easiest benefit to
>> describe. :-)
>>
>>
>> On Mar 6, 2006, at 11:21 AM, Xiaoning (David) Yang wrote:
>>
>>> Jeff,
>>>
>>> Thank you for the reply. In other words, MPI_IN_PLACE only
>>> eliminates data
>>> movement on root, right?
>>>
>>> David
>>>
>>> ***** Correspondence *****
>>>
>>>
>>>
>>>> From: Jeff Squyres <jsquyres_at_[hidden]>
>>>> Reply-To: Open MPI Users <users_at_[hidden]>
>>>> Date: Fri, 3 Mar 2006 19:18:52 -0500
>>>> To: Open MPI Users <users_at_[hidden]>
>>>> Subject: Re: [OMPI users] MPI_IN_PLACE
>>>>
>>>> On Mar 3, 2006, at 6:42 PM, Xiaoning (David) Yang wrote:
>>>>
>>>>> call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0,
>>>>> & MPI_COMM_WORLD,ierr)
>>>>>
>>>>> Can I use MPI_IN_PLACE in the MPI_REDUCE call? If I can, how?
>>>>> Thanks for any help!
>>>>
>>>> MPI_IN_PLACE is an MPI-2 construct, and is defined in the MPI-2
>>>> standard. Its use in MPI_REDUCE is defined in section 7.3.3:
>>>>
>>>> http://www.mpi-forum.org/docs/mpi-20-html/node150.htm#Node150
>>>>
>>>> It says:
>>>>
>>>> "The ``in place'' option for intracommunicators is specified by
>>>> passing the value MPI_IN_PLACE to the argument sendbuf at the root.
>>>> In such case, the input data is taken at the root from the receive
>>>> buffer, where it will be replaced by the output data."
>>>>
>>>> In the simple pi example program, it doesn't make much sense to use
>>>> MPI_IN_PLACE except as an example to see how it is used (i.e., it
>>>> won't gain much in terms of efficiency because you're only dealing
>>>> with a single MPI_DOUBLE_PRECISION). But you would want to put an
>>>> "if" statement around the call to MPI_REDUCE and pass MPI_IN_PLACE as
>>>> the first argument, and mypi as the second argument for the root.
>>>> For all other processes, use the same MPI_REDUCE that you're using
>>>> now.
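
A rough sketch of the pattern described above, not part of the original
exchange: it assumes the rank is stored in myid, and the wrapper program
and the dummy value assigned to mypi are placeholders standing in for
the real pi code, while the MPI_REDUCE arguments match the quoted call.

      program pi_inplace_sketch
c     Sketch only: mypi stands in for each rank's partial sum (here a
c     dummy value); only the root passes MPI_IN_PLACE, so its own
c     contribution is taken from the receive buffer mypi, which then
c     holds the total.
      include 'mpif.h'
      double precision mypi, pi
      integer ierr, myid
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      mypi = 1.0d0
      pi = 0.0d0
      if (myid .eq. 0) then
         call MPI_REDUCE(MPI_IN_PLACE, mypi, 1, MPI_DOUBLE_PRECISION,
     &                   MPI_SUM, 0, MPI_COMM_WORLD, ierr)
      else
c        non-root ranks use the same call as before
         call MPI_REDUCE(mypi, pi, 1, MPI_DOUBLE_PRECISION,
     &                   MPI_SUM, 0, MPI_COMM_WORLD, ierr)
      end if
      if (myid .eq. 0) print *, 'total = ', mypi
      call MPI_FINALIZE(ierr)
      end

Note that the reduced value which previously arrived in pi now ends up
in mypi on the root, so any later code that reads pi on the root would
read mypi instead.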
>>>>
>>>> --
>>>> {+} Jeff Squyres
>>>> {+} The Open MPI Project
>>>> {+} http://www.open-mpi.org/
>>>>
>>>>
>>>> _______________________________________________
>>>> users mailing list
>>>> users_at_[hidden]
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>
>>>
>>> _______________________________________________
>>> users mailing list
>>> users_at_[hidden]
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>>
>> --
>> {+} Jeff Squyres
>> {+} The Open MPI Project
>> {+} http://www.open-mpi.org/
>>
>>
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>

Thanks,
         Graham.
----------------------------------------------------------------------
Dr Graham E. Fagg | Distributed, Parallel and Meta-Computing
Innovative Computing Lab. PVM3.4, HARNESS, FT-MPI, SNIPE & Open MPI
Computer Science Dept | Suite 203, 1122 Volunteer Blvd,
University of Tennessee | Knoxville, Tennessee, USA. TN 37996-3450
Email: fagg_at_[hidden] | Phone:+1(865)974-5790 | Fax:+1(865)974-8296
Broken complex systems are always derived from working simple systems
----------------------------------------------------------------------