Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] possible bug exercised by mpi4py
From: TERRY DONTJE (terry.dontje_at_[hidden])
Date: 2012-05-25 11:23:58


BTW, the changes prior to r26496 failed some of the MTT test runs on
several systems. So if the current implementation is deemed not
"correct", I suspect we will need to figure out whether the tests
themselves need to change.

See http://www.open-mpi.org/mtt/index.php?do_redir=2066 for some of the
failures that I think are due to the r26495 reduce_scatter changes.

--td

On 5/25/2012 12:27 AM, George Bosilca wrote:
> On May 24, 2012, at 23:48 , Dave Goodell wrote:
>
>> On May 24, 2012, at 10:34 PM CDT, George Bosilca wrote:
>>
>>> On May 24, 2012, at 23:18, Dave Goodell<goodell_at_[hidden]> wrote:
>>>
>>>> So I take back my prior "right". Upon further inspection of the text and the MPICH2 code, I believe the number of elements in the recvcounts array must be equal to the size of the LOCAL group.
>>> This is quite illogical, but it would not be the first time the standard is lacking. So, if I understand you correctly, in the case of an intercommunicator a process doesn't know how much data it has to reduce, at least not until it receives the array of recvcounts from the remote group. Weird!
>> No, it knows because of the restriction that $\sum_{i=1}^{n} \mathrm{recvcounts}[i]$, where $n$ is the local group size, yields the same sum in each group.
> I should have read the entire paragraph of the standard … including the rationale. Indeed, the rationale describes exactly what you mentioned.
>
> Apparently Figure 12 at the following [MPI Forum blessed] link is supposed to clarify any potential misunderstanding regarding reduce_scatter. Count how many elements are on each side of the intercommunicator ;)
>
> george.
>
>> The way it's implemented in MPICH2, and the way that makes this make a lot more sense to me, is that you first do intercommunicator reductions to temporary buffers on rank 0 in each group. Then rank 0 scatters within the local group. The way I had been thinking about it was to do a local reduction followed by an intercomm scatter, but that isn't what the standard is saying, AFAICS.
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
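
For what it's worth, below is a minimal sketch of the scheme Dave describes
above (intercommunicator reductions onto rank 0 of each group, followed by a
scatter within the local group). It is NOT the Open MPI or MPICH2 code, just
an illustration of the semantics being discussed. Note how the restriction
plays out: with, say, a 2-process group using recvcounts = {3, 3} and a
3-process group using recvcounts = {2, 2, 2}, both sums are 6, which is
exactly the number of elements every process contributes. The function name,
the assumption that the caller still has the local intracommunicator the
intercommunicator was built from, and the in_first_group flag are all
illustrative, not from the thread.

#include <mpi.h>
#include <stdlib.h>

/* Illustrative sketch only -- not the Open MPI (r26495/r26496) or MPICH2
 * code.  recvcounts has one entry per process of the LOCAL group, and the
 * sum of its entries must be the same in both groups.  "local_comm" is
 * assumed to be the local intracommunicator the intercommunicator was
 * created from; "in_first_group" is nonzero in exactly one of the two
 * groups so that both groups issue the two reductions in the same order. */
static int reduce_scatter_inter_sketch(double *sendbuf, double *recvbuf,
                                       int *recvcounts,
                                       MPI_Comm intercomm,
                                       MPI_Comm local_comm,
                                       int in_first_group)
{
    int lrank, lsize, i, total = 0;
    double *tmp = NULL;
    int *displs = NULL;

    MPI_Comm_rank(local_comm, &lrank);
    MPI_Comm_size(local_comm, &lsize);

    /* Every process sends sum(recvcounts) elements (same sum in both
     * groups, per the restriction discussed above). */
    for (i = 0; i < lsize; i++)
        total += recvcounts[i];

    if (lrank == 0) {
        tmp    = malloc(total * sizeof(double));
        displs = malloc(lsize * sizeof(int));
        for (displs[0] = 0, i = 1; i < lsize; i++)
            displs[i] = displs[i - 1] + recvcounts[i - 1];
    }

    /* Step 1: two intercommunicator reductions, one per direction.  In
     * each call one group contributes data (root = rank of the remote
     * root, here remote rank 0) while rank 0 of the other group receives
     * (root = MPI_ROOT there, MPI_PROC_NULL elsewhere in that group). */
    for (i = 0; i < 2; i++) {
        int we_send = (i == 0) ? in_first_group : !in_first_group;
        if (we_send)
            MPI_Reduce(sendbuf, NULL, total, MPI_DOUBLE, MPI_SUM,
                       0, intercomm);
        else
            MPI_Reduce(NULL, tmp, total, MPI_DOUBLE, MPI_SUM,
                       (lrank == 0) ? MPI_ROOT : MPI_PROC_NULL, intercomm);
    }

    /* Step 2: rank 0 now holds the reduction of the REMOTE group's data;
     * scatter it within the local group, recvcounts[i] elements to local
     * rank i. */
    MPI_Scatterv(tmp, recvcounts, displs, MPI_DOUBLE,
                 recvbuf, recvcounts[lrank], MPI_DOUBLE, 0, local_comm);

    if (lrank == 0) {
        free(tmp);
        free(displs);
    }
    return MPI_SUCCESS;
}

With the example counts above, each of the 2 processes in the first group
would pass a 6-element send buffer and get back recvcounts[lrank] = 3
elements, while each of the 3 processes in the other group would get 2.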

-- 
Terry D. Dontje | Principal Software Engineer
Developer Tools Engineering | +1.781.442.2631
Oracle - Performance Technologies
95 Network Drive, Burlington, MA 01803
Email terry.dontje_at_[hidden]