Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] possible bug exercised by mpi4py
From: George Bosilca (bosilca_at_[hidden])
Date: 2012-05-25 00:27:22


On May 24, 2012, at 23:48, Dave Goodell wrote:

> On May 24, 2012, at 10:34 PM CDT, George Bosilca wrote:
>
>> On May 24, 2012, at 23:18, Dave Goodell <goodell_at_[hidden]> wrote:
>>
>>> So I take back my prior "right". Upon further inspection of the standard's text and the MPICH2 code, I believe it to be true that the number of elements in the recvcounts array must be equal to the size of the LOCAL group.
>>
>> This is quite illogical, but it would not be the first time the standard is lacking in places. So, if I understand you correctly, in the case of an intercommunicator a process doesn't know how much data it has to reduce, at least not until it receives the array of recvcounts from the remote group. Weird!
>
> No, it knows, because of the restriction that $\sum_{i=0}^{n-1} recvcounts[i]$ yields the same sum in each group, where $n$ is the size of the local group.

I should have read the entire paragraph of the standard … including the rationale. Indeed, the rationale describes exactly what you mentioned.

Apparently Figure 12 at the following [MPI Forum blessed] link is supposed to clarify any potential misunderstanding regarding reduce_scatter. Count how many elements are on each side of the intercommunicator ;)
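
To make the counting concrete, here is a minimal C sketch of the rule as I read it; the 2-process / 3-process split and the particular counts below are only illustrative, not anything taken from the standard or from the original mpi4py reproducer:

/* A minimal sketch of the recvcounts sizing rule: run with 5 processes;
 * world ranks 0-1 form group A, ranks 2-4 form group B.  The split and
 * the particular counts are made up for illustration. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split MPI_COMM_WORLD into the two local groups. */
    int in_group_a = (world_rank < 2);
    MPI_Comm local_comm;
    MPI_Comm_split(MPI_COMM_WORLD, in_group_a ? 0 : 1, world_rank, &local_comm);

    /* Build the intercommunicator; the remote leader is world rank 2 as
     * seen from group A and world rank 0 as seen from group B. */
    MPI_Comm inter_comm;
    MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD,
                         in_group_a ? 2 : 0, 99, &inter_comm);

    int local_rank;
    MPI_Comm_rank(local_comm, &local_rank);

    /* recvcounts has one entry per LOCAL group member, and the entries of
     * both groups sum to the same total (6 here): {3,3} vs. {2,2,2}. */
    int recvcounts_a[2] = {3, 3};
    int recvcounts_b[3] = {2, 2, 2};
    int *recvcounts = in_group_a ? recvcounts_a : recvcounts_b;

    /* Every process therefore sends sum(recvcounts) = 6 elements ... */
    int sendbuf[6] = {1, 1, 1, 1, 1, 1};
    /* ... and receives recvcounts[local_rank] elements of the reduction
     * over the REMOTE group's contributions. */
    int recvbuf[3] = {0, 0, 0};

    MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, MPI_INT, MPI_SUM,
                       inter_comm);

    /* Group A ranks should print 3 (three remote contributors), group B
     * ranks should print 2. */
    printf("world rank %d got %d\n", world_rank, recvbuf[0]);

    MPI_Comm_free(&inter_comm);
    MPI_Comm_free(&local_comm);
    MPI_Finalize();
    return 0;
}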

  george.

> The way it's implemented in MPICH2, and the way that makes this a lot more sensible to me, is that you first do intercommunicator reductions to temporary buffers on rank 0 in each group. Then rank 0 scatters within the local group. The way I had been thinking about it was to do a local reduction followed by an intercomm scatter, but that isn't what the standard is saying, AFAICS.
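
For reference, a rough C sketch of that composition (reduce across the intercommunicator onto rank 0 of each group, then a local scatter); this is only an illustration of the idea, not the actual MPICH2 code. It assumes the caller still has the intracommunicator of its own group, knows which side of the intercommunicator it is on (the is_group_a flag), and it hard-codes MPI_INT for brevity:

#include <mpi.h>
#include <stdlib.h>

void reduce_scatter_inter_sketch(int *sendbuf, int *recvbuf,
                                 int *recvcounts, MPI_Op op,
                                 int is_group_a,
                                 MPI_Comm local_comm,
                                 MPI_Comm inter_comm)
{
    int local_rank, local_size;
    MPI_Comm_rank(local_comm, &local_rank);
    MPI_Comm_size(local_comm, &local_size);

    /* The total element count is the same in both groups by the standard's
     * restriction on recvcounts, so it can be used as the reduce count. */
    int *displs = malloc(local_size * sizeof(int));
    int total = 0;
    for (int i = 0; i < local_size; i++) {
        displs[i] = total;
        total += recvcounts[i];
    }

    int *tmp = (local_rank == 0) ? malloc(total * sizeof(int)) : NULL;

    /* Reduce #1: rank 0 of group A receives the reduction of group B's
     * send buffers.  Reduce #2 is the mirror image.  Both groups must
     * issue the two calls in the same order. */
    MPI_Reduce(sendbuf, tmp, total, MPI_INT, op,
               is_group_a ? (local_rank == 0 ? MPI_ROOT : MPI_PROC_NULL) : 0,
               inter_comm);
    MPI_Reduce(sendbuf, tmp, total, MPI_INT, op,
               is_group_a ? 0 : (local_rank == 0 ? MPI_ROOT : MPI_PROC_NULL),
               inter_comm);

    /* Rank 0 of each group now holds the reduced remote data; scatter it
     * within the local group according to the LOCAL recvcounts. */
    MPI_Scatterv(tmp, recvcounts, displs, MPI_INT,
                 recvbuf, recvcounts[local_rank], MPI_INT, 0, local_comm);

    free(displs);
    free(tmp);
}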