
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI_Alltoall with Vector Datatype
From: George Bosilca (bosilca_at_[hidden])
Date: 2014-05-08 12:00:49


I think the issue is with the way you define the send and receive
buffers in the MPI_Alltoall. You have to keep in mind that the
all-to-all pattern will overwrite the entire receive buffer. Thus,
passing a pointer at a relative displacement into the data (in this
case matrix[wrank*wrows]) as the receive buffer is asking for
trouble: you will end up writing outside the allocated buffer.
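
Something along these lines avoids the problem (a minimal sketch, not
your actual code: N, wrows, recvbuf, the row-block layout and the
initialization are guesses on my part). The key point is that the
receive argument is the start of a buffer large enough for the blocks
coming from every rank, not a pointer offset into the middle of the data:

  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      const int N = 8;              /* global matrix is N x N doubles */
      int wrank, wsize;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
      MPI_Comm_size(MPI_COMM_WORLD, &wsize);
      int wrows = N / wsize;        /* rows per rank; assumes N % wsize == 0 */

      /* this rank's wrows rows, stored contiguously */
      double *matrix  = malloc(wrows * N * sizeof(double));
      /* full-size receive area: room for one wrows x wrows block from
       * every rank, so the all-to-all can overwrite all of it safely */
      double *recvbuf = malloc(wrows * N * sizeof(double));
      for (int i = 0; i < wrows * N; i++)
          matrix[i] = wrank * 1000.0 + i;

      /* one wrows x wrows block out of rows of length N, resized so the
       * blocks for consecutive ranks start wrows doubles apart */
      MPI_Datatype vec, block;
      MPI_Type_vector(wrows, wrows, N, MPI_DOUBLE, &vec);
      MPI_Type_create_resized(vec, 0, wrows * sizeof(double), &block);
      MPI_Type_commit(&block);

      /* send from the start of the local rows and receive into the
       * start of recvbuf, not into matrix[wrank*wrows]; the per-block
       * local transpose still happens afterwards, as in your code */
      MPI_Alltoall(matrix, 1, block, recvbuf, 1, block, MPI_COMM_WORLD);

      MPI_Type_free(&vec);
      MPI_Type_free(&block);
      free(matrix);
      free(recvbuf);
      MPI_Finalize();
      return 0;
  }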

  George.

On Thu, May 8, 2014 at 10:08 AM, Matthieu Brucher
<matthieu.brucher_at_[hidden]> wrote:
> The Alltoall should only return when all data is sent and received on
> the current rank, so there shouldn't be any race condition.
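>
> A toy sketch of what I mean (not your actual code; the helper and
> buffer names are placeholders):
>
>   #include <mpi.h>
>
>   void transpose_local_blocks(double *buf);   /* hypothetical helper */
>
>   void exchange_then_transpose(double *sendbuf, double *recvbuf,
>                                int count, MPI_Comm comm)
>   {
>       /* blocking: when this returns, recvbuf is completely written,
>        * so the local transpose below cannot race with the exchange */
>       MPI_Alltoall(sendbuf, count, MPI_DOUBLE,
>                    recvbuf, count, MPI_DOUBLE, comm);
>       transpose_local_blocks(recvbuf);
>
>       /* only the non-blocking variant would need an explicit wait
>        * before it is safe to touch recvbuf:
>        *   MPI_Request req;
>        *   MPI_Ialltoall(sendbuf, count, MPI_DOUBLE,
>        *                 recvbuf, count, MPI_DOUBLE, comm, &req);
>        *   MPI_Wait(&req, MPI_STATUS_IGNORE);
>        */
>   }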
>
> Cheers,
>
> Matthieu
>
> 2014-05-08 15:53 GMT+02:00 Spenser Gilliland <spenser_at_[hidden]>:
>> George & other list members,
>>
>> I think I may have a race condition in this example that is masked by
>> the print_matrix statement.
>>
>> For example, let's say rank one sleeps for a long time before it
>> reaches the local transpose. Will the other ranks have already
>> completed the Alltoall, so that when rank one finally performs the
>> local transpose it is altering data that the other ranks sent it?
>>
>> Thanks,
>> Spenser
>>
>>
>> --
>> Spenser Gilliland
>> Computer Engineer
>> Doctoral Candidate
>
>
>
> --
> Information System Engineer, Ph.D.
> Blog: http://matt.eifelle.com
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
> Music band: http://liliejay.com/