
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI_Allreduce()
From: Brock Palen (brockp_at_[hidden])
Date: 2008-03-13 10:29:10

Yeah, I know what you mean: if you have to do an all-to-all, use
MPI_Alltoall(); don't roll your own.

So on paper, alltoall at first glance appears to be n*(n-1) -> n^2 - n
-> n^2 (for large n).

Allreduce appears to be simpler: n point-to-points followed by a
bcast(), which can be simplified to gather + bcast.

Last I knew, MPI_Bcast() was log(n) and gather is n. So for
allreduce I get:

n + log(n) -> n (for large n)

I guess I am confused how to get alltoall() down from n^2.


Brock Palen
Center for Advanced Computing

On Mar 12, 2008, at 6:05 PM, Aurélien Bouteiller wrote:

> If you can avoid them, it is better to avoid them. However, it is always
> better to use an MPI_Alltoall than to code your own all-to-all with
> point-to-point, and in some algorithms you *need* to do an all-to-all
> communication. What you should understand by "avoid all to all" is not
> avoid MPI_Alltoall, but choose a mathematical algorithm that does not
> need an all-to-all.
> The algorithmic complexity of AllReduce is the same as AlltoAll.
> Aurelien
> On Mar 12, 2008, at 17:01, Brock Palen wrote:
>> I have always been told that calls like MPI_Barrier(), MPI_Allreduce(),
>> and MPI_Alltoall() should be avoided.
>> I understand MPI_Alltoall(), as it does n*(n-1) sends and thus grows
>> very quickly. MPI_Barrier() is very latency sensitive and
>> generally is not needed in most cases where I have seen it used.
>> But why MPI_Allreduce()?
>> What other functions should generally be avoided?
>> Sorry this is kinda off topic for the list :-)
>> Brock Palen
>> Center for Advanced Computing
>> brockp_at_[hidden]
>> (734)936-1985
>> _______________________________________________
>> users mailing list
>> users_at_[hidden]