Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] MPI_Allreduce()
From: Aurélien Bouteiller (bouteill_at_[hidden])
Date: 2008-03-12 18:05:14


If you can avoid them, it is better to avoid them. However, it is always
better to use MPI_Alltoall than to code your own all-to-all with
point-to-point messages, and some algorithms simply *need* an all-to-all
communication. What you should understand by "avoid all to all" is not
"avoid MPI_Alltoall", but rather "choose a mathematical algorithm that
does not need an all-to-all exchange".
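
For instance, here is a minimal sketch of what that looks like: a single
MPI_Alltoall call in place of a hand-rolled loop of sends and receives
(the buffer layout and names here are only illustrative):

    /* Each rank sends one int to every other rank.  One collective call
     * replaces a loop of MPI_Send/MPI_Recv pairs and lets the library
     * pick a tuned algorithm. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* One element destined for each rank. */
        int *sendbuf = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            sendbuf[i] = rank * 100 + i;

        MPI_Alltoall(sendbuf, 1, MPI_INT,
                     recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }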

The algorithmic complexity of Allreduce is the same as that of Alltoall.
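
For reference, a minimal sketch of an Allreduce: a global sum delivered
to every rank in one collective call, instead of a hand-coded reduce
followed by a broadcast (the values here are only illustrative):

    /* Each rank contributes one double; every rank receives the sum. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = (double)rank;   /* illustrative per-rank value */
        double sum = 0.0;

        MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        printf("rank %d sees global sum %f\n", rank, sum);

        MPI_Finalize();
        return 0;
    }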

Aurelien

On 12 March 2008, at 17:01, Brock Palen wrote:

> I have always been told that calls like MPI_Barrier(), MPI_Allreduce(),
> and MPI_Alltoall() should be avoided.
>
> I understand MPI_Alltoall(), since it performs n*(n-1) sends and thus
> grows very quickly. MPI_Barrier() is very latency sensitive and is
> generally not needed in most of the cases where I have seen it used.
>
> But why MPI_Allreduce()?
> What other functions should generally be avoided?
>
> Sorry this is kinda off topic for the list :-)
>
> Brock Palen
> Center for Advanced Computing
> brockp_at_[hidden]
> (734)936-1985