Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] MPI_Allreduce()
From: Aurélien Bouteiller (bouteill_at_[hidden])
Date: 2008-03-12 18:05:14


If you can avoid them, it is better to avoid them. However, it is always
better to use MPI_Alltoall than to code your own all-to-all with
point-to-point messages, and in some algorithms you *need* an all-to-all
communication. What you should understand by "avoid all-to-all" is not
"avoid MPI_Alltoall", but rather "choose a mathematical algorithm that
does not need an all-to-all".
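
Purely as illustration (this sketch is not part of the original message), here
is what that difference looks like in C: a hand-rolled all-to-all built from
point-to-point calls next to the single MPI_Alltoall collective. The buffer
names, counts, and tag are arbitrary assumptions.

    /* Minimal sketch: hand-rolled all-to-all vs. the MPI_Alltoall collective. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* One int destined for every peer, one int expected from every peer. */
        int *sendbuf = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            sendbuf[i] = rank * size + i;

        /* Hand-rolled all-to-all: n*(n-1) point-to-point messages in total,
         * and the posting order can easily serialize at scale. */
        MPI_Request *reqs = malloc(2 * size * sizeof(MPI_Request));
        int nreq = 0;
        for (int peer = 0; peer < size; peer++) {
            if (peer == rank) { recvbuf[peer] = sendbuf[peer]; continue; }
            MPI_Irecv(&recvbuf[peer], 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[nreq++]);
            MPI_Isend(&sendbuf[peer], 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[nreq++]);
        }
        MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);

        /* The collective: one call, and the MPI library picks a tuned
         * algorithm (pairwise exchange, Bruck, etc.) for the machine. */
        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

        free(reqs); free(sendbuf); free(recvbuf);
        MPI_Finalize();
        return 0;
    }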

  The algorithmic complexity of AllReduce is the same as AlltoAll.

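For the Allreduce side of the question, a minimal sketch (again, illustrative
and not from the original mail): every rank contributes a value and every rank
receives the combined result, so it is a "global" collective in the same sense
as an all-to-all. The operation and values are arbitrary assumptions.

    /* Minimal MPI_Allreduce sketch: global sum visible on every rank. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int local = rank + 1;     /* each rank's local contribution */
        int global_sum = 0;

        /* Combines a reduction and a broadcast in a single collective call. */
        MPI_Allreduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        printf("rank %d sees global sum %d\n", rank, global_sum);

        MPI_Finalize();
        return 0;
    }
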
Aurelien

Le 12 mars 08 à 17:01, Brock Palen a écrit :

> I have always been told that calls like MPI_Barrier(), MPI_Allreduce()
> and MPI_Alltoall() should be avoided.
>
> I understand MPI_Alltoall(), as it requires n*(n-1) sends and thus grows
> very quickly. MPI_Barrier() is very latency sensitive and is generally
> not needed in most cases where I have seen it used.
>
> But why MPI_Allreduce()?
> What other functions should generally be avoided?
>
> Sorry this is kinda off topic for the list :-)
>
> Brock Palen
> Center for Advanced Computing
> brockp_at_[hidden]
> (734)936-1985