Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Collective operations and synchronization
From: Eugene Loh (Eugene.Loh_at_[hidden])
Date: 2009-03-23 17:01:57


Shaun Jackman wrote:

> I've just read in the Open MPI documentation [1]

That's the MPI spec, actually.

> that collective operations, such as MPI_Allreduce, may synchronize,
> but do not necessarily synchronize. My algorithm requires a collective
> operation and synchronization; is there a better (more efficient?)
> method than simply calling MPI_Allreduce followed by MPI_Barrier?

MPI_Allreduce is a case that actually "requires" synchronization, in that
no participating process may exit before all processes have entered.
So there should be no need for additional synchronization. A
special case might be an MPI_Allreduce of a 0-length message, in which
case I suppose an MPI implementation could simply "do nothing", and the
synchronization side-effect would be lost.
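
To illustrate, here is a minimal sketch of my own (the variable names
and the MPI_SUM/MPI_INT choices are just placeholders, not Shaun's
code): the MPI_Allreduce call itself already provides the barrier-like
guarantee, so no MPI_Barrier follows it.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, local, global;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        local = rank;   /* each rank contributes its own value */

        /* No rank can return from this call before every rank has
         * entered it, so a separate MPI_Barrier would be redundant. */
        MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        printf("rank %d: sum = %d\n", rank, global);
        MPI_Finalize();
        return 0;
    }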

The MPI spec is mainly talking about a "typical" collective where one
could imagine a process exiting before some processes have entered.
E.g., in a broadcast or scatter, the root could exit before any other
process has entered the operation. In a reduce or gather, the root
could enter after all other processes have exited. For all-to-all,
allreduce, or allgather, however, no process can exit before all
processes have entered, which is the synchronization condition effected
by a barrier. (Again, null message lengths can change things.)
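
Here is a toy example of my own (the sleep and the message contents are
purely for illustration) of why a rooted collective like MPI_Bcast "may"
synchronize but need not: with a small message, the root is free to
buffer the data and return before the delayed ranks have even entered
the call, though a given implementation may also choose to wait.

    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int rank, value = 42;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank != 0)
            sleep(5);   /* non-root ranks enter the broadcast late */

        /* For a small message the root may buffer the data and return
         * immediately, i.e. possibly before the other ranks have called
         * MPI_Bcast.  Nothing in the standard requires it to wait. */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("root has returned from MPI_Bcast\n");

        MPI_Finalize();
        return 0;
    }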

> [1] http://www.mpi-forum.org/docs/mpi21-report-bw/node85.htm