Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Collective operations and synchronization
From: Eugene Loh (Eugene.Loh_at_[hidden])
Date: 2009-03-23 17:01:57

Shaun Jackman wrote:

> I've just read in the Open MPI documentation [1]

That's the MPI spec, actually.

> that collective operations, such as MPI_Allreduce, may synchronize,
> but do not necessarily synchronize. My algorithm requires a collective
> operation and synchronization; is there a better (more efficient?)
> method than simply calling MPI_Allreduce followed by MPI_Barrier?

MPI_Allreduce is a case that effectively "requires" synchronization, in that
no participating process may exit before all processes have entered.
So, there should be no need to add any additional synchronization. A
special case might be an MPI_Allreduce of a 0-length message, in which
case I suppose an MPI implementation could simply "do nothing", and the
synchronization side effect would be lost.
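A rough analogy in plain Python threads (not MPI; the function and names here are illustrative, not any real API): an allreduce can be modeled as each participant contributing a value and then blocking until every contribution is in, so no participant returns before all have entered, which is exactly the barrier property described above.

```python
import threading

def allreduce_sum(n_procs):
    """Thread-based model of an allreduce: every participant contributes
    a value and blocks until all contributions are in, so returning from
    the call implies that everyone has entered it."""
    lock = threading.Lock()
    total = [0]
    # The barrier models the "no one exits before all have entered" property.
    barrier = threading.Barrier(n_procs)
    results = [None] * n_procs

    def participant(rank):
        with lock:
            total[0] += rank      # contribute this rank's local value
        barrier.wait()            # cannot pass until all have contributed
        results[rank] = total[0]  # every rank now sees the full sum

    threads = [threading.Thread(target=participant, args=(r,))
               for r in range(n_procs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(allreduce_sum(4))  # every rank gets 0+1+2+3 = 6
```

With four participants each contributing its rank, every one of them observes the complete sum 6, and none could have returned before the slowest entered.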

The MPI spec is mainly talking about a "typical" collective where one
could imagine a process exiting before some processes have entered.
E.g., in a broadcast or scatter, the root could exit before any other
process has entered the operation. In a reduce or gather, the root
could enter after all other processes have exited. For all-to-all,
allreduce, or allgather, however, no process can exit before all
processes have entered, which is the synchronization condition effected
by a barrier. (Again, null message lengths can change things.)
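By contrast, a rooted collective like broadcast need not synchronize. A thread-based sketch (again illustrative, not MPI) makes the difference concrete: the root's whole "broadcast" is just depositing the message for each receiver, after which it returns immediately; here the root demonstrably finishes before any receiver has even started.

```python
import queue
import threading

def bcast_from_root(value, n_receivers):
    """Thread-based model of a rooted broadcast: the root deposits the
    message for each receiver and returns at once, without waiting for
    any receiver to enter the operation (no barrier property)."""
    mailboxes = [queue.Queue() for _ in range(n_receivers)]
    for q in mailboxes:
        q.put(value)  # the root's entire "bcast": deposit and move on
    # At this point the root has already exited the collective, yet no
    # receiver has entered. Start the receivers now to make that explicit.
    received = [None] * n_receivers

    def receiver(rank):
        received[rank] = mailboxes[rank].get()

    threads = [threading.Thread(target=receiver, args=(r,))
               for r in range(n_receivers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return received

print(bcast_from_root(42, 3))  # [42, 42, 42], though the root finished first
```

The data still arrives correctly, but completion of the root's call tells you nothing about whether the other processes have entered the operation, which is why the spec only says collectives *may* synchronize.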

> [1]