
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] collectives / #1944 progress
From: Eugene Loh (Eugene.Loh_at_[hidden])
Date: 2009-07-01 22:23:51

Jeff Squyres wrote:

> It looks like Eugene's and George's fixes on coll sm resolve all the
> known hangs. We still have flow control issues, but that is
> temporarily being solved by the coll sync component. To be clear:
> running with coll_sync_barrier_before 1000 seems to resolve all known
> hangs, and we think that this is good enough for v1.3.3. We should
> CMR whatever is necessary to the v1.3 branch.
> ==> We should also default coll_sync_barrier_before to 1000 for
> v1.3.3 (i.e., ensure sync activates itself).
> For the future, we have a two-pronged plan:

I suspect the standard procedure is that we all look quickly at this
e-mail message, file it appropriately, and then resume our normal lives.
Yes? Or is such a plan somehow put into place?
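For reference, the workaround quoted above is just an MCA parameter setting; a sketch of how it would be activated (the application name is a placeholder, and the mca-params.conf path is the usual per-user location):

```shell
# Inject a barrier into every 1000th collective call via the coll sync
# component, as proposed above (param name from the quoted message).
mpirun --mca coll_sync_barrier_before 1000 -np 4 ./my_app

# Or persist it in the per-user MCA parameter file:
echo "coll_sync_barrier_before = 1000" >> ~/.openmpi/mca-params.conf
```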

> 1. Clean up the sm btl:
> 1a. Remove all dead code.

What do you mean here? (Possibly you mean getting rid of sm pending
sends if we implement 1b properly, but I'm not sure.)

> 1b. Resize free_list_max and fifo_size MCA params to effect good
> enough flow control.
> 1c. Possibly: convert from FIFO's to linked lists (for future
> maintenance purposes, not necessarily to fix problems).

Another idea is to have two kinds of FIFOs. One is just for returning
fragments. The other is for incoming message fragments. It would even
seem as though one would no longer need "free lists", but just use the
ack FIFO to manage fragments. (ALL of this is complicated by the fact
that we have two kinds of fragments, eager and max, but fortunately
those details can be pushed onto the sorry fool who'll be implementing
all this. I wonder who that'll be.)

> 2. Test, enable, and continue to develop the coll sm module. Using
> this module will avoid the p2p unexpected message queue explosion
> that we're seeing (at least for collectives with short messages).
> It nominally has broadcast, barrier, reduce, and allreduce
> implemented. We really only need to a) test the heck outta them, and
> b) add gather, scatter, scan, and exscan to the list. All the other
> collective operations have implicit synchronization and won't run
> into the unbounded unexpected queue issues. The bcast loop
> reproducer seemed to work fine for me with the coll sm, but it
> segv'ed immediately for Ralph. So clearly some work needs to be done.
> We think that these two items should be the main features for 1.3.4.