Open MPI Development Mailing List Archives


Subject: [OMPI devel] collectives / #1944 progress
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-07-01 11:52:05


It looks like Eugene's and George's fixes on coll sm resolve all the
known hangs. We still have flow control issues, but those are
temporarily being worked around by the coll sync component. To be
clear: running with coll_sync_barrier_before set to 1000 seems to
resolve all known hangs, and we think that this is good enough for
v1.3.3. We should CMR whatever is necessary to the v1.3 branch.

==> We should also default coll_sync_barrier_before to 1000 for v1.3.3
(i.e., ensure the sync component activates itself).
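
For reference, forcing that setting on the command line would look
something like the following; the application name is a placeholder,
and this is just a sketch of the usual MCA-parameter convention, not a
tested recommendation:

```shell
# Have the coll sync component inject a barrier before every 1000th
# collective operation (the value proposed above as the v1.3.3 default).
# "./my_app" is a hypothetical application.
mpirun --mca coll_sync_barrier_before 1000 -np 16 ./my_app
```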

For the future, we have a two-pronged plan:

1. Clean up the sm btl:
   1a. Remove all dead code.
   1b. Adjust the free_list_max and fifo_size MCA params to effect
good-enough flow control.
   1c. Possibly: convert from FIFOs to linked lists (for future
maintenance purposes, not necessarily to fix problems).
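
As a sketch of what 1b might look like from the command line -- the
exact parameter names and values below are assumptions that would need
to be checked against `ompi_info --param btl sm`:

```shell
# Hypothetical flow-control tuning of the sm btl; parameter names and
# values here are guesses for illustration, not tested recommendations.
mpirun --mca btl_sm_free_list_max 128 \
       --mca btl_sm_fifo_size 4096 \
       -np 16 ./my_app
```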

2. Test, enable, and continue to develop the coll sm module. Using
this module will avoid the p2p unexpected message queue explosion that
we're seeing (at least for collectives with short messages). It
nominally has broadcast, barrier, reduce, and allreduce implemented.
We really only need to (a) test the heck outta them, and (b) add gather,
scatter, scan, and exscan to the list. All the other collective
operations have implicit synchronization and won't run into the
unbounded unexpected queue issues. The bcast loop reproducer seemed
to work fine for me with the coll sm, but it segv'ed immediately for
Ralph. So clearly some work needs to be done.
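
For testing item 2, one way to force the coll sm module into use is to
raise its selection priority; the priority parameter name follows the
usual coll-component convention, but the value and the reproducer name
below are assumptions:

```shell
# Force selection of the coll sm component by giving it the highest
# priority, then run a bcast-loop style reproducer.
mpirun --mca coll_sm_priority 100 -np 16 ./bcast_loop
```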

We think that these two items should be the main features for v1.3.4.

-- 
Jeff Squyres
Cisco Systems