Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] RFC: optimize probe in ob1
From: George Bosilca (bosilca_at_[hidden])
Date: 2014-02-10 18:29:57


While this sounds like an optimization for highly specific application behavior, it is justifiable under some usage scenarios. I have several issues with the patch. Here are the minor ones:

1. It makes modifications that are not necessary for the patch itself (for example, removing the static keyword from the mca_pml_ob1_comm_proc_t class instance).

2. Moving add_fragment_to_unexpected changes the meaning of the code.

3. If this change gets pushed into the trunk, the only reason for the existence of last_probed disappears. Thus, the variable should be removed as well.

4. The last part of the patch is not related to this topic and should be pushed separately.

Now the major one. With this change you alter the most performance-critical piece of code, adding a non-negligible number of potential cache misses (looking up the number of elements, adding/removing an element from a queue). This deserves careful evaluation and consideration, not only for the less likely usage pattern you describe but for the more mainstream uses.


On Feb 7, 2014, at 23:01 , Nathan Hjelm <hjelmn_at_[hidden]> wrote:

> What: The current probe algorithm in ob1 is linear with respect to the
> number or processes in the job. I wish to change the algorithm to be
> linear in the number of processes with unexpected messages. To do this I
> added an additional opal_list_t to the ob1 communicator and made the ob1
> process a list_item_t. When an unexpected message comes in on a proc it
> is added to that proc's unexpected message queue and the proc is added
> to the communicator's list of procs with unexpected messages
> (unexpected_procs) if it isn't already on that list. When matching a
> probe request this list is used to determine which procs to look at to
> find an unexpected message. The new list is protected by the matching
> lock so no extra locking is needed.
>
> Why: I have a benchmark that makes heavy use of MPI_Iprobe in one of its
> phases. I discovered that the primary reason this benchmark was running
> slow with Open MPI is the probe algorithm.
>
> When: This is another simple optimization. It only affects the
> unexpected message path and will speed up probe requests. This is
> intended to go into 1.7.5. Setting the timeout to next Tuesday (which
> gives me time to verify the improvement at scale -- 131,000 PEs).
>
> See the attached patch.
> -Nathan
> <iprobe_patch.patch>
> _______________________________________________
> devel mailing list
> devel_at_[hidden]