Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] trouble using --mca mpi_yield_when_idle 1
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2008-12-12 16:39:31


On Dec 12, 2008, at 11:46 AM, Eugene Loh wrote:

> FWIW, I've run into the need for this a few times due to HPCC tests
> on large (>100 MPI procs) nodes or multicore systems. HPCC (among
> other things) looks at the performance of a single process while all
> other np-1 processes spinwait -- or of a single pingpong pair while
> all other np-2 processes wait. I'm not 100% sure what's going on,
> but I'm guessing that the hard spinning of waiting processes hits
> the memory system or some other resource, degrading the performance
> of working processes. This is on nodes that are not oversubscribed.
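
[Editor's note: for illustration only, here is a minimal C/MPI sketch, not from the original thread, of the HPCC-style pattern Eugene describes: ranks 0 and 1 run a pingpong while all of the other np-2 ranks sit in MPI_Recv, which Open MPI services by polling (spin-waiting) unless mpi_yield_when_idle is set. The buffer size and iteration count are arbitrary placeholders; run it with at least 2 ranks.]

    /* Pingpong between ranks 0 and 1 while every other rank waits in
     * MPI_Recv.  Buffer size and iteration count are arbitrary. */
    #include <mpi.h>
    #include <stdio.h>

    #define MSG_BYTES 1024
    #define ITERS     1000

    int main(int argc, char **argv)
    {
        int rank, size;
        char buf[MSG_BYTES];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0 || rank == 1) {
            /* The pingpong pair: measure round-trip time while the rest
             * of the job idles. */
            int peer = 1 - rank;
            double t0 = MPI_Wtime();
            for (int i = 0; i < ITERS; ++i) {
                if (rank == 0) {
                    MPI_Send(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                } else {
                    MPI_Recv(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    MPI_Send(buf, MSG_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                }
            }
            if (rank == 0) {
                printf("pingpong: %f us/iter\n",
                       1e6 * (MPI_Wtime() - t0) / ITERS);
                /* Release the ranks that have been waiting. */
                for (int r = 2; r < size; ++r)
                    MPI_Send(buf, 1, MPI_CHAR, r, 1, MPI_COMM_WORLD);
            }
        } else {
            /* All other np-2 ranks wait here; with default settings Open MPI
             * polls inside this MPI_Recv rather than blocking. */
            MPI_Recv(buf, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }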

I guess I could <waving hands> see how shared-memory (shmem) communication
could lead to this kind of bottleneck, and how increasing core counts
would magnify the effect. It would be good to confirm whether shmem
activity is actually the cause of the slowdown, since that would be a
useful data point for deciding whether we should do blocking progress
(or, more specifically, whether we need to raise the priority of
implementing blocking progress).
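
[Editor's note: for reference, and again not from the thread itself, the parameter in the subject line is set per-run on the mpirun command line, e.g.:

    mpirun --mca mpi_yield_when_idle 1 -np 16 ./hpcc

where ./hpcc and the process count are placeholders. With the parameter set to 1, idle processes yield the CPU between polls of Open MPI's progress loop rather than spinning at full rate; that is still polling, not the blocking progress discussed above.]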

-- 
Jeff Squyres
Cisco Systems