Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] High CPU usage with yield_when_idle =1 on CFS
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-09-02 07:45:17


This might also be related to the fact that sched_yield() effectively does nothing in recent Linux kernels (there was a big debate about this at kernel.org).

IIRC, there's some kernel parameter that you can tweak to make it behave better, but I'm afraid I don't remember what it is. Some googling might find it...?
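
The tunable being alluded to here may be the kernel.sched_compat_yield sysctl, present in 2.6.23 through 2.6.38 kernels, which makes sched_yield() behave more like the old O(1) scheduler's aggressive yield. A minimal sketch of the spin-then-yield pattern at issue (the wait_for_flag() helper is a hypothetical illustration, not Open MPI code):

/* Under stock CFS, sched_yield() only moves the caller to the tail of
 * its priority level's runqueue, so a spinning process keeps burning
 * CPU.  Setting kernel.sched_compat_yield=1 (e.g. via
 * /proc/sys/kernel/sched_compat_yield on 2.6.23-2.6.38 kernels) may
 * restore the older, more aggressive yield behavior. */
#include <sched.h>

static void wait_for_flag(volatile int *flag)  /* hypothetical helper */
{
    while (!*flag)
        sched_yield();  /* near no-op under CFS defaults */
}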

On Sep 1, 2011, at 10:06 PM, Eugene Loh wrote:

> On 8/31/2011 11:48 PM, Randolph Pullen wrote:
>> I recall a discussion some time ago about yield, the Completely F%’d Scheduler (CFS) and OpenMPI.
>>
>> My system is currently suffering from massive CPU use while busy waiting. This gets worse as I try to bump up user concurrency.
> Yup.
>> I am running with yield_when_idle, but it's not enough.
> Yup.
>> Is there anything else I can do to release some CPU resource?
>> I recall seeing one post where usleep(1) was inserted around the yields; is this still feasible?
>>
>> I'm using 1.4.1 - is there a fix to be found in upgrading?
>> Unfortunately I am stuck with CFS, as I need Linux. Currently it's Ubuntu 10.10 with kernel 2.6.32.14.
> I don't think OMPI yet does much (or any) better than what you've observed. You might be able to hack something up yourself. In something I did recently, I replaced blocking sends and receives with test/nanosleep loops. An "optimum" solution (minimum latency, optimal performance at arbitrary levels of under- and oversubscription) might be elusive, but hopefully you'll quickly be able to piece together something for your particular purposes. In my case, I was lucky and the results were very gratifying: my bottleneck vaporized at modest levels of oversubscription (2-4 more processes than processors).
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
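
For concreteness, the test/nanosleep replacement Eugene describes might look something like the sketch below. The helper name and the 1-microsecond interval are illustrative assumptions, not his actual code:

#include <mpi.h>
#include <time.h>

/* Hedged sketch: replace a blocking MPI_Recv with a nonblocking
 * receive polled by MPI_Test, sleeping between polls so that an
 * oversubscribed rank releases its CPU instead of spinning.  The
 * 1-microsecond nap is a starting point, not a tuned value. */
static void recv_with_nap(void *buf, int count, MPI_Datatype type,
                          int src, int tag, MPI_Comm comm,
                          MPI_Status *status)
{
    MPI_Request req;
    int done = 0;
    struct timespec nap = { 0, 1000 };   /* 1 us */

    MPI_Irecv(buf, count, type, src, tag, comm, &req);
    MPI_Test(&req, &done, status);
    while (!done) {
        nanosleep(&nap, NULL);           /* give up the CPU */
        MPI_Test(&req, &done, status);
    }
}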

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/