On Tue, 2009-12-08 at 10:14 +0000, Number Cruncher wrote:
> Whilst MPI has traditionally been run on dedicated hardware, the rise of
> cheap multicore CPUs makes it very attractive for ISVs such as ourselves
> (http://www.cambridgeflowsolutions.com/) to build a *single* executable
> that can be run in batch mode on a dedicated cluster *or* interactively
> on a user's workstation.
> Once you've taken the pain of writing a distributed-memory app (rather
> than shared-memory/multithreaded), MPI provides a transparent API to
> cover both use cases above. *However*, at the moment, the lack of
> select()-like behaviour (instead of polling) means we have to write
> custom code to avoid hogging a workstation. A runtime-selectable
> mechanism would be perfect!
Speaking as an independent observer here (i.e. not an OMPI developer), I
don't think you'll find anyone who wouldn't view what you are asking for
as a good thing; it's something that has been, and continues to be,
discussed often. I for one would love to see it: whilst, as Richard
says, it can increase latency, it can also reduce noise and so help
performance.
As you say, you are one of a new breed of MPI users, and this feature
would most likely benefit you more than the traditional
dedicated-machine users of MPI; I expect it to become more of an issue
as MPI is adopted by a wider audience. As Open MPI is an open-source
project, the question is not what appetite there is amongst users, but
whether there is any one user who is motivated enough, able to do the
work, and not busy doing other things. I've implemented this before and
it's not an easy feature to add by any means; it tends to be very
intrusive into the code-base, which itself causes problems.
There was another thread on this mailing list this week where Ralph
recommended setting the yield_when_idle MCA param ("--mca
yield_when_idle 1"), which causes processes to call sched_yield() when
polling. The end result is that they will still consume 100% of
otherwise-idle CPU time, but when other programs want to use the CPU the
MPI processes will not hog it; rather, they let the other processes use
as much CPU time as they want and only spin when the CPU would otherwise
be idle. This is something I use daily, and it greatly increases the
responsiveness of systems which are mixing idle MPI processes with other
work.
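For concreteness, setting that param on the command line looks like the following (a sketch assuming Open MPI's mpirun, with the param name as given in the thread; the process count and program name are placeholders):

```shell
# Ask Open MPI processes to call sched_yield() while polling,
# so idle spinning does not starve other work on the machine.
mpirun --mca yield_when_idle 1 -np 4 ./my_mpi_app
```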
Ashley Pittman, Bath, UK.
Padb - A parallel job inspection tool for cluster computing