
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] OpenMPI, PLPA and Linux cpuset/cgroup support
From: Chris Samuel (csamuel_at_[hidden])
Date: 2009-07-24 03:26:57

----- "Jeff Squyres" <jsquyres_at_[hidden]> wrote:

Hi Jeff,

> I'm the "primary PLPA" guy that Ralph referred to, and I was on
> vacation last week -- sorry for missing all the chatter.

No worries!

> Based on your mails, it looks like you're out this week -- so little
> will likely occur. I'm at the MPI Forum standards meeting next week,
> so my replies to email will be sporadic.

Not a problem, I quite understand.

> OMPI is pretty much directly calling PLPA to set affinity for
> "processors" 0, 1, 2, 3 (which PLPA translates into Linux virtual
> processor IDs, and then invokes sched_setaffinity with each of those
> IDs).

Cool, so it does indeed sound like something that can be
solved purely inside PLPA. That's good to know!

> Note that the EFAULT errors you're seeing in the output are
> deliberate. [...]

Great, after reading a bit more I had the impression that might
be what was going on; thanks for the confirmation!

> But as to why it's getting EINVAL, that could be wonky.
> We might want to take this to the PLPA list and have you
> run some small, non-MPI examples to ensure that PLPA is
> parsing your /sys tree properly, etc.

Not a problem.

> Ping when you get back from vacation.

I'm back Monday (which is Sunday arvo for you I think).


Christopher Samuel - (03) 9925 4751 - Systems Manager
 The Victorian Partnership for Advanced Computing
 P.O. Box 201, Carlton South, VIC 3053, Australia
VPAC is a not-for-profit Registered Research Agency