Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] OpenMPI, PLPA and Linux cpuset/cgroup support
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-07-22 10:57:26

I'm the "primary PLPA" guy that Ralph referred to, and I was on
vacation last week -- sorry for missing all the chatter.

Based on your mails, it looks like you're out this week -- so little
will likely occur. I'm at the MPI Forum standards meeting next week,
so my replies to email will be sporadic.

OMPI is pretty much directly calling PLPA to set affinity for
"processors" 0, 1, 2, 3 (PLPA translates these into Linux virtual
processor IDs, and then invokes sched_setaffinity with each of them).

Note that the EFAULT errors you're seeing in the output are
deliberate. PLPA has to "probe" the kernel to see which flavor of API
it uses. Based on the error codes that come back, it knows which
flavor to use when actually invoking the sched_setaffinity syscall.
So you can ignore those EFAULTs.

But as to why it's getting EINVAL, that could be wonky. We might want
to take this to the PLPA list and have you run some small, non-MPI
examples to ensure that PLPA is parsing your /sys tree properly, etc.

Ping when you get back from vacation.

On Jul 19, 2009, at 8:14 PM, Chris Samuel wrote:

> ----- "Ralph Castain" <rhc_at_[hidden]> wrote:
>
> > Should just be
> >
> > -mca paffinity_base_verbose 5
> >
> > Any value greater than 4 should turn it "on"
>
> Yup, that's what I was trying, but couldn't get any output.
>
> > Something I should have mentioned. The paffinity_base_service.c file
> > is solely used by the rank_file syntax. It has nothing to do with
> > setting mpi_paffinity_alone and letting OMPI self-determine the
> > process-to-core binding.
>
> That would explain why I'm not seeing any output from it
> then; it and the solaris module are the only ones containing
> any opal_output() statements in the paffinity section of MCA.
> I'll try scattering some opal_output()'s into the linux module
> instead, along the same lines as the base module.
>
> > You want to dig into the linux module code that calls down
> > into the plpa. The same mca param should give you messages
> > from the module, and -might- give you messages from inside
> > plpa (not sure of the latter).
>
> The PLPA output is not run time selectable:
>
> #if defined(PLPA_DEBUG) && PLPA_DEBUG && 0
>
> :-)
>
> cheers,
> Chris
> --
> Christopher Samuel - (03) 9925 4751 - Systems Manager
> The Victorian Partnership for Advanced Computing
> P.O. Box 201, Carlton South, VIC 3053, Australia
> VPAC is a not-for-profit Registered Research Agency
> _______________________________________________
> devel mailing list
> devel_at_[hidden]

Jeff Squyres