Hardware Locality Users' Mailing List Archives

Subject: Re: [hwloc-users] MPI + threads parallelization
From: Ondrej Marsalek (ondrej.marsalek_at_[hidden])
Date: 2010-10-20 08:51:20


Thanks everyone for the useful information.

Ondrej

On Fri, Oct 1, 2010 at 11:02, Brice Goglin <Brice.Goglin_at_[hidden]> wrote:
>
> It mostly depends on the MPI implementation. Several of them are
> switching to hwloc for binding, so you will likely have an mpiexec option
> to do so.
>
> Otherwise, assuming mpiexec does not bind anything and you have 4 NUMA
> nodes, you can do it manually with something like:
>  mpiexec -np 1 hwloc-bind node:0 myprog : -np 1 hwloc-bind node:1 myprog \
>    : -np 1 hwloc-bind node:2 myprog : -np 1 hwloc-bind node:3 myprog
> which runs 4 instances of "myprog" and binds each one to a different NUMA node.
>
> Brice
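
For reference, a minimal C sketch of roughly what "hwloc-bind node:0 myprog" does before launching the program, assuming hwloc 1.x (where NUMA nodes appear as HWLOC_OBJ_NODE objects): load the topology, find the node, and bind the current process to its CPUs.

  /* Bind the current process to the CPUs of the first NUMA node.
   * Assumes hwloc 1.x, where NUMA nodes are HWLOC_OBJ_NODE objects. */
  #include <hwloc.h>
  #include <stdio.h>

  int main(void)
  {
      hwloc_topology_t topo;
      hwloc_obj_t node;

      hwloc_topology_init(&topo);
      hwloc_topology_load(topo);

      /* First NUMA node in the machine. */
      node = hwloc_get_obj_by_type(topo, HWLOC_OBJ_NODE, 0);
      if (node != NULL)
          hwloc_set_cpubind(topo, node->cpuset, HWLOC_CPUBIND_PROCESS);
      else
          fprintf(stderr, "no NUMA node in this topology\n");

      /* ... actual work runs with the binding in place ... */

      hwloc_topology_destroy(topo);
      return 0;
  }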

On Fri, Oct 1, 2010 at 12:14, Samuel Thibault <samuel.thibault_at_[hidden]> wrote:
>
> Sure. You can for instance bind each whole MPI process to a NUMA node and
> let the system manage threads afterward, or even bind threads inside the
> process. Of course, to get a coherent placement, you'll need to do a bit
> of maths to bind according to the MPI rank number.
>
> Note that lstopo --top shows the bound processes (and even threads on
> Linux), which will probably be useful to debug your code :)
>
> Samuel
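
A minimal sketch of that rank arithmetic, assuming hwloc 1.x and an mpiexec that does not bind anything itself: each MPI rank binds itself to NUMA node (rank % number_of_nodes), and each OpenMP thread can then pin itself to one core inside that node.

  /* Bind each MPI rank to one NUMA node, round-robin, then each OpenMP
   * thread to one core of that node.  Assumes hwloc 1.x (HWLOC_OBJ_NODE)
   * and that mpiexec performs no binding of its own. */
  #include <hwloc.h>
  #include <mpi.h>
  #include <omp.h>

  int main(int argc, char *argv[])
  {
      int rank, nnodes;
      hwloc_topology_t topo;
      hwloc_obj_t node;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      hwloc_topology_init(&topo);
      hwloc_topology_load(topo);

      nnodes = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_NODE);
      if (nnodes > 0) {
          /* Spread MPI ranks over the NUMA nodes. */
          node = hwloc_get_obj_by_type(topo, HWLOC_OBJ_NODE, rank % nnodes);
          hwloc_set_cpubind(topo, node->cpuset, HWLOC_CPUBIND_PROCESS);

          /* Optionally pin each thread to one core inside that node. */
          #pragma omp parallel
          {
              int ncores = hwloc_get_nbobjs_inside_cpuset_by_type(
                               topo, node->cpuset, HWLOC_OBJ_CORE);
              if (ncores > 0) {
                  hwloc_obj_t core = hwloc_get_obj_inside_cpuset_by_type(
                                         topo, node->cpuset, HWLOC_OBJ_CORE,
                                         omp_get_thread_num() % ncores);
                  hwloc_set_cpubind(topo, core->cpuset, HWLOC_CPUBIND_THREAD);
              }
          }
      }

      /* ... MPI + threads computation ... */

      hwloc_topology_destroy(topo);
      MPI_Finalize();
      return 0;
  }

Checking the result with "lstopo --top", as Samuel suggests, shows where each process (and on Linux, each thread) ended up.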

On Fri, Oct 1, 2010 at 17:46, Dave Goodell <goodell_at_[hidden]> wrote:
> On Oct 1, 2010, at 4:02 AM CDT, Brice Goglin wrote:
>
>>
>> It mostly depends on the MPI implementation. Several of them are
>> switching to hwloc for binding, so you will likely have an mpiexec option
>> to do so.
>
> FWIW, MPICH2 supports this when using the hydra process manager: http://wiki.mcs.anl.gov/mpich2/index.php/Using_the_Hydra_Process_Manager#Process-core_Binding
>
> Open MPI has similar functionality documented somewhere on their website, but I don't have the link handy.
>
> -Dave