Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Some Questions on Building OMPI on Linux Em64t
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-05-27 08:11:00

On May 26, 2010, at 3:32 PM, Michael E. Thomadakis wrote:

> How do you handle thread/task and memory affinity? Do you pass the requested affinity settings to the batch scheduler and then let it issue the specific placements of threads to the nodes?

Not as of yet, no. At the moment, Open MPI only obeys its own affinity settings, usually passed via mpirun (see mpirun(1)).
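For example (a sketch only; the exact option names vary by Open MPI version, and `./my_mpi_app` is a placeholder -- check mpirun(1) for your release):

```shell
# Bind each MPI process to a single core, mapping ranks round-robin by core
# (the -bind-to-core / -bycore options appeared in the 1.3/1.4 mpirun):
mpirun -np 8 -bind-to-core -bycore ./my_mpi_app

# Equivalent MCA-parameter form for 1.4-era releases:
mpirun -np 8 --mca mpi_paffinity_alone 1 ./my_mpi_app
```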

> This is something we are concerned about, as we are running multiple jobs on the same node and we don't want to oversubscribe cores by binding their threads inadvertently.
> Looking at ompi_info
> $ ompi_info | grep -i aff
> MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
> MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
> does this mean we have full affinity support included, or do I need to involve HWLOC in any way?

Yes, Open MPI processes can bind themselves to sockets / cores. The 1.4 series uses PLPA behind the scenes for processor affinity (the first_use component handles memory affinity). The 1.5 series will eventually use hwloc: we recently imported it into our development trunk, but it's still "soaking" before moving over to the v1.5 branch (we've found at least one minor problem so far). It'll likely be there for the v1.5.1 series.

That being said, you can certainly ignore OMPI's intrinsic binding capabilities and use a standalone program like hwloc-bind or taskset to bind MPI processes.
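For instance (a sketch; `./my_mpi_app` is a placeholder, and the hwloc-bind location syntax shown assumes a 1.x-era hwloc):

```shell
# Bind a process to cores 0-3 with taskset (util-linux):
taskset -c 0-3 ./my_mpi_app

# Or bind it to the first core of the first socket with hwloc-bind:
hwloc-bind socket:0.core:0 -- ./my_mpi_app
```

With either tool you'd typically wrap the application in a per-rank launch script so each MPI process gets its own distinct core set.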

Jeff Squyres