
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] RFC: Remove all other paffinity components
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-05-17 17:26:33

On May 15, 2010, at 4:39 PM, Ralph Castain wrote:

> So, to ensure I understand, you are proposing that we completely eliminate the paffinity framework and commit to hwloc in its place?

I think there are two issues here:

- topology information
- binding

hwloc supports both. paffinity mainly supports binding; it also provides some minor socket/core mapping information, but mainly as a means to support binding better. hwloc's topology information is far more complete than paffinity's.

How about this? (and this is very half-baked)

- commit hwloc to opal/hwloc; the entire tree can call it
  - it's still TBD how to compile this out (e.g., for embedded environments)
  - it *may* need something like #if OPAL_HAVE_HWLOC
- split paffinity into two frameworks (because some OSes support one but not the other):
  - binding: just for getting and setting processor affinity
  - hwmap: just for mapping (board, socket, core, hwthread) <--> OS processor ID

In this way, if hwloc ever dies, we can still have OS-specific plugins for these two things, and the #if OPAL_HAVE_HWLOC will be 0.

hwloc provides a very rich API for traversing the topology information. I don't think the main OPAL/ORTE/OMPI code base necessarily needs all of that functionality for the general case -- i.e., it mostly just needs the binding/hwmap information (e.g., "bind this process to (board 1, socket 3, core 2, hwthread 1)").

Anything that needs the detailed hwloc information (e.g., tuning the sm btl based on cache sizes reported by hwloc) can use #if OPAL_HAVE_HWLOC to protect itself.

Jeff Squyres