On Thu, Nov 8, 2012 at 11:07 AM, Jeff Squyres <jsquyres_at_[hidden]> wrote:
> Correct. PLPA was a first attempt at a generic processor affinity solution. hwloc is a 2nd generation, much Much MUCH better solution than PLPA (we wholly killed PLPA
> after the INRIA guys designed hwloc).
We ported OGS/Grid Engine to hwloc 1.5 years ago (the original core
binding code in Grid Engine uses PLPA).
From an API consumer's point of view (having used both PLPA and hwloc), some of the
important hwloc advantages are:
1) Grid Engine can now use the same piece of code on different
platforms: Linux, Solaris, AIX, Mac OS X, FreeBSD, Tru64, HP-UX, and
Windows. Before, with PLPA, we only had support for Linux & Solaris.
2) Support for newer CPU architectures & hardware. As the development
of PLPA stopped a few years ago, many of the newer architectures were
not recognized properly. We switched over to hwloc when the original
Grid Engine core binding code stopped working on the AMD Magny-Cours
(Opteron 6100 series).
To be fair to PLPA, had its development continued, it should have had
no issues with those newer architectures. That said, the data
structures of hwloc seem able to model newer hardware components more
gracefully.
We now use information from hwloc to optimize job placement on AMD
Bulldozer processors (including Piledriver). Currently hwloc just
treats each Bulldozer module as 2 cores, so we still have to add a bit
of logic in the Grid Engine code to do what we need.
Open Grid Scheduler - The Official Open Source Grid Engine
>> Re: layering, I believe you are saying that the relationship to libnuma is not one where hwloc is adding higher-level functionalities to libnuma, but rather hwloc is a much improved alternative except for a few system calls it makes via libnuma out of necessity or convenience.