News
- hwloc v1.10.0 released (new feature release)
- R&D engineer position at Inria Bordeaux (France)
- Network Locality (netloc): a new hwloc companion
- hwloc tutorial material: slides and code available
The Portable Hardware Locality (hwloc) software package provides a
portable abstraction (across OS, versions, architectures, ...) of the
hierarchical topology of modern architectures, including NUMA memory
nodes, sockets, shared caches, cores and simultaneous
multithreading. It also gathers various system attributes such as
cache and memory information as well as the locality of I/O devices
such as network interfaces, InfiniBand HCAs or GPUs.
It primarily aims at helping
applications gather information about modern computing
hardware so as to exploit it accordingly and efficiently.
Portability and support
hwloc supports the following operating systems:
- Linux (including old kernels not having sysfs topology
information, with knowledge of cpusets, offline CPUs, ScaleMP vSMP,
NumaScale NumaConnect, and Kerrighed support)
- Darwin / OS X
- FreeBSD and its variants (such as kFreeBSD/GNU)
- OSF/1 (a.k.a., Tru64)
- Microsoft Windows
- IBM BlueGene/Q Compute Node Kernel (CNK)
Additionally, hwloc can detect the locality of PCI devices as well as OpenCL,
CUDA and Xeon Phi accelerators, and network and InfiniBand interfaces.
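As a rough sketch of what this looks like through the C API (using the
hwloc 1.x HWLOC_TOPOLOGY_FLAG_IO_DEVICES flag; error handling omitted for
brevity), the following program enables I/O discovery and prints, for each
PCI device, the set of CPUs close to it:

    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topology;
        hwloc_topology_init(&topology);
        /* ask for PCI devices to be added to the topology (hwloc 1.x flag) */
        hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_IO_DEVICES);
        hwloc_topology_load(topology);

        /* walk the PCI devices; the closest non-I/O ancestor tells us
         * which CPUs are near each device */
        hwloc_obj_t dev = NULL;
        while ((dev = hwloc_get_next_pcidev(topology, dev)) != NULL) {
            hwloc_obj_t near = hwloc_get_non_io_ancestor_obj(topology, dev);
            char set[128];
            hwloc_bitmap_snprintf(set, sizeof(set), near->cpuset);
            printf("PCI %04x:%04x is close to cpuset %s\n",
                   dev->attr->pcidev.vendor_id,
                   dev->attr->pcidev.device_id, set);
        }

        hwloc_topology_destroy(topology);
        return 0;
    }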
Since it uses standard operating system information, hwloc's support is
almost always independent of the processor type (x86, powerpc, ia64, ...)
and relies only on operating system support. The only exception is
kFreeBSD, which does not provide topology information; there hwloc uses an
x86-only CPUID-based backend (which could be used for other OSes too).
To check whether hwloc works on a particular machine, just try to build
it and run lstopo. If some things do not look right (e.g., bogus or
missing cache information), see Questions and bugs below.
hwloc may display the topology in multiple convenient formats, from
graphical views to plain text and XML.
It also offers a powerful programming interface to gather information
about the hardware, bind processes, and much more.
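As a minimal sketch of that interface (not an official example; error
checking omitted), the following C program loads the current machine's
topology, counts its cores, and binds the process to the first one:

    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topology;

        /* build a topology describing the current machine */
        hwloc_topology_init(&topology);
        hwloc_topology_load(topology);

        /* count the cores that were discovered */
        int ncores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
        printf("found %d cores\n", ncores);

        /* bind the current process to the first core, if any */
        hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, 0);
        if (core)
            hwloc_set_cpubind(topology, core->cpuset, HWLOC_CPUBIND_PROCESS);

        hwloc_topology_destroy(topology);
        return 0;
    }

Such a program can typically be built with
cc example.c $(pkg-config --cflags --libs hwloc).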
More details are available in the Documentation
(in both PDF and HTML). The documentation for each version (v1.10.0 being
the latest) contains sample outputs and an API usage example.
Materials from several hwloc tutorials (slides and code) are also available.
Getting and using hwloc
The latest hwloc releases are available on the download page.
The GIT repository is also accessible for browsing or checkout.
Perl bindings are available from Bernd Kallies.
Python bindings are available from Guy Streeter,
as a Fedora RPM and tarball
or within their git tree.
The following software packages already benefit from hwloc or are being
ported to it:
- MPI implementations and tools
- Runtime systems and compilers, among them:
  - the StarPU runtime system for heterogeneous multicore architectures
  - the Parallel Runtime Scheduling and Execution Controller
  - the Qthreads project
  - the Rose compiler
  - the ForestGOMP OpenMP platform for hierarchical architectures
- Parallel scientific libraries
- Resource managers and job schedulers
- and even more!
How do you pronounce "hwloc"?
When in doubt, say "hardware locality."
Some of the core developers say "H. W. Loke"; others say
"H. W. Lock". We've heard several other pronunciations as well. We
don't really have a strong preference for how you say it; we
chose the name for its Google-ability, not its pronunciation.
But now at least you know how we pronounce it. :-)
Questions and bugs
Questions, comments, and bugs should be sent to the hwloc mailing lists.
When reporting problems on Linux, please attach the /proc + /sys tarball
generated by the installed hwloc-gather-topology script; on Solaris, send
the output of kstat cpu_info; on Darwin or the BSDs, send the output of
sysctl hw. Also make sure you run a recent OS (e.g., Linux kernel) and
possibly a recent BIOS too, since hwloc gathers topology information from
them. Passing --enable-debug to ./configure also enables a lot of helpful
debugging output.
Also be sure to see the hwloc wiki and bug tracking system.
If you are looking for a general-purpose hwloc citation, please use the following one.
This paper introduces hwloc, its goals and its implementation.
It then shows how hwloc may be used by MPI implementations and OpenMP
runtime systems as a way to carefully place processes and adapt communication
strategies to the underlying hardware.
François Broquedis, Jérôme Clet-Ortega, Stéphanie Moreaud, Nathalie Furmento, Brice Goglin, Guillaume Mercier, Samuel Thibault, and Raymond Namyst.
hwloc: a Generic Framework for Managing Hardware Affinities in HPC Applications.
In Proceedings of the 18th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP2010),
Pisa, Italy, February 2010.
IEEE Computer Society Press.
If you are looking for a citation about I/O device locality and cluster/multi-node support, please use the following one instead.
This paper explains how I/O locality is managed in hwloc, how device details are represented,
how hwloc interacts with other libraries, and how multiple nodes such as a cluster can be efficiently managed.
Brice Goglin.
Managing the Topology of Heterogeneous Cluster Nodes with Hardware Locality (hwloc).
In Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS 2014),
Bologna, Italy, July 2014.
See also the Open MPI publication list and the bottom of the
Inria hwloc research page.
History / credits
hwloc is the evolution and merger of the libtopology
project and the Portable Linux Processor Affinity
(PLPA) project. Because of functional and ideological overlap,
these two code bases and ideas were merged and released under the name
"hwloc" as an Open MPI sub-project.
libtopology was initially developed by the Inria Runtime Team-Project
(headed by Raymond
Namyst). PLPA was initially developed by the Open MPI development
team as a sub-project. Both are now deprecated in favor of hwloc,
which is distributed here as an Open MPI sub-project.
Portability tests are performed thanks to
the Inria Continuous Integration platform.