
Hardware Locality Development Mailing List Archives


Subject: Re: [hwloc-devel] understanding PCI device to NUMA node connection
From: Guy Streeter (streeter_at_[hidden])
Date: 2011-11-28 17:07:53


On 11/28/2011 03:45 PM, Brice Goglin wrote:
...
> Current Intel platforms have 2 QPI links going to I/O hubs. Most servers
> with many sockets (4 or more) thus have each I/O hub connected to only 2
> processors directly, so their distance is "equal" as you say.
>
> However, some BIOS report invalid I/O locality information. I've never
> seen anything correct on any server like the above actually.

If the BIOS correctly reported the locality, how would the devices show up in
hwloc-info and lstopo? Would there be a Group containing the 2 NUMA Nodes and
the PCI devices?
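Something like the sketch below is what I'd imagine, with the I/O objects nested
under a Group that also holds the two NUMA nodes — but this is a hand-written
guess at the layout, not output captured from a real machine, and the sizes and
PCI IDs are made up:

  Machine (32GB)
    Group0
      NUMANode L#0 (16GB) + Socket L#0
      NUMANode L#1 (16GB) + Socket L#1
      HostBridge
        PCIBridge
          PCI 8086:10c9
            Net "eth0"

Is that roughly the structure lstopo would print when the BIOS information is
correct?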

>
> Yes, unfortunately PCI detection isn't based on reading files, so
> there's no easy way to "dump" it during gather-topology.sh.

I knew this once, and remembered it when you explained it.

thanks,
--Guy