
Hardware Locality Development Mailing List Archives


Subject: Re: [hwloc-devel] understanding PCI device to NUMA node connection
From: Brice Goglin (Brice.Goglin_at_[hidden])
Date: 2011-11-28 17:16:13


On 28/11/2011 23:07, Guy Streeter wrote:
> On 11/28/2011 03:45 PM, Brice Goglin wrote:
> ...
>> Current Intel platforms have 2 QPI links going to I/O hubs. Most servers
>> with many sockets (4 or more) thus have each I/O hub connected to only 2
>> processors directly, so their distance is "equal" as you say.
>>
>> However, some BIOSes report invalid I/O locality information. In fact,
>> I have never seen correct locality reported on any server like the above.
> If the BIOS correctly reported the locality, how would the devices show up in
> hwloc-info and lstopo? Would there be a Group containing the 2 NUMA Nodes and
> the PCI devices?

Yes

Brice