Hardware Locality Development Mailing List Archives

Subject: Re: [hwloc-devel] understanding PCI device to NUMA node connection
From: Brice Goglin (Brice.Goglin_at_[hidden])
Date: 2011-11-28 17:16:13


On 28/11/2011 23:07, Guy Streeter wrote:
> On 11/28/2011 03:45 PM, Brice Goglin wrote:
> ...
>> Current Intel platforms have 2 QPI links going to I/O hubs. Most servers
>> with many sockets (4 or more) thus have each I/O hub connected to only 2
>> processors directly, so their distance is "equal" as you say.
>>
>> However, some BIOSes report invalid I/O locality information. In fact, I've
>> never seen correct locality reported on any server like the one above.
> If the BIOS correctly reported the locality, how would the devices show up in
> hwloc-info and lstopo? Would there be a Group containing the 2 NUMA Nodes and
> the PCI devices?

Yes

Brice
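
As a minimal sketch of how that attachment can be queried programmatically
(assuming the I/O discovery support and helpers introduced in hwloc 1.3, and
with the printed PCI details being whatever the local machine reports):
hwloc_get_non_io_ancestor_obj() walks up from a PCI object to its first
non-I/O ancestor, which is a NUMANode when the BIOS reports single-node
locality, the Group spanning the two nodes behind a shared I/O hub in the
case discussed above, or the whole Machine when nothing usable is reported.

#include <stdio.h>
#include <stdlib.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;
    hwloc_obj_t pcidev, parent;
    char *cpuset_str;

    hwloc_topology_init(&topology);
    /* Ask hwloc to also discover PCI objects (hwloc >= 1.3). */
    hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_IO_DEVICES);
    hwloc_topology_load(topology);

    pcidev = NULL;
    while ((pcidev = hwloc_get_next_pcidev(topology, pcidev)) != NULL) {
        /* First non-I/O ancestor: NUMANode, Group or Machine depending
         * on what locality the BIOS reported for this device. */
        parent = hwloc_get_non_io_ancestor_obj(topology, pcidev);
        hwloc_bitmap_asprintf(&cpuset_str, parent->cpuset);
        printf("PCI %04x:%04x is attached to %s (cpuset %s)\n",
               pcidev->attr->pcidev.vendor_id,
               pcidev->attr->pcidev.device_id,
               hwloc_obj_type_string(parent->type), cpuset_str);
        free(cpuset_str);
    }

    hwloc_topology_destroy(topology);
    return 0;
}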