Hardware Locality Users' Mailing List Archives

Subject: Re: [hwloc-users] Using distances
From: Jeffrey Squyres (jsquyres_at_[hidden])
Date: 2012-04-21 07:15:15


On Apr 21, 2012, at 7:09 AM, Brice Goglin wrote:

> I assume you have the entire distance (latency) matrix between all NUMA nodes as usually reported by the BIOS.
>
> const struct hwloc_distances_s *distances = hwloc_get_whole_distance_matrix_by_type(topology, HWLOC_OBJ_NODE);
> assert(distances);
> assert(distances->latency);

Is this stored on the topology object?

I ask because we've already arranged things so that there's only one hwloc discovery per machine. If you recall, we do that discovery in the ORTE daemon, export it to XML, and then locally send it to each MPI process on the same machine. They, in turn, import the XML to create their own topology objects.

Hence, if this distance data is already covered by the XML export/import, then I should have this data.

> Now distances->latency[a+b*distances->nbobjs] contains the latency between NUMA nodes whose *logical* indexes are a and b (it may be asymmetrical).
>
> Now get the NUMA node object close to your PUs and the NUMA objects close to each OFED device, take their ->logical_index and you'll get the latencies.

Ah, ok. This is what I didn't understand from the docs -- is there no distance to actual PCI devices? I.e., distance is only measured between NUMA nodes?

I ask because the functions allow measuring distance by depth and type -- are those effectively ignored, and really all you can check is the distance between NUMA nodes?

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/