Subject: Re: [hwloc-devel] hwloc to be included in RHEL 6.1
From: Jirka Hladky (jhladky_at_[hidden])
Date: 2010-11-18 10:02:50


On Thursday, November 18, 2010 03:55:35 pm Brice Goglin wrote:
> On 18/11/2010 08:50, Jirka Hladky wrote:
> > Hi all,
> >
> > Red Hat would like to include hwloc in the upcoming Red Hat Enterprise
> > Linux 6.1 release. There is Bugzilla 648593,
> > [RFE] Include Portable Hardware Locality (hwloc) in RHEL
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=648593
> >
> > to address this.
> >
> > I got the following input from the developers:
> > =================================================
> > There appears to be a significant drawback to using hwloc. The core #
> > shown by hwloc-ls does NOT map 1:1 to the processor id in
> > /proc/cpuinfo.
> >
> > For example, on intel-s3e36-02.lab hwloc shows the core ids in socket 0
> > as {0,1,2,3,4,5,6,7}.
> >
> > /proc/cpuinfo shows these as physically being {0,4,8,12,16,20,24,28}.
> >
> > On the command line, hwloc-ls does indicate the difference between the
> > hwloc core id and the physical id:
> >
> > [root_at_intel-s3e36-02 ~]# hwloc-ls
> > Machine (64GB)
> >
> > NUMANode #0 (phys=0 16GB) + Socket #0 + L3 #0 (24MB)
> >
> > L2 #0 (256KB) + L1 #0 (32KB) + Core #0 + PU #0 (phys=0)
> > L2 #1 (256KB) + L1 #1 (32KB) + Core #1 + PU #1 (phys=4)
> > L2 #2 (256KB) + L1 #2 (32KB) + Core #2 + PU #2 (phys=8)
> > L2 #3 (256KB) + L1 #3 (32KB) + Core #3 + PU #3 (phys=12)
> > L2 #4 (256KB) + L1 #4 (32KB) + Core #4 + PU #4 (phys=16)
> > L2 #5 (256KB) + L1 #5 (32KB) + Core #5 + PU #5 (phys=20)
> > L2 #6 (256KB) + L1 #6 (32KB) + Core #6 + PU #6 (phys=24)
> > L2 #7 (256KB) + L1 #7 (32KB) + Core #7 + PU #7 (phys=28)
> >
> > If you use the graphical interface, it is possible that
> > customers/GSS/everyone will get the CPU #s wrong when reporting them.
> >
> > Possible solution: Have hwloc-ls use '-p' by default.
> > =================================================
> >
> > I'm not sure whether you are open to changing the default from --logical
> > to --physical. Please let me know your opinion. If you don't think it's a
> > good idea, perhaps you can give us the arguments for preferring logical
> > indexing over physical indexing.

Hi Brice,

> We want to keep a consistent default across the whole project. The API,
> hwloc-calc and hwloc-bind use logical by default.
I do agree.
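
For what it's worth, even with logical indexes as the default, the physical
(OS) numbers are never lost: every hwloc object carries both. Here is a
minimal, untested sketch of mine against the hwloc C API that prints both
numbers for each PU, which is exactly the logical-to-/proc/cpuinfo mapping
discussed above:

  #include <stdio.h>
  #include <hwloc.h>

  int main(void)
  {
      hwloc_topology_t topology;
      unsigned i, n;

      /* Discover the topology of the current machine */
      hwloc_topology_init(&topology);
      hwloc_topology_load(topology);

      /* Print both numberings for every processing unit:
       * logical_index is hwloc's own numbering (the tools' default),
       * os_index is the OS/physical number seen in /proc/cpuinfo.   */
      n = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU);
      for (i = 0; i < n; i++) {
          hwloc_obj_t pu = hwloc_get_obj_by_type(topology, HWLOC_OBJ_PU, i);
          printf("PU logical #%u -> physical (OS) #%u\n",
                 pu->logical_index, pu->os_index);
      }

      hwloc_topology_destroy(topology);
      return 0;
  }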

>
> > Another point is that at the moment you cannot tell whether the
> > graphical output (.png, X, ...) was created with lstopo --physical or
> > lstopo --logical.

> Actually, you can. Instead of "Core #0", you get "Core p#0" (this "p"
> means "physical").
Oh, you are right, I hadn't noticed it. For a novice user, though, that
difference will be easy to overlook.

Actually, I had the same problem when I first started using lstopo. I was
wondering how these indexes map to the /proc/cpuinfo indexing.
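
(Going the other way, i.e. from a /proc/cpuinfo "processor" number back to
the hwloc object, is just a scan over the PUs comparing os_index; something
along these lines, again untested and with a hypothetical helper name:)

  #include <hwloc.h>

  /* Hypothetical helper: find the PU whose OS (physical) index matches a
   * "processor" number from /proc/cpuinfo; returns NULL if there is none. */
  static hwloc_obj_t pu_from_cpuinfo_number(hwloc_topology_t topology,
                                            unsigned processor)
  {
      hwloc_obj_t pu = NULL;
      /* hwloc_get_next_obj_by_type() walks the PUs in logical order */
      while ((pu = hwloc_get_next_obj_by_type(topology, HWLOC_OBJ_PU, pu)) != NULL)
          if (pu->os_index == processor)
              return pu;
      return NULL;
  }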

> > Could you please add a legend to the picture explaining which index was
> > used?
>
> I guess it's possible.
Oh, this would be great! Will it make it into 1.1?

Jirka