On Jan 7, 2011, at 5:27 AM, John Hearns wrote:
> Actually, the topic of hyperthreading is interesting, and we should
> discuss it please.
> Hyperthreading is supposedly implemented better and 'properly' on
> Nehalem - I would be interested to see some genuine
> performance measurements with hyperthreading on/off on your machine Gilbert.
FWIW, from what I've seen, and from the recommendations I've heard from Intel, using hyperthreading is still a hit-or-miss proposition with HPC apps. It's true that Nehalem (and later) hyperthreading is much better than it was before. But hyperthreading is still designed to support apps that stall frequently (so the other hyperthread(s) can take over and do useful work while one is stalled). Good HPC apps don't stall much, so hyperthreading still isn't a huge win.
Nehalem (and later) hyperthreading has been discussed on this list at least once or twice before; google through the archives to see if you can dig up the conversations. I have dim recollections of people sending at least some performance numbers...? (I could be wrong here, though.)
> Also you don't need to reboot and change BIOS settings - there was a
> rather nifty technique on this list I think,
> where you disable every second CPU in Linux - which has the same
> effect as switching off hyperthreading.
Yes, you can disable all but one hyperthread on a processor in Linux by:
# echo 0 > /sys/devices/system/cpu/cpuX/online
where X is one of the P# values (i.e., the OS index values, as opposed to the logical index values) shown in hwloc's lstopo output. Repeat for the 2nd P# value on each core in your machine. You can run lstopo again to verify that they went offline, and "echo 1" to the same file to bring a hyperthread back online.
Note that you can't offline X=0.
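Picking the P# values out of lstopo by hand gets tedious on a big machine; the kernel also publishes the sibling map directly under sysfs, so you can script it. Here's a rough sketch (my own, not from the list archives; it assumes the kernel exposes topology/thread_siblings_list, and it needs root to write the online files) that offlines every hyperthread except the first on each core:

```shell
#!/bin/sh
# Offline all but the first hyperthread of each core, using the
# kernel's own topology files instead of lstopo's P# output.
# SYSFS_CPU is overridable so the logic can be exercised against a
# fake sysfs tree; it defaults to the real location.
SYSFS_CPU=${SYSFS_CPU:-/sys/devices/system/cpu}

offline_extra_threads() {
    for cpu in "$SYSFS_CPU"/cpu[0-9]*; do
        id=${cpu##*/cpu}
        # thread_siblings_list holds all hyperthreads sharing this
        # core, e.g. "0,8" or "0-1" depending on the numbering.
        siblings=$(cat "$cpu/topology/thread_siblings_list" 2>/dev/null) || continue
        # First sibling in the (sorted) list: strip everything from
        # the first ',' or '-' onward.
        first=${siblings%%[,-]*}
        # Keep the first sibling of each core online; offline the rest.
        if [ "$id" != "$first" ]; then
            echo 0 > "$cpu/online"
        fi
    done
}
```

To undo it, walk the same files and "echo 1" instead. And as noted above, cpu0 typically can't be offlined at all, which this script sidesteps because cpu0 is always the first sibling of its core.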
Note that this technique technically doesn't disable each hyperthread; it just causes Linux to avoid scheduling on it. Disabling hyperthreading in the BIOS is slightly different; you are actually physically disabling all but one thread per core.
The difference is in how resources in a core are split between hyperthreads. When you disable hyperthreading in the BIOS, all the resources in the core are given to the first hyperthread and the 2nd is deactivated (i.e., the OS doesn't even see it at all). When hyperthreading is enabled in the BIOS, the core resources are split between all hyperthreads.
Specifically: causing the OS to simply not schedule on all but the first hyperthread doesn't give those resources back to the first hyperthread; it just effectively ignores all but the first hyperthread.
My understanding is that hyperthreading can only be activated/deactivated at boot time -- once the core resources are allocated to hyperthreads, they can't be changed while running.
Whether disabling the hyperthreads or simply telling Linux not to schedule on them makes a difference performance-wise remains to be seen. I've never had the time to do a little benchmarking to quantify the difference. If someone could rustle up a few cycles (get it?) to test out what the real-world performance difference is between disabling hyperthreading in the BIOS vs. telling Linux to ignore the hyperthreads, that would be awesome. I'd love to see such results.
My personal guess is that the difference is in the noise. But that's a guess.