On Tue, 25 Mar, 20:30, Ross Boylan wrote:
> Even when "idle", MPI processes use all the CPU. I thought I remember
> someone saying that they will be low priority, and so not pose much of
> an obstacle to other uses of the CPU.
Well, if they're blocking in an MPI call, then they'll be doing a busy
wait, so each process will easily churn up 100% CPU load.
> At any rate, my question is whether, if I have processes that spend most
> of their time waiting to receive a message, I can put more of them than
> I have physical cores without much slowdown?
AFAICS there will always be a certain slowdown. Is there a reason why
you would want to oversubscribe your nodes?
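If you do end up oversubscribing, and assuming you're on Open MPI (other implementations have different knobs), you can ask blocked ranks to yield the processor between progress-loop polls instead of spinning flat out. A sketch, with a hypothetical application name:

```shell
# Assuming Open MPI: allow more ranks than slots, and have idle ranks
# call sched_yield() inside the progress loop so other processes can run.
mpirun --oversubscribe --mca mpi_yield_when_idle 1 -np 16 ./my_app
```

This trades message latency for niceness to the other processes on the node; newer Open MPI versions switch to yielding automatically when they detect oversubscription, but it's worth checking what your version does.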
> Does it make any difference if there's hyperthreading with, e.g., 16
> virtual CPUs based on 8 physical ones? In general I try to limit to the
> number of physical cores.
That depends very much on the code. If the additional threads run a
different instruction mix, then you might be able to squeeze out some
additional performance by adding more than the original 8 threads. But
I've also seen codes that actually run slower with SMT enabled.
HPC and Grid Computing
Chair of Computer Science 3
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
+49 9131 85-27910
PGP/GPG key via keyserver