I ran some very crude tests and found that things slowed down once you
got over 8 cores at a time. However, they didn't slow down by 50% when
I went to 16 processes. Sadly, the tests were so crude that I didn't
keep good notes (it appears).
I'm running a GCM (general circulation model), so my benchmarks may
not be very useful to most folks. If there were an easy-to-compile
benchmark that I could run on my cluster, I'd be curious what the
results are too.
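For anyone wanting to repeat this kind of crude scaling test without
setting up an MPI benchmark suite, here is a minimal sketch in plain
Python using multiprocessing. It times a fixed batch of CPU-bound
tasks at several worker counts, including an oversubscribed run at
twice the core count. The names (burn, time_run) and workload size are
hypothetical choices for illustration, not anything from this thread.

```python
import os
import time
from multiprocessing import Pool

def burn(_):
    # Small CPU-bound task; stands in for one unit of real work.
    s = 0
    for i in range(200_000):
        s += i * i
    return s

def time_run(nproc, ntasks=32):
    # Time how long a pool of nproc workers takes to finish ntasks.
    start = time.perf_counter()
    with Pool(nproc) as pool:
        pool.map(burn, range(ntasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    ncores = os.cpu_count()
    # Compare a single worker, one worker per core, and a 2x
    # oversubscribed run (roughly the hyperthreading question).
    for n in (1, ncores, 2 * ncores):
        print(f"{n:3d} workers: {time_run(n):.3f} s")
```

If the oversubscribed run is only slightly slower than (or close to)
the per-core run, that would be consistent with the claim that SMT
threads absorb extra processes without a 50% penalty; a large jump
suggests the cores are already saturated.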
On 11-Jul-09, at 2:16 PM, Robert Kubrick wrote:
> The Open MPI FAQ recommends not oversubscribing the available cores
> for best performance, but is this still true? The new Nehalem
> processors are built to run 2 threads on each core. On an 8-socket
> system, that adds up to 128 threads that Intel claims can be run
> without significant performance degradation. I guess the last word
> belongs to those who have tried to run some benchmarks and
> applications on the new Intel processors. Any experience to share?