
Hardware Locality Development Mailing List Archives


This web mail archive is frozen; no new mails have been added to it since July 2016.

Subject: [hwloc-devel] Fwd: [OMPI devel] 0.9.1rc2 is available
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-10-21 21:09:30


Arrgh. I posted the "0.9.1rc2 is available" notice to the wrong
list (stupid mail client autocomplete...).

But Chris Samuel replied with two mails (the first is below)
containing some results:

Begin forwarded message:

> From: "Chris Samuel" <csamuel_at_[hidden]>
> Date: October 21, 2009 7:29:36 PM EDT
> To: "Open MPI Developers" <devel_at_[hidden]>
> Subject: Re: [OMPI devel] 0.9.1rc2 is available
> Reply-To: "Open MPI Developers" <devel_at_[hidden]>
>
>
> ----- "Jeff Squyres" <jsquyres_at_[hidden]> wrote:
>
> > Give it a whirl:
>
> Nice - built without warnings with GCC 4.4.2.
>
> Some sample results below for configs not represented
> on the current website.
>
>
> Dual socket Shanghai:
>
> System(31GB)
>   Node#0(15GB) + Socket#0 + L3(6144KB)
>     L2(512KB) + L1(64KB) + Core#0 + P#0
>     L2(512KB) + L1(64KB) + Core#1 + P#1
>     L2(512KB) + L1(64KB) + Core#2 + P#2
>     L2(512KB) + L1(64KB) + Core#3 + P#3
>   Node#1(16GB) + Socket#1 + L3(6144KB)
>     L2(512KB) + L1(64KB) + Core#0 + P#4
>     L2(512KB) + L1(64KB) + Core#1 + P#5
>     L2(512KB) + L1(64KB) + Core#2 + P#6
>     L2(512KB) + L1(64KB) + Core#3 + P#7
>
>
> Dual socket single core Opteron:
>
> System(3961MB)
>   Node#0(2014MB) + Socket#0 + L2(1024KB) + L1(1024KB) + Core#0 + P#0
>   Node#1(2017MB) + Socket#1 + L2(1024KB) + L1(1024KB) + Core#0 + P#1
>
>
> Dual socket, dual core Power5 (SMT disabled) running SLES9
> (2.6.9 based kernel):
>
> System(15GB)
>   Node#0(7744MB)
>     P#0
>     P#2
>   Node#1(8000MB)
>     P#4
>     P#6
>
>
> Inside a single CPU Torque job (using cpusets) on a dual socket
> Shanghai:
>
> System(31GB)
>   Node#0(15GB) + Socket#0 + L3(6144KB) + L2(512KB) + L1(64KB) + Core#0 + P#0
>   Node#1(16GB)
>
>
> --
> Christopher Samuel - (03) 9925 4751 - Systems Manager
> The Victorian Partnership for Advanced Computing
> P.O. Box 201, Carlton South, VIC 3053, Australia
> VPAC is a not-for-profit Registered Research Agency
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>
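For anyone scripting against listings like the ones above, here is a minimal
sketch of how they can be turned into machine-readable form. This is not part
of hwloc; `parse_lstopo` is a hypothetical helper, and it assumes the
lstopo-style text indents each nesting level by two spaces:

```python
def parse_lstopo(text):
    """Yield (depth, label) for each non-empty line of lstopo-style
    text output, where depth is the nesting level implied by
    two-space indentation (an assumption, not a guarantee)."""
    for line in text.splitlines():
        stripped = line.lstrip(" ")
        if not stripped:
            continue
        indent = len(line) - len(stripped)
        yield indent // 2, stripped

# A small sample in the same style as the dual-socket listings above.
sample = """\
System(31GB)
  Node#0(15GB) + Socket#0 + L3(6144KB)
    L2(512KB) + L1(64KB) + Core#0 + P#0
    L2(512KB) + L1(64KB) + Core#1 + P#1
  Node#1(16GB) + Socket#1 + L3(6144KB)
    L2(512KB) + L1(64KB) + Core#0 + P#4
    L2(512KB) + L1(64KB) + Core#1 + P#5
"""

# Count cores by looking for "Core#" in each label.
cores = sum(1 for _, label in parse_lstopo(sample) if "Core#" in label)
print(cores)  # -> 4
```

The same pattern (indentation depth plus a label split on " + ") is enough to
rebuild the full object tree if needed.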

-- 
Jeff Squyres
jsquyres_at_[hidden]