
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] OMPI 1.6 affinity fixes: PLEASE TEST
From: Mike Dubman (mike.ompi_at_[hidden])
Date: 2012-05-30 08:19:26


attached.

On Wed, May 30, 2012 at 2:32 PM, Jeff Squyres <jsquyres_at_[hidden]> wrote:

> On May 30, 2012, at 7:20 AM, Jeff Squyres wrote:
>
> >> $ hwloc-ls --of console
> >> Machine (32GB)
> >>   NUMANode L#0 (P#0 16GB) + Socket L#0 + L3 L#0 (20MB) + L2 L#0 (256KB) + L1 L#0 (32KB) + Core L#0
> >>     PU L#0 (P#0)
> >>     PU L#1 (P#2)
> >>   NUMANode L#1 (P#1 16GB) + Socket L#1 + L3 L#1 (20MB) + L2 L#1 (256KB) + L1 L#1 (32KB) + Core L#1
> >>     PU L#2 (P#1)
> >>     PU L#3 (P#3)
> >
> > Is this hwloc output exactly the same on both nodes?
>
>
> More specifically, can you send the lstopo xml output from each of the 2
> nodes you ran on?
>
> --
> Jeff Squyres
> jsquyres_at_[hidden]
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>
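[Archive editor's note: Jeff asks for the lstopo XML output from each of the two nodes. A minimal sketch of how that could be collected is below; the node names `node1` and `node2` are placeholders, not names from this thread, and the commands assume hwloc is installed on each node and that passwordless ssh works.]

```shell
# Hypothetical sketch: dump each node's hardware topology as XML.
# "node1"/"node2" are placeholder hostnames; requires hwloc's lstopo
# on the remote side and ssh access to each node.
for node in node1 node2; do
  # --of xml selects hwloc's XML output format; save one file per node
  ssh "$node" "lstopo --of xml" > "${node}-topology.xml"
done
```

The resulting per-node XML files are what would then be attached to the reply, so the topologies of the two nodes can be compared exactly.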