Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] EXTERNAL: Re: Best way to map MPI processes to sockets?
From: Ralph Castain (rhc_at_[hidden])
Date: 2012-11-08 11:07:41


I gather from your other emails you are using 1.4.3, yes? I believe that
has npersocket as an option. If so, you could do:

mpirun -npersocket 2 -bind-to-socket ...

That would put two processes in each socket, bind them to that socket, and
rank them in series. So ranks 0-1 would be bound to the first socket, ranks
2-3 to the second.
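If you later want explicit per-core placement rather than socket-level binding, a rankfile is the usual alternative. The following is only a sketch: the hostname `nodeA` and the socket:core slot numbers are placeholders, and the exact numbering must be matched to your own lstopo output.

```
# Hypothetical rankfile (hostname and core numbers are placeholders).
# "slot=socket:core" follows the Open MPI 1.4.x rankfile syntax.
rank 0=nodeA slot=0:0   # rank 0 -> socket 0, core 0
rank 1=nodeA slot=0:1   # rank 1 -> socket 0, core 1
rank 2=nodeA slot=1:0   # rank 2 -> socket 1, core 0
rank 3=nodeA slot=1:1   # rank 3 -> socket 1, core 1
```

You would then launch with something like `mpirun -np 4 -rf myrankfile ...`. For your stated goal (two ranks per socket, no per-core preference), -npersocket with -bind-to-socket is simpler and achieves the same mapping.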

Ralph

On Thu, Nov 8, 2012 at 6:52 AM, Blosch, Edwin L <edwin.l.blosch_at_[hidden]> wrote:

> Yes it is a Westmere system.
>
> Socket L#0 (P#0 CPUModel="Intel(R) Xeon(R) CPU E7- 8870 @ 2.40GHz"
> CPUType=x86_64)
> L3Cache L#0 (size=30720KB linesize=64 ways=24)
> L2Cache L#0 (size=256KB linesize=64 ways=8)
> L1dCache L#0 (size=32KB linesize=64 ways=8)
> L1iCache L#0 (size=32KB linesize=64 ways=4)
> Core L#0 (P#0)
> PU L#0 (P#0)
> L2Cache L#1 (size=256KB linesize=64 ways=8)
> L1dCache L#1 (size=32KB linesize=64 ways=8)
> L1iCache L#1 (size=32KB linesize=64 ways=4)
> Core L#1 (P#1)
> PU L#1 (P#1)
>
> So I guess each core has its own L1 and L2 caches. Maybe I shouldn't care
> where or if the MPI processes are bound within a socket; if I can test it,
> that will be good enough for me.
>
> So my initial question is now changed to:
>
> What is the best/easiest way to get this mapping? A rankfile?
> --cpus-per-proc 2 --bind-to-socket? Or something else?
>
> RANK  SOCKET  CORE
> 0     0       unspecified
> 1     0       unspecified
> 2     1       unspecified
> 3     1       unspecified
>
>
> Thanks
>
> -----Original Message-----
> From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On
> Behalf Of Brice Goglin
> Sent: Wednesday, November 07, 2012 6:17 PM
> To: users_at_[hidden]
> Subject: EXTERNAL: Re: [OMPI users] Best way to map MPI processes to
> sockets?
>
> What processor and kernel is this? (see /proc/cpuinfo, or run "lstopo -v"
> and look for attributes on the Socket line) Your hwloc output looks like
> an Intel Xeon Westmere-EX (E7-48xx or E7-88xx).
> The likwid output is likely wrong (maybe confused by the fact that
> hardware threads are disabled).
>
> Brice
>
>
>
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>