Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] How to replace --cpus-per-proc by --map-by
From: Ralph Castain (rhc.openmpi_at_[hidden])
Date: 2014-03-27 15:06:32


Agreed - Jeff and I discussed this just this morning. I will be updating the FAQ soon.

Sent from my iPhone

> On Mar 27, 2014, at 9:24 AM, Gus Correa <gus_at_[hidden]> wrote:
>
> <\begin hijacking this thread>
>
> I second Saliya's thanks to Tetsuya.
> I've been following this thread to learn a bit more about
> how to use hardware locality with Open MPI effectively.
> [I am still using "--bycore"+"--bind-to-core" in most cases,
> and "--cpus-per-proc" occasionally when in hybrid MPI+OpenMP mode.]
>
> When it comes to hardware locality,
> the syntax and the functionality have changed quickly and significantly
> in the recent past.
> Hence, it would be great if the Open MPI web page could provide pointers
> to the kind of external documentation that Tetsuya just sent,
> perhaps along with some additional guidelines and comments
> on what is available in each release/series of Open MPI,
> and how to use these options.
>
> There is some material about hwloc,
> but I can't see much about LAMA (which means "mud" in my
> first language :) ).
> We can hardly learn things like that from the mpiexec man page
> alone, although it has very good examples.
>
> Thank you,
> Gus Correa
>
> <\end hijacking of this thread>
>
>> On 03/27/2014 11:38 AM, Saliya Ekanayake wrote:
>> Thank you, this is really helpful.
>>
>> Saliya
>>
>>
>> On Thu, Mar 27, 2014 at 5:11 AM, <tmishima_at_[hidden]> wrote:
>>
>>
>>
>> Mapping and binding are related to so-called process affinity.
>> It's a bit difficult for me to explain ...
>>
>> So please see the URL below (especially the first half,
>> pages 1 to 20):
>> http://www.slideshare.net/jsquyres/open-mpi-explorations-in-process-affinity-eurompi13-presentation
>>
>> Although these slides by Jeff explain LAMA,
>> which is another mapping system included in the openmpi-1.7
>> series, I think they give you a good general idea of what
>> mapping and binding are.
>>
>> Tetsuya
>>
>> > Thank you Tetsuya - it worked.
>> >
>> > Btw. what's the difference between mapping and binding? I think I
>> > am a bit confused here.
>> >
>> > Thank you,
>> > Saliya
>> >
>> >
>> > On Thu, Mar 27, 2014 at 4:19 AM, <tmishima_at_[hidden]> wrote:
>> >
>> >
>> > Hi Saliya,
>> >
>> > What you want to do is map by node, so please try the options below:
>> >
>> > -np 2 --map-by node:pe=4 --bind-to core
>> >
>> > You might not need to add --bind-to core, because it's the default
>> > binding.
>> >
>> > Tetsuya
>> >
>> > > Hi,
>> > >
>> > > I see in v1.7.5rc5 that --cpus-per-proc is deprecated and we are
>> > > advised to replace it with --map-by <obj>:PE=N.
>> > > I've tried this but I couldn't get the expected allocation of procs.
>> > >
>> > > For example, I was running 2 procs on 2 nodes, each with 2 sockets,
>> > > where a socket has 4 cores. I wanted 1 proc per node, bound to all
>> > > cores in one of the sockets. I could get this by using
>> > >
>> > > --bind-to core --map-by ppr:1:node --cpus-per-proc 4 -np 2
>> > >
>> > > Then it'll show bindings as
>> > >
>> > > [i51:32274] MCW rank 0 bound to socket 0[core 0[hwt 0]], socket 0[core 1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]]: [B/B/B/B][./././.]
>> > > [i52:31765] MCW rank 1 bound to socket 0[core 0[hwt 0]], socket 0[core 1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]]: [B/B/B/B][./././.]
>> > >
>> > >
>> > > Is there a better way to get the same effect without using
>> > > --cpus-per-proc, as suggested?
>> > >
>> > > Thank you,
>> > > Saliya
>> > >
>> > >
>> > >
>> > > --
>> > > Saliya Ekanayake esaliya_at_[hidden]
>> > > Cell 812-391-4914 Home 812-961-6383
>> > > http://saliya.org
>> > >
>> > > _______________________________________________
>> > > users mailing list
>> > > users_at_[hidden]
>> > > http://www.open-mpi.org/mailman/listinfo.cgi/users
>> >
>> >
>> >
>> >
>> > --
>> > Saliya Ekanayake esaliya_at_[hidden]
>> > Cell 812-391-4914 Home 812-961-6383
>> > http://saliya.org
>>
>>
>>
>>
>>
>> --
>> Saliya Ekanayake esaliya_at_[hidden]
>> Cell 812-391-4914 Home 812-961-6383
>> http://saliya.org
>>
>>
>
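
To summarize the thread for archive readers, here is a sketch of the old and the suggested new command forms side by side. The flags are the ones quoted in the messages above; the program name (./a.out) and the 2-node, 2-socket, 4-cores-per-socket allocation are assumptions made only for illustration, so adjust them for your own cluster.

```shell
# Goal from the thread: 2 procs total, 1 proc per node, each proc bound
# to 4 cores (one socket's worth).

# Deprecated form, as originally used by Saliya:
mpirun -np 2 --bind-to core --map-by ppr:1:node --cpus-per-proc 4 ./a.out

# Suggested replacement: map by node, giving each process 4 processing
# elements (PE, i.e. cores). With 2 procs and 2 nodes this places one
# proc per node; --bind-to core may be redundant since it is the
# default binding when PE is given.
mpirun -np 2 --map-by node:pe=4 --bind-to core ./a.out
```

Both forms should report the same bindings shown earlier in the thread (rank 0 and rank 1 each bound to cores 0-3 of socket 0 on their node), which you can verify with --report-bindings.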