
Open MPI User's Mailing List Archives


From: Gleb Natapov (glebn_at_[hidden])
Date: 2006-12-01 09:56:13


On Fri, Dec 01, 2006 at 09:35:09AM -0500, Brock Palen wrote:
> On Dec 1, 2006, at 9:23 AM, Gleb Natapov wrote:
>
> > On Fri, Dec 01, 2006 at 04:14:31PM +0200, Gleb Natapov wrote:
> >> On Fri, Dec 01, 2006 at 11:51:24AM +0100, Peter Kjellstrom wrote:
> >>> On Saturday 25 November 2006 15:31, shaposh_at_[hidden] wrote:
> >>>> Hello,
> >>>> I can't figure out: is there a way with Open MPI to bind all
> >>>> threads on a given node to a specified subset of CPUs?
> >>>> For example, on a multi-socket, multi-core machine, I want to use
> >>>> only a single core on each CPU.
> >>>> Thank you.
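
(As an aside, here is a minimal sketch of what such binding can look like
at the Linux level, assuming glibc's sched_setaffinity() and an assumed
core numbering where cores 0 and 2 sit on different sockets; this is not
Open MPI's own binding mechanism, just an illustration of the idea:)

    /* Each rank pins itself to one core; the core numbering (rank * 2)
     * is an assumption for a hypothetical dual-core, dual-socket box. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        cpu_set_t set;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        CPU_ZERO(&set);
        CPU_SET(rank * 2, &set);   /* one core per socket (assumed layout) */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
        }

        printf("rank %d bound to core %d\n", rank, rank * 2);
        MPI_Finalize();
        return 0;
    }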
> >>>
> >>> This might be a bit naive, but if you spawn two procs on a
> >>> dual-core, dual-socket system, then the Linux kernel should
> >>> automagically schedule them this way.
> >>>
> >>> I actually think this applies to most of the situations discussed
> >>> in this thread. Explicitly assigning processes to cores may
> >>> actually get it wrong more often than the normal Linux scheduler.
> >>>
> >> If you run two single-threaded ranks on a dual-core, dual-socket
> >> node, you had better place them on the same core. Shared memory
> >> communication
> Isn't this only valid for NUMA systems (large systems or AMD
> Opteron)? The Intel multicores each must communicate along the bus to
> the north bridge and back again, so all cores have the same path to
> memory. Correct me if I'm wrong. Working on this would still be
> good, though; I don't expect all systems to stick with a bus, and
> more and more will be NUMA in the future.
AFAIK the Core 2 Duo has a shared L2 cache, so shared memory communication
should be much faster if two ranks are on the same socket. But I don't have
such a setup to test the theory.
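
Something like this quick ping-pong between ranks 0 and 1 could test the
theory; placement on the same vs. different sockets is assumed to be forced
externally (e.g. with taskset), which is not shown here:

    /* Ping-pong sketch: average round-trip time between ranks 0 and 1,
     * to compare intra-socket vs. cross-socket shared memory latency. */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERS 100000

    int main(int argc, char **argv)
    {
        int rank, i;
        char byte = 0;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < ITERS; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("avg round trip: %g us\n", (t1 - t0) / ITERS * 1e6);

        MPI_Finalize();
        return 0;
    }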

>
> On another note: for systems that use PBS (and maybe other resource
> managers), it gives out the CPUs in the host list (hostname/0,
> hostname/1, etc.). Why can't OMPI read that info if it's available?
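
(Parsing entries of that hostname/N form would be straightforward; a sketch
follows, with the caveat that reading them from $PBS_NODEFILE is an
assumption, since that file often lists bare hostnames, one line per slot:)

    /* Sketch: read host/cpu pairs of the form "hostname/0" from the
     * file named by PBS_NODEFILE; file name and format are assumptions. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const char *path = getenv("PBS_NODEFILE");
        char line[256];
        FILE *fp;

        if (path == NULL || (fp = fopen(path, "r")) == NULL) {
            fprintf(stderr, "no PBS node file\n");
            return 1;
        }
        while (fgets(line, sizeof(line), fp) != NULL) {
            char *slash;
            line[strcspn(line, "\n")] = '\0';   /* strip trailing newline */
            slash = strchr(line, '/');
            if (slash != NULL) {
                *slash = '\0';
                printf("host %s -> cpu %s\n", line, slash + 1);
            } else {
                printf("host %s (no cpu index)\n", line);
            }
        }
        fclose(fp);
        return 0;
    }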
>
> I'm probably totally off on these comments.
>
> Brock
>
> > I mean "same socket" here and not "same core" of course.
> >
> >> will be much faster (especially if two cores share a cache).
> >>
> >>> /Peter (who may be putting a bit too much faith in the Linux
> >>> scheduler...)
> >> The Linux scheduler does its best assuming the processes are
> >> unrelated. That is not the case with MPI ranks.
> >>
> >> --
> >> Gleb.
> >
> > --
> > Gleb.

--
			Gleb.