Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] OpenMPI providing rank?
From: Yves Caniou (yves.caniou_at_[hidden])
Date: 2010-07-28 09:11:29


On Wednesday 28 July 2010 15:05:28, you wrote:
> I am confused. I thought all you wanted to do is report out the binding of
> the process - yes? Are you trying to set the affinity bindings yourself?
>
> If the latter, then your script doesn't do anything that mpirun wouldn't
> do, and doesn't do it as well. You would be far better off just adding
> --bind-to-core to the mpirun cmd line.

"mpirun -h" says that it is the default, so there is not even something to do?
I don't even have to add "--mca mpi_paffinity_alone 1" ?

.Yves.
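(For reference, an explicit command line that both forces and reports the
binding, rather than relying on defaults, could look like the sketch below;
check the exact flag names against "mpirun -h", since they vary across
Open MPI versions:

  mpirun --bind-to-core --report-bindings -n 128 ./myappli myparam
)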

> On Jul 28, 2010, at 6:37 AM, Yves Caniou wrote:
> > On Wednesday 28 July 2010 11:34:13, Ralph Castain wrote:
> >> On Jul 27, 2010, at 11:18 PM, Yves Caniou wrote:
> >>> On Wednesday 28 July 2010 06:03:21, Nysal Jan wrote:
> >>>> OMPI_COMM_WORLD_RANK can be used to get the MPI rank. For other
> >>>> environment variables -
> >>>> http://www.open-mpi.org/faq/?category=running#mpi-environmental-variables
> >>>
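(A minimal illustration of using that variable from a wrapper script; a
sketch, assuming the process is launched by mpirun so the variable is
exported:

  echo "rank ${OMPI_COMM_WORLD_RANK} runs on $(hostname)" 1>&2

Placed at the top of a wrapper, this makes each rank report its rank and
host on stderr.)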
> >>> Are processes assigned to nodes sequentially, so that I can get the
> >>> NODE number from $OMPI_COMM_WORLD_RANK modulo the number of procs per
> >>> node?
> >>
> >> By default, yes. However, you can select alternative mapping methods.
> >>
> >> Or...you could just use the mpirun cmd line option to report the binding
> >> of each process as it is started :-)
> >>
> >> Do "mpirun -h" to see all the options. The one you want is
> >> --report-bindings
> >
> > It reports to stderr, so $OMPI_COMM_WORLD_RANK modulo the number of
> > procs per node seems more appropriate for what I need, right?
> >
> > So is the following valid for setting memory affinity?
> >
> > script.sh:
> > #!/bin/sh
> > # 4 cores per NUMA node: local ranks 0-3 -> node 0, 4-7 -> node 1, etc.
> > MYRANK=$OMPI_COMM_WORLD_RANK
> > MYVAL=$(expr $MYRANK / 4)
> > NODE=$(expr $MYVAL % 4)
> > exec numactl --cpunodebind=$NODE --membind=$NODE "$@"
> >
> > mpiexec -n 128 ./script.sh myappli myparam
> >
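(To sanity-check such a wrapper before a real run, it can launch
"numactl --show" in place of the application, so each rank prints the CPU
and memory binding it actually inherited; a sketch, assuming numactl is
installed on the compute nodes:

  mpiexec -n 128 ./script.sh numactl --show

With sequential mapping on 16-proc nodes, ranks 0-3 should then report
NUMA node 0, ranks 4-7 node 1, and so on.)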
> >>>> For processor affinity see this FAQ entry -
> >>>> http://www.open-mpi.org/faq/?category=all#using-paffinity
> >>>
> >>> Thank you, but that's where I got the information that I put in my
> >>> previous mail, so it doesn't answer my question.
> >>
> >> Memory affinity is taken care of under-the-covers when paffinity is
> >> active. No other options are required.
> >
> > Which is better: using this option, or the cmd line with numactl (if it
> > works)? What is the difference?
> >
> > Tkx.
> >
> > .Yves.
> >
> >>> .Yves.
> >>>
> >>>> --Nysal
> >>>>
> >>>> On Wed, Jul 28, 2010 at 9:04 AM, Yves Caniou
> >>>> <yves.caniou_at_[hidden]> wrote:
> >>>>> Hi,
> >>>>>
> >>>>> I have some performance issues on a parallel machine composed of nodes
> >>>>> of 16 procs each. The application is launched on multiples of 16 procs
> >>>>> for given numbers of nodes.
> >>>>> I was told by people using MX MPI on this machine to attach a script
> >>>>> to mpiexec that runs 'numactl', in order to make the execution
> >>>>> performance stable.
> >>>>>
> >>>>> Looking at the FAQ (the oldest one is for Open MPI v1.3?), I saw that
> >>>>> the solution for me might be to use --mca mpi_paffinity_alone 1.
> >>>>>
> >>>>> Is that correct? -- BTW, I have both memory and processor affinity:
> >>>>>> ompi_info | grep affinity
> >>>>>
> >>>>> MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
> >>>>> MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
> >>>>> MCA maffinity: libnuma (MCA v2.0, API v2.0, Component v1.4.2)
> >>>>> Does it handle memory too, or do I have to use another option like
> >>>>> --mca mpi_maffinity 1?
> >>>>>
> >>>>> Still, I would like to test the numactl solution. Does OpenMPI
> >>>>> provide an equivalent to $MXMPI_ID, which gives at least the NODE on
> >>>>> which a process is launched, so that I can adapt the script that was
> >>>>> given to me?
> >>>>>
> >>>>> Tkx.
> >>>>>
> >>>>> .Yves.