Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] OpenMPI providing rank?
From: Yves Caniou (yves.caniou_at_[hidden])
Date: 2010-07-28 01:18:02


On Wednesday 28 July 2010 06:03:21, Nysal Jan wrote:
> OMPI_COMM_WORLD_RANK can be used to get the MPI rank. For other environment
> variables -
> http://www.open-mpi.org/faq/?category=running#mpi-environmental-variables

Are processes assigned to nodes sequentially, so that I can get the NODE
number from $OMPI_COMM_WORLD_RANK modulo the number of processes per node?
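For concreteness, here is a sketch of the computation I have in mind. It assumes ranks fill each node's 16 slots in order (ranks 0-15 on node 0, 16-31 on node 1, and so on), which is exactly the part I am not sure about -- the mapping policy may differ:

```shell
# Sketch: recover a node index from the MPI rank, assuming ranks
# fill each node's slots in order before moving to the next node.
RANK=${OMPI_COMM_WORLD_RANK:-0}
PROCS_PER_NODE=16
NODE=$(( RANK / PROCS_PER_NODE ))    # which node this rank would be on
LOCAL=$(( RANK % PROCS_PER_NODE ))   # slot index within that node
echo "rank $RANK -> node $NODE, local slot $LOCAL"
```

(Note that under this assumption the node index comes from division, while modulo gives the slot within the node.)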

> For processor affinity see this FAQ entry -
> http://www.open-mpi.org/faq/?category=all#using-paffinity

Thank you, but that is where I found the information I put in my previous
mail, so it doesn't answer my question.

.Yves.

> --Nysal
>
> On Wed, Jul 28, 2010 at 9:04 AM, Yves Caniou <yves.caniou_at_[hidden]> wrote:
> > Hi,
> >
> > I have some performance issues on a parallel machine composed of nodes of
> > 16 processors each. The application is launched on multiples of 16
> > processes for given numbers of nodes.
> > People using MX MPI on this machine told me to attach a script to
> > mpiexec that runs 'numactl', in order to make the execution
> > performance stable.
> >
> > Looking at the FAQ (the oldest entry is for Open MPI v1.3?), I saw that
> > maybe the solution would be for me to use --mca mpi_paffinity_alone 1
> >
> > Is that correct? -- BTW, I have both memory and processor affinity:
> > >ompi_info | grep affinity
> >
> > MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
> > MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
> > MCA maffinity: libnuma (MCA v2.0, API v2.0, Component v1.4.2)
> > Does it handle memory too, or do I have to use another option like
> > --mca mpi_maffinity 1?
> >
> > Still, I would like to test the numactl solution. Does Open MPI provide
> > an equivalent to $MXMPI_ID, which at least gives the NODE on which a
> > process is launched, so that I can adapt the script that was given
> > to me?
> >
> > Tkx.
> >
> > .Yves.
> > _______________________________________________
> > users mailing list
> > users_at_[hidden]
> > http://www.open-mpi.org/mailman/listinfo.cgi/users

-- 
Yves Caniou
Associate Professor at Université Lyon 1,
Member of the team project INRIA GRAAL in the LIP ENS-Lyon,
Délégation CNRS in Japan French Laboratory of Informatics (JFLI),
  * in Information Technology Center, The University of Tokyo,
    2-11-16 Yayoi, Bunkyo-ku, Tokyo 113-8658, Japan
    tel: +81-3-5841-0540
  * in National Institute of Informatics
    2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
    tel: +81-3-4212-2412 
http://graal.ens-lyon.fr/~ycaniou/