OMPI_COMM_WORLD_RANK can be used to get the MPI rank; Open MPI also sets other
OMPI_* environment variables in each launched process.
For processor affinity see this FAQ entry -
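A minimal wrapper sketch along those lines, assuming 16 cores per node split
across 2 NUMA domains of 8 cores each (adjust the divisor to your machine's
actual topology; the script name is hypothetical):

```shell
#!/bin/sh
# numactl-wrapper.sh -- sketch of a per-rank numactl launch wrapper.
# Open MPI exports OMPI_COMM_WORLD_RANK (global rank) and
# OMPI_COMM_WORLD_LOCAL_RANK (rank within the node) to each process.
# Assumption: 2 NUMA domains of 8 cores each per 16-proc node.
LOCAL_RANK=${OMPI_COMM_WORLD_LOCAL_RANK:-0}
NUMA_NODE=$(( LOCAL_RANK / 8 ))
echo "local rank $LOCAL_RANK -> NUMA node $NUMA_NODE"
# Bind this process's CPUs and memory to the chosen NUMA node,
# then replace the shell with the real application.
if [ $# -gt 0 ]; then
    exec numactl --cpunodebind="$NUMA_NODE" --membind="$NUMA_NODE" "$@"
fi
```

You would then launch through the wrapper, e.g.
"mpiexec -np 32 ./numactl-wrapper.sh ./my_app", so each rank picks its own
binding before exec'ing the application.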
On Wed, Jul 28, 2010 at 9:04 AM, Yves Caniou <yves.caniou_at_[hidden]>wrote:
> I have some performance issues on a parallel machine composed of nodes of 16
> procs each. The application is launched on multiples of 16 procs for given
> numbers of nodes.
> I was told by people using MX MPI with this machine to attach a script to
> mpiexec, which does 'numactl' things, in order to improve the execution
> performance.
> Looking at the FAQ (the oldest one is for OpenMPI v1.3?), I saw that maybe a
> solution would be for me to use the --mca mpi_paffinity_alone 1 option.
> Is that correct? -- BTW, I have both memory and processor affinity:
> >ompi_info | grep affinity
> MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
> MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
> MCA maffinity: libnuma (MCA v2.0, API v2.0, Component v1.4.2)
> Does it handle memory too, or do I have to use another option like
> --mca mpi_maffinity 1?
> Still, I would like to test the numactl solution. Does OpenMPI provide an
> equivalent to $MXMPI_ID, which gives at least the NODE on which a
> process is launched by OpenMPI, so that I can adapt the script that was
> given to me?
> users mailing list