
Subject: Re: [OMPI users] slowdown with infiniband and latest CentOS kernel
From: Martin Siegert (siegert_at_[hidden])
Date: 2013-12-18 17:19:35


Hi,

expanding on Noam's problem a bit ...

On Wed, Dec 18, 2013 at 10:19:25AM -0500, Noam Bernstein wrote:
> Thanks to all who answered my question. The culprit was an interaction between
> 1.7.3 not supporting mpi_paffinity_alone (which we were using previously) and the new
> kernel. Switching to --bind-to core (actually the environment variable
> OMPI_MCA_hwloc_base_binding_policy=core) fixed the problem.
>
> Noam
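
(For anyone finding this in the archives: on 1.7.x that binding can be
requested either on the command line or via the environment, per Noam's
message above; the process count and executable below are placeholders:

   mpirun --bind-to core -np 4 ./a.out

or

   export OMPI_MCA_hwloc_base_binding_policy=core
   mpirun -np 4 ./a.out
)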

Thanks for figuring this out. Does this work for 1.6.x as well?
The FAQ http://www.open-mpi.org/faq/?category=tuning#using-paffinity
covers versions 1.2.x to 1.5.x.
Does 1.6.x support mpi_paffinity_alone = 1?
I set this in openmpi-mca-params.conf, but

# ompi_info | grep affinity
          MPI extensions: affinity example
           MCA paffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)
           MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.6.4)
           MCA maffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)

does not give any indication that the setting is actually in effect.
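
In case it is useful, two ways I would try to probe this (the option
spellings differ between release series, so treat these as sketches
rather than verified 1.6.x invocations):

   # list all MCA parameters and look for the one in question
   ompi_info -a | grep paffinity_alone

   # 1.6.x-era spelling of the binding option; --report-bindings asks
   # mpirun to print the bindings it actually applies at launch
   mpirun --bind-to-core --report-bindings -np 4 ./a.out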

Cheers,
Martin

-- 
Martin Siegert
WestGrid/ComputeCanada
Simon Fraser University