
Subject: Re: [OMPI users] slowdown with infiniband and latest CentOS kernel
From: Noam Bernstein (noam.bernstein_at_[hidden])
Date: 2013-12-19 08:33:46

On Dec 18, 2013, at 5:19 PM, Martin Siegert <siegert_at_[hidden]> wrote:
> Thanks for figuring this out. Does this work for 1.6.x as well?
> The FAQ covers versions 1.2.x to 1.5.x.
> Does 1.6.x support mpi_paffinity_alone = 1?
> I set this in openmpi-mca-params.conf but
> # ompi_info | grep affinity
> MPI extensions: affinity example
> MCA paffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)
> MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.6.4)
> MCA maffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)
> does not give any indication that this is actually used.

I never checked the actual bindings with hwloc-ps or anything like that,
but as far as I can tell, 1.6.4 had consistently high performance when I
used mpi_paffinity_alone=1, and slowdowns of up to a factor of ~2
when I didn't. 1.7.3 with the old kernel never showed extreme slowdowns,
but we didn't benchmark it carefully, so it's conceivable it had minor
(same factor of ~2) slowdowns. With the new kernel, 1.7.3 showed
slowdowns of anywhere from a factor of 2 to maybe 20 (mpi_paffinity_alone
definitely did nothing), and "--bind-to core" restored consistent performance.
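For anyone who wants to verify the bindings directly rather than infer
them from performance, something along these lines should work (just a
sketch; ./a.out and the rank count of 16 are placeholders for your own
application and node size):

   # print each rank's core binding as the job launches
   mpirun --bind-to core --report-bindings -np 16 ./a.out

   # or, while the job is running, show where processes actually sit
   hwloc-ps

If --report-bindings shows every rank pinned to a distinct core, the
kernel-dependent migration slowdowns described above shouldn't appear.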

