
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] slowdown with infiniband and latest CentOS kernel
From: Noam Bernstein (noam.bernstein_at_[hidden])
Date: 2013-12-17 11:16:48

On Dec 17, 2013, at 11:04 AM, Ralph Castain <rhc_at_[hidden]> wrote:

> Are you binding the procs? We don't bind by default (this will change in 1.7.4), and binding can play a significant role when comparing across kernels.
> add "--bind-to-core" to your cmd line

I've always used mpi_paffinity_alone=1, and the new behavior
seems to be independent of whether or not I use it. I'll try --bind-to-core.
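For reference, the two binding approaches under discussion can be sketched as below. This is illustrative only; `./my_app` and the rank count are placeholders, and the exact flag spelling reflects the Open MPI 1.7.x-era syntax mentioned in the thread:

```shell
# Bind each MPI rank to a core (flag syntax suggested above for 1.7.x):
mpirun --bind-to-core -np 16 ./my_app

# Same intent expressed via the MCA parameter used here with earlier versions:
mpirun --mca mpi_paffinity_alone 1 -np 16 ./my_app

# Ask the launcher to report the bindings it actually applied:
mpirun --bind-to-core --report-bindings -np 16 ./my_app
```

`--report-bindings` is useful when comparing runs across kernels, since it shows whether ranks were actually pinned rather than left to migrate.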

One more possible clue. I haven't done a full test, but for one
particular setup (newer nodes, single node so presumably using
sm), there are apparently two ways to fix the problem:
1. go back to the previous kernel, but stick with openmpi 1.7.3
2. stick with the new kernel, but go back to openmpi 1.6.4

So it appears to be some interaction between the new kernel and openmpi 1.7.3
that isn't present with 1.6.4.

We specifically switched to 1.7.3 because of a bug in 1.6.4 (lock up in some
collective communication), but now I'm wondering whether I should just test

