Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] slowdown with infiniband and latest CentOS kernel
From: Dave Love (d.love_at_[hidden])
Date: 2014-03-04 05:43:54


Bernd Dammann <bd_at_[hidden]> writes:

> We use Moab/Torque, so we could use cpusets (but that has had some
> other side effects earlier, so we did not implement it in our setup).

I don't remember what Torque does, but core binding and (Linux) cpusets
are somewhat orthogonal. While a cpuset will obviously restrict the
processes somewhat, it won't provide the necessary binding (at least
unless the resource manager launches the processes and uses a cpuset for
each).
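
The distinction is visible through Linux's scheduler-affinity interface; a minimal sketch (Linux-only, illustrative, not what Torque itself does):

```python
import os

# A cpuset restricts the SET of CPUs a process may run on;
# sched_getaffinity shows that currently allowed set.
allowed = os.sched_getaffinity(0)
print("allowed CPUs:", sorted(allowed))

# Core binding goes further: it pins the process to one specific
# core within the allowed set. A cpuset alone never does this --
# the scheduler can still migrate the process among all allowed CPUs.
core = min(allowed)
os.sched_setaffinity(0, {core})
print("bound to:", sorted(os.sched_getaffinity(0)))
```

So a job confined to a cpuset can still suffer migration and cache thrashing among the cores in that set, which is why explicit binding (e.g. via the MPI launcher) matters for performance.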

> Regardless of that, it looks strange to me, that this combination of
> kernel and OMPI has such a negative side effect on application
> performance.

I assume you can determine whether or not it's the kernel rather than
ompi/ofed by booting into the old one.