
Open MPI Development Mailing List Archives



Subject: Re: [OMPI devel] kernel 2.6.23 vs 2.6.24 - communication/wait times
From: Rainer Keller (keller_at_[hidden])
Date: 2010-04-06 11:11:51

Hello Oliver,
Hmm, this is really a teaser...
I haven't seen such drastic behavior, nor have I read of any on this list.

One thing that might interfere, however, is process binding.
Could you make sure that processes are not bound to cores (the default in 1.4.1),
e.g. with mpirun --bind-to-none?
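If binding does turn out to be the issue, the test run would look something like the sketch below. This is illustrative only: the process count and the application name (`./my_app`) are placeholders, not from the original report; `--bind-to-none` is the Open MPI 1.3/1.4-era mpirun option that disables binding ranks to cores.

```shell
# Hypothetical invocation: -np 4 and ./my_app are placeholders.
# --bind-to-none asks mpirun not to pin MPI ranks to specific cores.
CMD="mpirun --bind-to-none -np 4 ./my_app"
echo "$CMD"
```

Comparing timings of the same job with and without `--bind-to-none` would show whether binding interacts with the 2.6.24 scheduler change.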

Just an idea...


On Tuesday 06 April 2010 10:07:35 am Oliver Geisler wrote:
> Hello Devel-List,
> I am a bit at a loss with this matter. I already posted to the users
> list; in case you don't read that list, I am posting here as well.
> This is the original posting:
> Short:
> Switching from kernel 2.6.23 to 2.6.24 (and up), using openmpi 1.2.7-rc2
> (I know outdated, but in debian stable, and same results with 1.4.1)
> increases communication times between processes (essentially between one
> master and several slave processes). This is regardless of whether the
> processes are local only or communication is over ethernet.
> Has anybody witnessed such behavior?
> Any ideas what I should test for?
> What additional information should I provide?
> Thanks for your time
> oli

Rainer Keller, PhD           Tel: +1 (865) 241-6293
Oak Ridge National Lab       Fax: +1 (865) 241-4811
PO Box 2008 MS 6164          Email: keller_at_[hidden]
Oak Ridge, TN 37831-2008     AIM/Skype: rusraink