
Open MPI User's Mailing List Archives



From: Robin Humble (rjh+openmpi_at_[hidden])
Date: 2007-01-18 04:00:57

argh. attached.


On Thu, Jan 18, 2007 at 03:52:19AM -0500, Robin Humble wrote:
>On Wed, Jan 17, 2007 at 08:55:31AM -0700, Brian W. Barrett wrote:
>>On Jan 17, 2007, at 2:39 AM, Gleb Natapov wrote:
>>> On Wed, Jan 17, 2007 at 04:12:10AM -0500, Robin Humble wrote:
>>>> basically I'm seeing wildly different bandwidths over InfiniBand 4x DDR
>>>> when I use different kernels.
>>> Try to load ib_mthca with tune_pci=1 option on those kernels that are
>>> slow.
>>[try the leave pinned protocol] when an application has high buffer reuse (like NetPIPE);
>>it can be enabled by adding "-mca mpi_leave_pinned 1" to the mpirun command
>thanks! :-)
>tune_pci=1 makes a huge difference at the top end, and
>-mca mpi_leave_pinned 1 adds lots of midrange bandwidth.
>latencies (~4us) and the low end performance are all unchanged.
>see attached for details.
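
For reference, a NetPIPE run with that flag would look something like this (a sketch only; the hostfile name and the ./NPmpi binary path are placeholders, not from the thread):

```shell
# Run NetPIPE's MPI benchmark across two nodes with the
# leave-pinned protocol enabled via the MCA parameter
# ("hosts" and ./NPmpi are placeholder names)
mpirun -np 2 -hostfile hosts -mca mpi_leave_pinned 1 ./NPmpi
```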
>most curves are for the newer kernel, except the last couple (tagged as old)
>which are for 2.6.9-42.0.3.ELsmp and for which tune_pci changes nothing.
>why isn't tune_pci=1 the default I wonder?
>files in /sys/module/ib_mthca/ tell me it's off by default in
>2.6.9-42.0.3.ELsmp, but the results imply that it's on... maybe PCIe
>handling is very different in that kernel.
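
For reference, Gleb's tune_pci suggestion amounts to reloading the driver with the option set (a sketch; module and option names are as given in the thread, and the persistent-config syntax assumes a RHEL-era /etc/modprobe.conf):

```shell
# Reload the Mellanox mthca HCA driver with PCI tuning enabled
# (run as root; will disrupt any jobs using the fabric)
modprobe -r ib_mthca
modprobe ib_mthca tune_pci=1

# To make it persistent across reboots (assumed modprobe.conf syntax):
#   options ib_mthca tune_pci=1
```

The current value can be checked under /sys/module/ib_mthca/, as noted above.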
>is ~10Gbit the best I can expect from 4x DDR IB with MPI?
>some docs @HP suggest up to 16Gbit (data rate) should be possible, and
>I've heard that 13 or 14 Gbit has been achieved before, but those might be
>verbs numbers, or maybe horsepower >> 4 cores of 2.66GHz core2 is needed.
>>It would be interesting to know if the bandwidth differences appear
>>when the leave pinned protocol is used. My guess is that they will
>yeah, it definitely makes a difference in the 10 KB to 10 MB range.
>at around 100 KB there's 2x the bandwidth when using leave_pinned.
>thanks again!
>> Brian Barrett
>> Open MPI Team, CCS-1
>> Los Alamos National Laboratory
>how's OpenMPI on Cell? :)