Open MPI User's Mailing List Archives

From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-10-18 08:31:15


On Oct 18, 2007, at 7:56 AM, Gleb Natapov wrote:

>> Open MPI v1.2.4 (and newer) will get around 1.5us latency with 0 byte
>> ping-pong benchmarks on Mellanox ConnectX HCAs. Prior versions of
>> Open MPI can also achieve this low latency by setting the
>> btl_openib_use_eager_rdma MCA parameter to 1.
>
> Actually, setting btl_openib_use_eager_rdma to 1 will not help. The
> reason is that it is 1 by default anyway, but Open MPI disables eager
> RDMA because it can't find the HCA's description in the ini file and
> cannot distinguish between the default value and a value that the user
> set explicitly.
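
For the archives: the usual way to set an MCA parameter for a single run is
on the mpirun command line, and ompi_info shows the value the openib BTL
currently has for it. Something like this should work (./your_app is just a
placeholder):

   # set the parameter explicitly for one run
   mpirun --mca btl_openib_use_eager_rdma 1 -np 2 ./your_app

   # show the value the openib BTL reports for the parameter
   ompi_info --param btl openib | grep use_eager_rdma

As Gleb explains above, though, that won't help in this case, because 1 is
already the default.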

Arrgh; that's a fun (read: annoying) bug. Well, it's not a total
loss -- you can still get the same performance in older Open MPI
versions by adding the following to the end of the
$prefix/share/openmpi/mca-btl-openib-hca-params.ini file:

[Mellanox Hermon]
vendor_id = 0x2c9,0x5ad,0x66a,0x8f1,0x1708
vendor_part_id = 25408,25418,25428
use_eager_rdma = 1
mtu = 2048
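
To double-check that this stanza actually matches your hardware, you can
compare the IDs the HCA reports against the vendor_id / vendor_part_id
values above (this assumes the standard OFED ibv_devinfo utility is
installed):

   # compare this output against the vendor_id / vendor_part_id lines above
   ibv_devinfo | grep -E 'vendor_id|vendor_part_id'

Once the entry is in place, Open MPI should find a matching description for
the HCA and use eager RDMA by default.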

-- 
Jeff Squyres
Cisco Systems