Open MPI User's Mailing List Archives

From: Scott Atchley (atchley_at_[hidden])
Date: 2007-01-18 07:08:57


On Jan 18, 2007, at 5:05 AM, Peter Kjellstrom wrote:

> On Thursday 18 January 2007 09:52, Robin Humble wrote:
> ...
>> is ~10 Gbit the best I can expect from 4x DDR IB with MPI?
>> some docs @HP suggest up to 16 Gbit (data rate) should be possible,
>> and I've heard that 13 or 14 has been achieved before. but those
>> might be verbs numbers, or maybe horsepower >> 4 cores of 2.66 GHz
>> Core 2 is required?
>
> The 16 Gbit/s number is the theoretical peak: IB is coded 8b/10b, so
> out of the 20 Gbit/s signaling rate, 16 Gbit/s is what you get. On
> SDR the achievable number is (of course) 8 Gbit/s (which is ~1000
> MB/s), and I've seen well above 900 MB/s on MPI (this on 8x PCIe, a
> 2x margin).
>
> The same setup on 4x PCIe stops at a bit over 700 MB/s (for a
> certain PCIe chipset), so it makes some sense that an IB 4x DDR HCA
> on 8x PCIe would be limited to about 1500 MB/s (on that platform).
> All this ignores possible MPI bottlenecks above 900 MB/s and assumes
> the IB fabric can reach 95%+ of peak on DDR as it does on SDR...
>
> /Peter
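
To make Peter's 8b/10b numbers concrete, here is a quick sketch in
Python (nothing IB-specific, just the encoding arithmetic he
describes):

# 8b/10b coding sends every 8 data bits as 10 line bits, so the
# usable data rate is 8/10 of the signaling rate.

def ib_data_rate(signaling_gb_s):
    return signaling_gb_s * 8 / 10

for name, signaling in (("4x SDR", 10.0), ("4x DDR", 20.0)):
    data = ib_data_rate(signaling)
    print(f"{name}: {signaling:.0f} Gb/s signaling -> "
          f"{data:.0f} Gb/s data (~{data * 1000 / 8:.0f} MB/s)")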

The best uni-directional performance I have heard of for IB DDR on
PCIe 8x is ~1,400 MB/s (11.2 Gb/s) with Lustre, which is about 56% of
the advertised 20 Gb/s signaling rate. The ~900 MB/s (7.2 Gb/s)
mentioned above is, of course, ~72% of SDR's advertised 10 Gb/s. If
any IB folks have better numbers, please correct me.
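
For reference, the arithmetic behind those percentages (same
conversion as above: 1 MB/s = 0.008 Gb/s; the figures are the ones
quoted in this thread, not measurements of my own):

# Efficiency of the quoted throughput figures relative to the
# advertised signaling rate of each link.

figures = [
    ("4x DDR + Lustre", 1400, 20.0),  # MB/s measured, Gb/s advertised
    ("4x SDR + MPI",     900, 10.0),
]

for name, mb_s, advertised in figures:
    gb_s = mb_s * 8 / 1000
    print(f"{name}: {gb_s:.1f} Gb/s = "
          f"{gb_s / advertised:.0%} of advertised")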

The data throughput limit for 8x PCIe is ~12 Gb/s. The theoretical
limit is 16 Gb/s, but each PCIe packet carries roughly 20 bytes of
header and framing overhead. If the adapter moves 64-byte payloads
per packet, then about a quarter of the throughput (20 of every 84
bytes on the wire) goes to overhead, which is where the ~12 Gb/s
comes from.
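
A sketch of that calculation (the flat 20-byte per-packet figure is
the approximation above; real TLP overhead varies with header size
and flow-control traffic):

# Effective 8x PCIe data throughput for a given payload size,
# assuming a flat ~20 bytes of header/framing overhead per packet.

LINK_GB_S = 16.0     # 8x PCIe data rate (after the link's own coding)
OVERHEAD_B = 20      # approximate per-packet (TLP) overhead in bytes

def effective_gb_s(payload_b):
    return LINK_GB_S * payload_b / (payload_b + OVERHEAD_B)

for payload in (64, 128, 256):
    print(f"{payload:3d}-byte payloads -> "
          f"{effective_gb_s(payload):.1f} Gb/s")

Larger payloads amortize the per-packet overhead (64 -> ~12.2 Gb/s,
256 -> ~14.8 Gb/s), which is why the maximum payload size the adapter
and chipset negotiate matters so much here.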

Scott