Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Low Open MPI performance on InfiniBand and shared memory?
From: Peter Kjellstrom (cap_at_[hidden])
Date: 2010-07-09 08:39:10


On Friday 09 July 2010, Andreas Schäfer wrote:
> Thanks, those were good suggestions.
>
> On 11:53 Fri 09 Jul, Peter Kjellstrom wrote:
> > On an E5520 (nehalem) node I get ~5 GB/s ping-pong for >64K sizes.
>
> I just tried a Core i7 system which maxes at 6550 MB/s for the
> ping-pong test.

It makes quite a difference whether the ranks end up on the same socket or
on different sockets (on a Core i7 you only have one socket).
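For reference, the numbers being compared here come from a simple two-rank
ping-pong: rank 0 sends a buffer, rank 1 echoes it back, and bandwidth is
computed from the round-trip time. A minimal sketch in C (message size,
iteration count and the output format are my own choices, not from this
thread) could look like this:

/* Minimal two-rank MPI ping-pong bandwidth sketch. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 100;
    const int size = 1 << 20;   /* 1 MiB, i.e. in the >256K regime discussed */
    int rank, i;
    char *buf = malloc(size);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)  /* 2*size bytes move per round trip */
        printf("ping-pong bandwidth: %.1f MB/s\n",
               (2.0 * iters * size) / (t1 - t0) / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}

Whether the two ranks share a socket or not can be influenced with mpirun's
process binding/mapping options, and on reasonably recent Open MPI versions
the resulting placement can be checked with --report-bindings.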

> > On QDR IB on similar nodes I get ~3 GB/s ping-pong for >256K.
>
> I'll try to find an Intel system to repeat the tests. Maybe it's AMD's
> different memory subsystem/cache architecture that is slowing Open
> MPI down? Or are my systems just badly configured?

An 8x PCI-Express gen2 (5 GT/s) slot should show figures like mine. If it's
PCI-Express gen1, gen2 at 2.5 GT/s, or only 4x wide, or if the IB link only
came up with two lanes, then ~1500 MB/s is about what you'd expect.
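
To rule that out you can check both the negotiated PCI-Express link and the
active IB port width/speed: lspci -vv shows the link status of the HCA
(LnkSta: Speed ..., Width x...), and ibv_devinfo or ibstat show the active IB
width and speed. The same information is available programmatically through
libibverbs; a small sketch (assuming a single HCA and port 1, compiled with
-libverbs):

/* Sketch: query the active IB link width/speed via libibverbs. */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) { fprintf(stderr, "no IB devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "cannot open device\n"); return 1; }

    struct ibv_port_attr attr;
    if (ibv_query_port(ctx, 1, &attr)) { fprintf(stderr, "query failed\n"); return 1; }

    /* active_width: 1=1x, 2=4x, 4=8x, 8=12x
       active_speed: 1=2.5 Gb/s (SDR), 2=5 Gb/s (DDR), 4=10 Gb/s (QDR) */
    printf("active_width code: %u, active_speed code: %u\n",
           (unsigned)attr.active_width, (unsigned)attr.active_speed);

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

A QDR link that came up 4x at 10 Gb/s per lane is fine; anything narrower or
slower would explain bandwidth in the ~1500 MB/s range.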

/Peter