
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Low Open MPI performance on InfiniBand and shared memory?
From: Andreas Schäfer (gentryx_at_[hidden])
Date: 2010-07-09 08:56:31

On Fri, 09 Jul 2010 at 14:39, Peter Kjellstrom wrote:
> x8 PCI Express gen2 at 5 GT/s should show figures like mine. If it's
> PCI Express gen1, gen2 at 2.5 GT/s, or x4, or if the IB link only came
> up with two lanes, then 1500 is expected.

lspci and ibv_devinfo tell me it's PCIe 2.0 x8 and InfiniBand 4x QDR
(active_width 4X, active_speed 10.0 Gbps), so I /should/ be able to
get about twice the throughput of what I'm currently seeing.
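For context, the raw capacity of both links can be sketched from their published signalling rates (this is a back-of-the-envelope calculation, not a measurement from this system): PCIe 2.0 runs at 5 GT/s per lane and QDR InfiniBand at 10 Gb/s per lane, both with 8b/10b encoding.

```python
# Theoretical peak bandwidth of the two links in this setup.
# Numbers are the standard published rates (an illustration,
# not output from lspci or ibv_devinfo).

PCIE_GEN2_GT_PER_LANE = 5.0   # GT/s per lane, PCIe 2.0
IB_QDR_GBPS_PER_LANE = 10.0   # Gb/s signalling per lane, QDR
ENCODING = 8.0 / 10.0         # both links use 8b/10b encoding

def pcie_gbytes_per_s(lanes):
    # 1 GT/s carries 1 Gb/s of raw signalling per lane
    return PCIE_GEN2_GT_PER_LANE * lanes * ENCODING / 8.0

def ib_gbytes_per_s(lanes):
    return IB_QDR_GBPS_PER_LANE * lanes * ENCODING / 8.0

print(pcie_gbytes_per_s(8))  # 4.0 GB/s for PCIe 2.0 x8
print(ib_gbytes_per_s(4))    # 4.0 GB/s for 4x QDR
```

Both links top out around 4 GB/s before protocol overhead, and real QDR benchmarks typically reach roughly 3 GB/s, which is about twice the ~1500 MB/s being observed.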

Andreas Schäfer
HPC and Grid Computing
Chair of Computer Science 3
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
+49 9131 85-27910
PGP/GPG key via keyserver
