Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] problem with alltoall with ppn=8
From: Daniël Mantione (daniel.mantione_at_[hidden])
Date: 2008-08-16 05:53:32

On Fri, 15 Aug 2008, Kozin, I (Igor) wrote:

> Hello, I would really appreciate any advice on troubleshooting/tuning
> Open MPI over ConnectX. More details about our setup can be found here.
> Single process per node (ppn=1) seems to be fine (the results for IMB
> can be found here).

This behaviour happens on all Mellanox devices. I have been in contact
with them to get more information about it, but never got a convincing
answer. According to Mellanox, it happened because my InfiniHost III Lx was
not designed for a system with 8 cores in total; it should not happen with
the InfiniHost III Ex or ConnectX.

As you noticed, it also happens with ConnectX, and I can confirm it also
happens on systems with only 4 cores in total. However, the problem is less
severe with ConnectX than with the InfiniHost III.

The solution for me has been to sell QLogic rather than Mellanox to
customers who need good alltoall performance.