Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] Heterogeneous OpenFabrics hardware
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-01-26 16:46:35


On Jan 26, 2009, at 4:33 PM, Nifty Tom Mitchell wrote:

> I suspect the most common transport would be TCP/IP, and that would
> introduce gateway and routing issues between quick fabrics and other
> quick fabrics that would be intolerable for most HPC applications
> (but not all).
>
> It may be that IPoIB would be a sufficient communication layer for
> InfiniBand fabrics but would not address Myrinet or GigE+ links.
> Gateways and bridges would have to come to the party.

I think the prevalent attitude would be: "if you have a low-latency
network, why hobble yourself with IP over <native>?"
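
(For illustration only, and from memory, so double-check the syntax
against your install; "my_mpi_app" is just a placeholder. The usual way
to stay on the native transport, or to deliberately drop down to
TCP/IPoIB, is to pick the BTLs by hand with MCA parameters:

  # stay on the native verbs transport (plus shared memory and self)
  mpirun --mca btl openib,sm,self -np 16 ./my_mpi_app

  # force everything over the TCP BTL, which rides IPoIB on an IB fabric
  mpirun --mca btl tcp,self -np 16 ./my_mpi_app

Nobody who bought a low-latency fabric wants to run the second one.)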

> On this point...
>>> but I'm pretty sure that OMPI failed when used with QLogic and
>>> Mellanox HCAs in a single MPI job. This is fairly unsurprising,
>>> given
> if OMPI was compiled to use the QLogic PSM layer, then it would
> interoperate best with PSM-capable hardware. Since QLogic sells
> multiple HCAs, including Mellanox-design HCAs, it is incorrect to
> make a blanket statement that QLogic HCAs do not interoperate with
> Mellanox.

Note that I did not say that. I specifically stated that OMPI failed,
and that the failure is because we customize OMPI for the individual
hardware devices. To be clear: this is an OMPI issue. I'm asking (at
the request of the IWG) whether anyone cares about fixing it.
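
(Sketch only, not a tested recipe, and "my_job" is a placeholder: until
this is fixed, about the only lowest-common-denominator workaround I
can suggest for a mixed-HCA job is to take the verbs/PSM paths out of
the picture entirely and run over the TCP BTL, and to use ompi_info to
see what each host's build actually provides:

  # lowest common denominator: run the whole job over the TCP BTL
  mpirun --mca pml ob1 --mca btl tcp,self -np 32 ./my_job

  # see which openib / PSM components and parameters a given host has
  ompi_info --param btl openib
  ompi_info --param mtl psm

Which, per the above, defeats the purpose of buying the fast hardware
in the first place.)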

-- 
Jeff Squyres
Cisco Systems