
Open MPI Development Mailing List Archives


Subject: [OMPI devel] Heterogeneous OpenFabrics hardware
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-01-26 14:19:15

The Interop Working Group (IWG) of the OpenFabrics Alliance asked me
to bring a question to the Open MPI user and developer communities: is
anyone interested in having a single MPI job span HCAs or RNICs from
multiple vendors? (pardon the cross-posting, but I did want to ask
each group separately -- because the answers may be different)

The interop testing lab at the University of New Hampshire discovered
that most (all?) MPI implementations fail when a single MPI job spans
HCAs from multiple vendors and/or RNICs from multiple vendors. I
don't remember the exact details (and they may not be public,
anyway), but I'm pretty sure that OMPI failed when used with QLogic
and Mellanox HCAs in a single MPI job. This is fairly unsurprising,
given how we tune Open MPI's use of OpenFabrics-capable hardware
based on an internal Open MPI .ini file.
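For context, that per-device tuning lives in a parameter file shipped with Open MPI (the openib BTL's device-parameters .ini). The fragment below is an illustrative sketch from memory, not an exact copy of any release; the section names, part IDs, and key names are assumptions, but it shows the mechanism: tuning values such as MTU and eager-RDMA use are selected per device, keyed on PCI vendor/part IDs.

```ini
; Illustrative sketch of Open MPI's openib device-parameters file
; (e.g. share/openmpi/mca-btl-openib-hca-params.ini in the 1.3 series;
; exact section names, part IDs, and keys may differ by release).

[Mellanox Hermon]
vendor_id = 0x2c9
vendor_part_id = 25408,25418
use_eager_rdma = 1
mtu = 2048

[QLogic InfiniPath]
vendor_id = 0x1fc1
vendor_part_id = 13
use_eager_rdma = 1
mtu = 4096
```

Because each HCA is tuned independently, two peers in the same job running on different vendors' hardware can end up with mismatched settings (e.g. different MTUs), which is one plausible source of the failures seen in interop testing.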

So my question is: does anyone want/need to support jobs that span
HCAs from multiple vendors and/or RNICs from multiple vendors?

Jeff Squyres
Cisco Systems