
Subject: Re: [OMPI users] Heterogeneous OpenFabrics hardware
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-01-26 15:03:02


This scenario was not mentioned, but I'll bet it falls into the same
general category. If an HCA has different run-time characteristics,
regardless of whether those differences come from the OEM or the
reseller, that's probably "heterogeneous enough" for this discussion.

On Jan 26, 2009, at 2:41 PM, Don Kerr wrote:

> Jeff,
>
> Did the IWG say anything about there being a chipset issue? For
> example, if a vendor, say Sun, wraps Mellanox chips on its own
> HCAs, would a Mellanox HCA and a Sun HCA work together?
>
> -DON
>
> On 01/26/09 14:19, Jeff Squyres wrote:
>> The Interop Working Group (IWG) of the OpenFabrics Alliance asked
>> me to bring a question to the Open MPI user and developer
>> communities: is anyone interested in having a single MPI job span
>> HCAs or RNICs from multiple vendors? (pardon the cross-posting,
>> but I did want to ask each group separately -- because the answers
>> may be different)
>>
>> The interop testing lab at the University of New Hampshire
>> (http://www.iol.unh.edu/services/testing/ofa/) discovered that most
>> (all?) MPI implementations fail when a single MPI job spans HCAs
>> from multiple vendors and/or RNICs
>> from multiple vendors. I don't remember the exact details (and
>> they may not be public, anyway), but I'm pretty sure that OMPI
>> failed when used with QLogic and Mellanox HCAs in a single MPI
>> job. This is fairly unsurprising, given how we tune Open MPI's use
>> of OpenFabrics-capable hardware based on our .ini file.
>>
>> So my question is: does anyone want/need to support jobs that span
>> HCAs from multiple vendors and/or RNICs from multiple vendors?
>>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
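
For context on the .ini tuning mentioned above: Open MPI's openib BTL
reads per-device parameters from a file shipped with the install (the
device-params .ini file). The excerpt below is a sketch in that file's
style; the section names and values are invented for illustration and
are not copied from the shipped file. It shows how two different HCAs
in one job can end up with incompatible settings (e.g., different MTUs
or eager-RDMA behavior):

    # Hypothetical excerpt in the style of Open MPI's device-params .ini
    # file. Values are made up for illustration; consult the file shipped
    # with your Open MPI install for the real entries.

    [Vendor A HCA]
    vendor_id = 0x02c9          # PCI vendor ID(s) this section applies to
    vendor_part_id = 25418      # PCI device/part ID(s)
    use_eager_rdma = 1          # enable eager RDMA for small messages
    mtu = 2048                  # MTU used on this device, in bytes

    [Vendor B HCA]
    vendor_id = 0x1077
    vendor_part_id = 7220
    use_eager_rdma = 0          # eager RDMA disabled here
    mtu = 4096                  # a different MTU than Vendor A's devices

If two peers in the same job resolve to sections with different
settings like these, the openib BTL's assumption that both sides agree
on such parameters can break down, which is consistent with the
UNH-IOL failures described above.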

-- 
Jeff Squyres
Cisco Systems