
Subject: Re: [OMPI users] compiling openmpi with mixed CISCO infiniband cards and Mellanox infiniband cards
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-10-26 15:27:05


On Oct 16, 2009, at 1:55 PM, nam kim wrote:

> Our school has a cluster running over Cisco-based InfiniBand cards
> and a switch.
> Recently, we purchased more computing nodes with Mellanox cards,
> since Cisco has stopped making IB cards.
>

Sorry for the delay in replying; my INBOX has grown totally out of
hand recently. :-(

FWIW, Cisco never made IB HCAs; we simply resold Mellanox HCAs.

> Currently, I use Open MPI 1.2.8 compiled for a Cisco IB card (SFS-
> HCA-320-A1) with the Topspin driver. My questions are:
>
> 1. Is it possible to compile the 1.3 version with mixed Cisco IB and
> Mellanox IB (MHRH19-XTC) cards using the OpenIB libraries?
>

Do you mean: is it possible to use Open MPI 1.3.x with a recent OFED
distribution across multiple nodes, some of which include Cisco-
branded HCAs and some of which include Mellanox HCAs?

The answer is: most likely, yes. Open MPI doesn't fully support
"heterogeneous" HCAs (e.g., HCAs that would require different MTUs).
But I suspect that your HCAs are all "close enough" that it won't
matter. FWIW, on my 64-node MPI testing cluster at Cisco, I do
similar things -- I have various Cisco and Mellanox HCAs of different
generations and specific capabilities, and Open MPI runs fine.
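
If you want to sanity-check that the HCAs on both sets of nodes
advertise compatible parameters before a big run, OFED's ibv_devinfo
utility shows each card's active MTU and firmware level. A rough
sketch (the hostnames and the hello_world binary below are just
placeholders):

    # On one node of each type, compare active_mtu and firmware
    ssh node-cisco01 ibv_devinfo -v | grep -E 'active_mtu|fw_ver'
    ssh node-mlx01 ibv_devinfo -v | grep -E 'active_mtu|fw_ver'

    # Force the openib BTL so you know IB (not TCP) is actually used
    mpirun --mca btl openib,self,sm -np 16 --hostfile myhosts ./hello_world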

> 2. If it is possible to compile 1.2.8 with mixed Cisco IB and
> Mellanox IB, then how?
>

If you can, I'd highly suggest upgrading to the Open MPI v1.3 series.
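
For reference, building the v1.3 series against a system OFED install
looks roughly like the following; the exact version number and the
install prefix are just examples:

    tar xjf openmpi-1.3.3.tar.bz2
    cd openmpi-1.3.3
    ./configure --prefix=/opt/openmpi-1.3.3 --with-openib=/usr
    make -j4 all && make install

Once that build is installed on all nodes (and OFED is installed
everywhere), the same mpirun invocation should work across both the
Cisco-branded and the Mellanox HCAs.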

-- 
Jeff Squyres
jsquyres_at_[hidden]