Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] compiling openmpi with mixed CISCO infiniband card and Mellanox infiniband cards
From: nam kim (namkkim_at_[hidden])
Date: 2009-10-28 13:08:07


Jeff,

Thank you for your reply!

Further question,

The head node and the other computing nodes have topspin-ib-rhel4-3.2.0-118
installed with the Cisco IB card (HCA-320-A1). Our new nodes have the
Mellanox IB card (MHRH19-XTC). My question is: how do I compile Open MPI
with heterogeneous IB cards?

I used to compile with --with-mvapi=/usr/local/topspin
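
For what it is worth, here is a rough sketch of the configure invocation I
am considering for a 1.3 build against the OpenFabrics (verbs) stack, in
place of the old --with-mvapi flag; the /usr and /opt paths below are only
my guesses and would need to match wherever OFED is actually installed:

    # Build Open MPI 1.3.x against the OFED/OpenFabrics verbs libraries
    # (in place of the --with-mvapi flag I used for 1.2.8 with the Topspin stack).
    # Prefix and OFED location are assumptions; adjust to the real paths.
    ./configure --prefix=/opt/openmpi-1.3 --with-openib=/usr
    make -j4 all
    make install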

Thanks
-Nam

On Mon, Oct 26, 2009 at 12:27 PM, Jeff Squyres <jsquyres_at_[hidden]> wrote:
> On Oct 16, 2009, at 1:55 PM, nam kim wrote:
>
>> Our school has a cluster running over Cisco-based InfiniBand cards and
>> a switch.
>> Recently, we purchased more computing nodes with Mellanox cards, since
>> Cisco no longer makes IB cards.
>>
>
> Sorry for the delay in replying; my INBOX has grown totally out of hand
> recently.  :-(
>
> FWIW, Cisco never made IB HCAs; we simply resold Mellanox HCAs.
>
>> Currently, I use Open MPI 1.2.8 compiled for the Cisco IB card
>> (SFS-HCA-320-A1) with the Topspin driver. My questions are:
>>
>> 1. Is it possible to compile version 1.3 with mixed Cisco IB and Mellanox
>> IB (MHRH19-XTC) cards using the OpenFabrics (OpenIB) libraries?
>>
>
> Do you mean: is it possible to use Open MPI 1.3.x with a recent OFED
> distribution across multiple nodes, some of which include Cisco-branded HCAs
> and some of which include Mellanox HCAs?
>
> The answer is: most likely, yes.  Open MPI doesn't fully support
> "heterogeneous" HCAs (e.g., HCAs that would require different MTUs).  But I
> suspect that your HCAs are all "close enough" that it won't matter.  FWIW,
> on my 64-node MPI testing cluster at Cisco, I do similar things -- I have
> various Cisco and Mellanox HCAs of different generations and specific
> capabilities, and Open MPI runs fine.
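>
> As a quick sanity check, you can look at what each HCA actually reports
> (port state, active MTU, firmware level) with OFED's ibv_devinfo, and you
> can confirm that your Open MPI build picked up openib support with
> ompi_info.  Roughly (hostnames and the test binary are just placeholders):
>
>   # On each node: list the HCAs and what they report (ports, MTU, firmware)
>   ibv_devinfo
>
>   # Confirm the openib BTL was built into this Open MPI installation
>   ompi_info | grep openib
>
>   # Small test run restricted to the IB, shared-memory, and self transports
>   mpirun --mca btl openib,sm,self -np 4 -host node1,node2 ./hello_mpi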
>
>> 2. Is it possible to compile 1.2.8 with mixed Cisco IB and Mellanox IB
>> cards, and if so, how?
>>
>
>
> If you can, I'd highly suggest upgrading to the Open MPI v1.3 series.
>
> --
> Jeff Squyres
> jsquyres_at_[hidden]
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>