Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] compiling openmpi with mixed CISCO infiniband card and Mellanox infiniband cards.
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2009-10-28 16:09:10


On Oct 28, 2009, at 1:08 PM, nam kim wrote:

> The head node and the other compute nodes have topspin-ib-rhel4-3.2.0-118
> installed, with CISCO IB cards (HCA-320-A1).
>

Is there a reason you're not using OFED? OFED is *much* more modern
and has many more features than the old Cisco/Topspin IB driver
stack. I don't remember when the last Cisco IB stack release was, but
I think it was (literally) years ago. We put all of our development
effort into OFED quite a while ago.

Additionally, Open MPI removed support for the old MVAPI-style IB
stacks (including the Cisco/Topspin stack) starting with Open MPI
v1.3. So if you stick with the old stack, you're pretty much limited
to Open MPI v1.2.x.

> Our new nodes have Mellanox IB cards (MHRH19-XTC). My question is: how
> do we compile openmpi with heterogeneous IB cards?
>

I'm afraid I don't know the subtle differences between those two
cards. Most of Open MPI's HCA detection and tuning is done at run
time, not compile time, so there is nothing special you need to do
when compiling. My advice would be to upgrade to the latest stable
OFED release and the latest stable Open MPI release, then try running
and see what happens. It will "probably" work just fine. If not, we
can tweak some run-time parameters to force Open MPI to use the same
settings on all of your HCAs, and then it will work.
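
For example, here is a minimal sanity test along those lines (a
sketch; the hostnames nodeA and nodeB and the file name are
placeholders for your own setup): run one rank on a node with the
Cisco HCA and one on a node with the Mellanox HCA, and check that a
message makes the round trip. Restricting the BTL list to
"openib,self" keeps Open MPI from silently falling back to TCP and
masking an IB problem.

/* ib_pingpong.c -- minimal cross-HCA sanity test (illustrative sketch).
 *
 * Compile:  mpicc ib_pingpong.c -o ib_pingpong
 * Run (nodeA = Cisco HCA node, nodeB = Mellanox HCA node; placeholders):
 *   mpirun --mca btl openib,self -np 2 --host nodeA,nodeB ./ib_pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, buf = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Send a token to rank 1 and wait for it to come back. */
        buf = 42;
        MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("round trip over IB ok (got %d)\n", buf);
    } else if (rank == 1) {
        /* Echo the token back to rank 0. */
        MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

If that runs cleanly, your heterogeneous setup is fine; if it hangs or
errors out, that's where the run-time parameter tweaking would come in.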

Does that make sense?

-- 
Jeff Squyres
jsquyres_at_[hidden]