Open MPI User's Mailing List Archives


Subject: [OMPI users] InfiniBand, different OpenFabrics transport types
From: Bill Johnstone (beejstone3_at_[hidden])
Date: 2011-06-28 13:46:00


Hello all.

I have a heterogeneous network of InfiniBand-equipped hosts which are all connected to the same backbone switch, an older SDR 10 Gb/s unit.

One set of nodes uses the Mellanox "ib_mthca" driver, while the other uses the "mlx4" driver.

This is on Linux 2.6.32, with Open MPI 1.5.3.

When I run Open MPI across these node types, I get an error message of the form:

Open MPI detected two different OpenFabrics transport types in the same InfiniBand network.
Such mixed network transport configuration is not supported by Open MPI.

Local host: compute-chassis-1-node-01
Local adapter: mthca0 (vendor 0x5ad, part ID 25208)
Local transport type: MCA_BTL_OPENIB_TRANSPORT_UNKNOWN

Remote host: compute-chassis-3-node-01
Remote Adapter: (vendor 0x2c9, part ID 26428)
Remote transport type: MCA_BTL_OPENIB_TRANSPORT_IB
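For reference, here is how I have been comparing what each adapter reports. The `transport` and `vendor_part_id` fields come from `ibv_devinfo` (part of libibverbs); the device names `mthca0` and `mlx4_0` match the adapters in the error above, but yours can be listed with `ibv_devices` if they differ:

```shell
# On an ib_mthca node: show the transport type and part ID for mthca0
ibv_devinfo -d mthca0 | grep -E 'transport|vendor_part_id'

# On an mlx4 node: same query for mlx4_0
ibv_devinfo -d mlx4_0 | grep -E 'transport|vendor_part_id'
```

Both adapters report their link as InfiniBand at this level, which is part of why the MCA_BTL_OPENIB_TRANSPORT_UNKNOWN result surprises me.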

Two questions:

1. Why is this occurring if both adapters have the full OpenFabrics software stack set up?  Is it because Open MPI is trying to use ConnectX-specific functionality on the newer hardware, which is incompatible with the older adapters, or is it something more mundane?

2. How can I use IB amongst these heterogeneous nodes?

Thank you.