Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] question regarding the configuration of multiple nics for openmpi
From: Gus Correa (gus_at_[hidden])
Date: 2008-11-04 11:03:02

Hi Olivier and list

I presume you are talking about Ethernet or GigE.
The basic information on how to launch jobs is on the OpenMPI FAQ pages.

Here is what I did on our toy/test cluster made of salvaged computers.

1) I use the ROCKS cluster distribution, which makes some of the steps
described below more automatic.
However, ROCKS is not needed for this.

2) I have actually three private networks, but you may use, say, two,
if your motherboards have dual Ethernet (or GigE) ports.
Each node has three NICs, which Linux recognized and activated as eth0,
eth1, eth2.

Make sure you and Linux agree on which physical port is eth0, eth1, etc.
This can be a bit tricky: the kernel seems to have its own wisdom and
mood when it assigns the port names.
Ping, lspci, ifconfig, ifup, ifdown, and ethtool are your friends here,
and can help you sort out the correct port-name map.
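For instance, one way to work out the name-to-port map (a sketch; the
interface names and the output shown depend entirely on your hardware)
is:

```shell
# List the PCI Ethernet controllers the kernel found
lspci | grep -i ethernet

# Show each interface's MAC address, to match against the address
# printed on the NIC or motherboard port label
ifconfig -a | grep -i hwaddr

# Blink the port LED on eth0 for 10 seconds (ethtool -p), so you can
# see which physical jack it is
ethtool -p eth0 10

# If the NIC lacks the blink feature, unplug/replug a cable and watch
# the link state change instead
ethtool eth0 | grep "Link detected"
```

These are read-only diagnostics, so it is safe to run them on every
node until the map is sorted out.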

3) For a modest number of nodes (fewer than eight or so) you can buy
inexpensive SOHO-type GigE switches, one for each network,
for about $50 apiece. (This is what I did.)
For more nodes you would need larger switches.
Use Cat5e or Cat6 Ethernet cables and connect the separate networks
through the correct ports on the nodes and switches.
Well, you may have done that already ...

4) On RHEL or Fedora the essential information is in the
/etc/sysconfig/network-scripts/ifcfg-eth* files
on each of your cluster nodes.
Other Linux distributions may have equivalent files.
You need to edit these files to insert the correct IP address, netmask,
and MAC address.

For instance, if you have fewer than 254 nodes, you can define private
networks like this:
net1) netmask (using the eth0 port)
net2) netmask (using the eth1 port)
net3) netmask (using the eth2 port)
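As a concrete illustration (the exact address ranges are a site choice;
the 192.168.x.0/24 networks below are an assumed example, not taken
from the original post), the three private /24 networks could look
like this:

```shell
# Hypothetical RFC 1918 private subnets, one per NIC; any private
# ranges would do, as long as the three subnets do not overlap.
#
# net1: 192.168.1.0  netmask 255.255.255.0  (the eth0 ports)
# net2: 192.168.2.0  netmask 255.255.255.0  (the eth1 ports)
# net3: 192.168.3.0  netmask 255.255.255.0  (the eth2 ports)
#
# node1 would then get 192.168.1.1, 192.168.2.1, and 192.168.3.1;
# node2 would get 192.168.1.2, 192.168.2.2, and 192.168.3.2; etc.
```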

Here is an example:

[node1] $ cat /etc/sysconfig/network-scripts/ifcfg-eth0

HWADDR=(put your eth0 port MAC address here)
IPADDR= ( ... on node2, etc)

[node1] $ cat /etc/sysconfig/network-scripts/ifcfg-eth1

HWADDR=(put your eth1 port MAC address here)
IPADDR= ( ... on node2, etc)
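A more fully filled-out ifcfg file might look like this (a sketch; the
MAC address and IP scheme below are made-up placeholders, to be
replaced with your own):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 on node1 (hypothetical)
DEVICE=eth0
BOOTPROTO=static
HWADDR=00:11:22:33:44:55   # MAC address of node1's eth0 port
IPADDR=192.168.1.1         # node1's address; 192.168.1.2 on node2, etc.
NETMASK=255.255.255.0
ONBOOT=yes
```

After editing, bring the interface up with "ifup eth0" (or restart the
network service) and verify with ifconfig and ping.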

5) To launch the OpenMPI program "my_prog"
over the net2 (i.e. "eth1") network with, say, 8 processes, do:

mpiexec --mca btl_tcp_if_include eth1 -n 8 my_prog

(Good if your (eth0) network is already used for I/O,
control, etc.)

To be more aggressive, and use both networks ("eth0" and "eth1"), do:

mpiexec --mca btl_tcp_if_include eth0,eth1 -n 8 my_prog
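Equivalently, you can tell OpenMPI which interfaces *not* to use: the
btl_tcp_if_exclude parameter takes the complementary list. Note that
when you use the exclude form you normally have to list the loopback
interface explicitly, since OpenMPI excludes it by default:

```shell
# Same effect as including eth0,eth1 on a three-NIC node:
# exclude loopback and the third port instead.
mpiexec --mca btl_tcp_if_exclude lo,eth2 -n 8 my_prog
```

Use one form or the other; btl_tcp_if_include and btl_tcp_if_exclude
are mutually exclusive.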


Works for me.
I hope it helps!

Gus Correa
PS - More answers below.

Gustavo J. Ponce Correa, PhD - Email: gus_at_[hidden]
Lamont-Doherty Earth Observatory - Columbia University
P.O. Box 1000 [61 Route 9W] - Palisades, NY, 10964-8000 - USA
Olivier Marsden wrote:
> Hello,
> I am configuring a cluster with multiple nics for use with open mpi.
> I have not found very much information on the best way of setting up
> my network for open mpi. At the moment I have a pretty standard setup
> with a single hostname and single ip address for each node.
> Could someone advise me on the following points?
> - for each node, should I have the second ip on the same subnet as the 
> first, or not ?
No, use separate subnets.
> - does openmpi need separate hostnames for each ip?
No, same hostname, but different subnets and different IPs for each port 
on a given host.
> If there is a webpage describing how to configure such a network for 
> the best, that
> would be great.
Yes, to some extent.
Look at the OpenMPI FAQ.
> Many thanks,
> Olivier Marsden
> _______________________________________________
> users mailing list
> users_at_[hidden]