
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-02-12 12:27:06


On Feb 12, 2007, at 12:54 PM, Matteo Guglielmi wrote:

> This is the ifconfig output from the machine I use to submit the
> parallel job:

It looks like both of your nodes share an IP address:

> [root_at_lcbcpc02 ~]# ifconfig
> eth1 Link encap:Ethernet HWaddr 00:15:17:10:53:C9
> inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0
> [root_at_lcbcpc04 ~]# ifconfig
> eth1 Link encap:Ethernet HWaddr 00:15:17:10:53:75
> inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0

This will be problematic for more than just Open MPI if these two
interfaces are on the same network. The solution is to ensure that
all of your nodes have unique IP addresses.
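
For example, assuming eth1 on lcbcpc04 should stay on the same
192.168.0.x subnet, something along these lines would give it a unique
address (just a sketch; pick whichever address fits your subnet plan):

   # on lcbcpc04, as root -- example address only
   ifconfig eth1 192.168.0.4 netmask 255.255.255.0 up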

If these NICs are on different networks, then it's a valid network
configuration, but Open MPI (by default) will assume that they are
routable to each other. You can tell Open MPI not to use eth1 in
this case -- see these FAQ entries for details (a rough example
follows the links):

   http://www.open-mpi.org/faq/?category=tcp#tcp-multi-network
   http://www.open-mpi.org/faq/?category=tcp#tcp-selection
   http://www.open-mpi.org/faq/?category=tcp#tcp-routability
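
For example, assuming eth1 is the interface you want Open MPI to skip,
something like either of these should work (untested sketch; adjust the
interface names, process count, and application to your setup):

   # exclude the loopback and the problem interface from the TCP BTL
   mpirun --mca btl_tcp_if_exclude lo,eth1 -np 4 ./my_mpi_app

   # or, instead, list only the interface(s) you do want used
   mpirun --mca btl_tcp_if_include eth0 -np 4 ./my_mpi_app

Note that if you override btl_tcp_if_exclude, keep lo in the list --
the loopback interface is excluded by default, and overriding the
parameter replaces that default.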

-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems