
Open MPI User's Mailing List Archives


From: Marcin Skoczylas (Marcin.Skoczylas_at_[hidden])
Date: 2007-06-12 06:44:23


Hi,

Administrators changed our cluster network topology, and now it has
narrowly-scoped netmasks for the internal and external parts of the
cluster. Of course, my software stopped working, giving an error during
MPI_Init, so I checked the FAQ:

How does Open MPI know which TCP addresses are routable to each other?

These rules do /not/ cover the following cases:

    * Running an MPI job that spans public and private networks
    * Running an MPI job that spans a bunch of private networks with
      narrowly-scoped netmasks, such as nodes that have IP addresses
      192.168.1.10 and 192.168.2.10 with netmasks of 255.255.255.0
      (i.e., the network fabric makes these two nodes be routable to
      each other, even though the netmask implies that they are on
      different subnets).

So it seems they changed the network topology to an unsupported
configuration here... is there any workaround for this situation?
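
For reference, here is a minimal standalone C sketch (my own
illustration, not Open MPI's actual code) of the subnet comparison the
FAQ describes: AND each address with its netmask and compare the
results. For the two example addresses quoted above it reports
different subnets, which is presumably why the default rule refuses to
pair them even though our fabric routes between them just fine:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
    /* The two example addresses and the netmask from the FAQ. */
    struct in_addr a, b, mask;
    inet_pton(AF_INET, "192.168.1.10", &a);
    inet_pton(AF_INET, "192.168.2.10", &b);
    inet_pton(AF_INET, "255.255.255.0", &mask);

    /* Network address = address AND netmask. */
    uint32_t net_a = a.s_addr & mask.s_addr;
    uint32_t net_b = b.s_addr & mask.s_addr;

    struct in_addr na = { .s_addr = net_a };
    struct in_addr nb = { .s_addr = net_b };
    char buf_a[INET_ADDRSTRLEN], buf_b[INET_ADDRSTRLEN];

    /* Prints: 192.168.1.0 vs 192.168.2.0 -> different subnets */
    printf("%s vs %s -> %s\n",
           inet_ntop(AF_INET, &na, buf_a, sizeof buf_a),
           inet_ntop(AF_INET, &nb, buf_b, sizeof buf_b),
           net_a == net_b ? "same subnet" : "different subnets");
    return 0;
}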

          greetings, Marcin