Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] gfortran, gcc4.2, openmpi 1.3.3, fortran compile errors
From: Jayanta Roy (jay.roys_at_[hidden])
Date: 2009-08-25 12:10:55


I am using Open MPI (version 1.2.2) for MPI data transfer with non-blocking
calls such as MPI_Isend and MPI_Irecv. I use "--mca
btl_tcp_if_include eth0,eth1" so that both Ethernet links are used for data
transfer among the 48 nodes. I have now added eth2 and eth3 links on the 32
compute nodes. My aim is to exchange the high-speed data among those 32
compute nodes through eth2 and eth3, but I cannot simply add these
interfaces to the "mca" option, because the remaining 16 nodes do not have
them. In MPI/Open MPI, can one specify an explicit routing table within a
subset of nodes? Then I could add hostnames for the new interfaces to
/etc/hosts and list those hosts in the MPI hostfile.
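For concreteness, the current launch described above would look roughly like the sketch below. The hostfile name and application binary are placeholders I have invented for illustration; only the --mca option comes from the original post.

```shell
# Hypothetical launch across all 48 nodes, restricting the TCP BTL
# to eth0 and eth1 as described in the post.
# "my_hostfile" and "./my_app" are placeholder names, not real files.
mpirun -np 48 --hostfile my_hostfile \
       --mca btl_tcp_if_include eth0,eth1 \
       ./my_app
```

Because btl_tcp_if_include is applied on every node in the job, listing eth2,eth3 here would fail on the 16 nodes that lack those interfaces, which is the problem raised above.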