Subject: [OMPI users] MPI_Send and MPI_Recv not working
From: ankur pachauri (ankurpachauri_at_[hidden])
Date: 2009-10-10 04:39:05


I have Open MPI 1.3.3 installed on my Linux Fedora 10 systems. I have a
cluster of two nodes, node0 (ip 10.1.7.125) and node1 (ip 10.1.7.138);
passwordless ssh is set up between them and a directory is NFS mounted.

When I run a simple test code without any MPI_Send or MPI_Recv, it works
for any number of processes with the command
mpirun -np 2 --hostfile host a.out
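The hostfile ("host" in the command above) just lists the two nodes; a
minimal sketch of what it contains (the slot counts here are only an
example and may differ):
--------------------------------------------------------------------------
# hostfile "host": one line per node; slots = processes to start per node
node0 slots=2
node1 slots=2
--------------------------------------------------------------------------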
--------------------------------------------------------------------------
  #include "mpi.h"
   #include <stdio.h>

   int main(argc,argv)
   int argc;
   char *argv[]; {
   int numtasks, rank, rc;
   int x;
   rc = MPI_Init(&argc,&argv);
   if (rc != MPI_SUCCESS) {
     printf ("Error starting MPI program. Terminating.\n");
     MPI_Abort(MPI_COMM_WORLD, rc);
     }

   MPI_Comm_size(MPI_COMM_WORLD,&numtasks);
   MPI_Comm_rank(MPI_COMM_WORLD,&rank);
   printf ("\nNumber of tasks= %d \t My rank= %d", numtasks,rank);

   /******* do some work *******/

   if(rank == 0)
    {
        printf("\t This is primary");
        x = 9;
        }
    else
        x = 1;
   printf("\t%d\n",x);
   MPI_Finalize();
   }
--------------------------------------------------------------------------
But when I run another code, one that uses MPI_Send and MPI_Recv,
it gives the following error:

[node0][[17948,1],0][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:638:mca_btl_tcp_endpoint_complete_connect]
connect() to 10.1.7.138 failed: No route to host (113)
^Cmpirun: killing job...

mpirun was unable to cleanly terminate the daemons on the nodes shown
below. Additional manual cleanup may be required - please refer to
the "orte-clean" tool for assistance.

    node1
--------------------------------------------------------------------------
#include "mpi.h"
#include "string.h"
#include "stdio.h"
main( argc, argv )
int argc;
char **argv;
{
    char message[20];
    int myrank;
    MPI_Status status;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &myrank );
    if (myrank == 0) /* code for process zero */
    {
        strcpy(message,"Hello, there");
        MPI_Send(message, strlen(message)+1, MPI_CHAR, 1, 99,
MPI_COMM_WORLD);
    }
    else if (myrank == 1) /* code for process one */
    {
        MPI_Recv(message, 20, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
        printf("received :%s:\n", message);
    }
    MPI_Finalize();
}
--------------------------------------------------------------------------
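A note on the error above: the failing call at btl_tcp_endpoint.c:638 is
an ordinary TCP connect(), so raw reachability between the nodes can be
tested outside of MPI with a standalone sketch like the one below. The
port 5000 is arbitrary (the real BTL port is chosen dynamically) and the
test assumes something is listening on that port on node1; if a plain
connect() also reports "No route to host", the problem is in the
network/firewall setup rather than in the MPI code itself.
--------------------------------------------------------------------------
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Same kind of TCP connect() the tcp BTL performs between nodes.
       Port 5000 is an arbitrary stand-in; the real BTL port is dynamic. */
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(5000);
    inet_pton(AF_INET, "10.1.7.138", &addr.sin_addr);   /* node1 */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        perror("connect to 10.1.7.138"); /* EHOSTUNREACH -> "No route to host" */
    else
        printf("connected ok\n");

    close(fd);
    return 0;
}
--------------------------------------------------------------------------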

Please help.

Regards,

-- 
Ankur Pachauri.
Research Scholar,
software engineering.
Department of Mathematics
Dayalbagh Educational Institute
Dayalbagh,
AGRA