
Subject: [OMPI users] Test OpenMPI on a cluster
From: Tim (timlee126_at_[hidden])
Date: 2010-01-30 21:45:57


Hi,
      
I am learning MPI on a cluster. Here is a simple example. I expected the output to show responses from different nodes, but they all report from the same node, node062. I am wondering why that is, and how I can actually get reports from different nodes, to show that MPI really distributes processes across the nodes. Thanks and regards!
      
ex1.c
      
    /* test of MPI */
    #include "mpi.h"
    #include <stdio.h>
    #include <string.h>
      
    int main(int argc, char **argv)
    {
    char idstr[32]; char buff[128];
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int numprocs, myid, i, namelen;
    MPI_Status stat;
      
    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
    MPI_Get_processor_name(processor_name, &namelen);
      
    if(myid == 0)
    {
      printf("WE have %d processors\n", numprocs);
      for(i=1;i<numprocs;i++)
      {
        sprintf(buff, "Hello %d", i);
        MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD);
      }
      for(i=1;i<numprocs;i++)
      {
        MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);
        printf("%s\n", buff);
      }
    }
    else
    {
      MPI_Recv(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
      sprintf(idstr, " Processor %d at node %s ", myid, processor_name);
      strcat(buff, idstr);
      strcat(buff, "reporting for duty\n");
      MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
    }
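
A simpler way to check placement, I suppose, would be to skip the send/receive round trip and have every rank print its own host name directly. A minimal sketch along the same lines (the file name is just illustrative, and the output lines may interleave across ranks):

    /* hypothetical where.c: each rank reports the node it runs on */
    #include "mpi.h"
    #include <stdio.h>

    int main(int argc, char **argv)
    {
      char name[MPI_MAX_PROCESSOR_NAME];
      int rank, numprocs, len;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
      MPI_Get_processor_name(name, &len);
      printf("rank %d of %d on %s\n", rank, numprocs, name);
      MPI_Finalize();
      return 0;
    }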
      
ex1.pbs
      
    #!/bin/sh
    #
    # This is an example PBS script, ex1.pbs
    #
    # These directives set up the PBS environment for the job:
    #PBS -N ex1
    #PBS -l nodes=10:ppn=1,walltime=1:10:00
    #PBS -q dque

    # export OMP_NUM_THREADS=4

    mpirun -np 10 /home/tim/courses/MPI/examples/ex1
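
One thing I am not sure about: whether mpirun here picks up the PBS allocation at all. If this Open MPI build lacks Torque/tm support, I believe the node list that PBS writes for the job has to be passed explicitly ($PBS_NODEFILE is set by PBS, and -machinefile is a standard mpirun option). A sketch, assuming that is the case:

     # assumption: Open MPI built without tm (Torque/PBS) integration,
     # so the allocated node list is handed to mpirun explicitly
     mpirun -np 10 -machinefile $PBS_NODEFILE /home/tim/courses/MPI/examples/ex1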
      
Compile and run:

    [tim@user1 examples]$ mpicc ./ex1.c -o ex1
    [tim@user1 examples]$ qsub ex1.pbs
    35540.mgt
    [tim@user1 examples]$ nano ex1.o35540
    ----------------------------------------
    Begin PBS Prologue Sat Jan 30 21:28:03 EST 2010 1264904883
    Job ID: 35540.mgt
    Username: tim
    Group: Brown
    Nodes: node062 node063 node169 node170 node171 node172 node174 node175
    node176 node177
    End PBS Prologue Sat Jan 30 21:28:03 EST 2010 1264904883
    ----------------------------------------
    WE have 10 processors
    Hello 1 Processor 1 at node node062 reporting for duty
      
    Hello 2 Processor 2 at node node062 reporting for duty
      
    Hello 3 Processor 3 at node node062 reporting for duty
      
    Hello 4 Processor 4 at node node062 reporting for duty
      
    Hello 5 Processor 5 at node node062 reporting for duty
      
    Hello 6 Processor 6 at node node062 reporting for duty
      
    Hello 7 Processor 7 at node node062 reporting for duty
      
    Hello 8 Processor 8 at node node062 reporting for duty
      
    Hello 9 Processor 9 at node node062 reporting for duty
      
    ----------------------------------------
    Begin PBS Epilogue Sat Jan 30 21:28:11 EST 2010 1264904891
    Job ID: 35540.mgt
    Username: tim
    Group: Brown
    Job Name: ex1
    Session: 15533
    Limits: neednodes=10:ppn=1,nodes=10:ppn=1,walltime=01:10:00
    Resources: cput=00:00:00,mem=420kb,vmem=8216kb,walltime=00:00:03
    Queue: dque
    Account:
    Nodes: node062 node063 node169 node170 node171 node172 node174 node175 node176
    node177
    Killing leftovers...
      
    End PBS Epilogue Sat Jan 30 21:28:11 EST 2010 1264904891
    ----------------------------------------
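
In case it matters for diagnosing this: I believe ompi_info can show whether this Open MPI build was compiled with Torque/PBS (tm) support, though I am only guessing that this is the relevant factor:

    # list tm-related components, if any, in this Open MPI build
    ompi_info | grep tm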