
Subject: [OMPI users] MPI_Waitall strange behaviour on remote nodes
From: Richard Bardwell (richard_at_[hidden])
Date: 2012-02-14 10:56:23

While trying to debug an MPI_Waitall hang on a remote
node, I wrote the simple test code below.

If we run the code with 2 processes on the local
machine, we send the number 1 and receive the number 1 back.

If we run the same code across a local node and a remote node,
we send the number 1 but get 32767 back. Any ideas???
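
For reference, the two cases can be launched with Open MPI roughly as
follows (the binary name waitall_test and the hostfile name hosts are
just placeholders):

   mpirun -np 2 ./waitall_test
   mpirun -np 2 --hostfile hosts ./waitall_test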

#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define PCPU 8

int rank, nproc;

int main(int argc, char *argv[])
{
   int m;
   int i1, hrecv;
   int tag = 201;
   MPI_Request request[PCPU];
   MPI_Status status[PCPU];

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &nproc);

   if (rank == 0) {
      /* rank 0: post one non-blocking send of the value 1 to each
         other rank, then wait for all nproc-1 sends to complete */
      i1 = 1;
      for (m = 1; m < nproc; m++)
         MPI_Isend(&i1, 1, MPI_INT, m, tag + m, MPI_COMM_WORLD, &request[m - 1]);
      MPI_Waitall(nproc - 1, request, status);
   } else {
      /* every other rank: post the matching non-blocking receive from
         rank 0, wait on that single request, then print the value */
      MPI_Irecv(&hrecv, 1, MPI_INT, 0, tag + rank, MPI_COMM_WORLD, &request[rank - 1]);
      MPI_Waitall(1, &request[rank - 1], &status[rank - 1]);
      printf("R%d: recvd %d\n", rank, hrecv);
   }

   MPI_Finalize();
   return 0;
}
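
One general note on the MPI_Waitall contract that may be relevant here:
every entry of the request array handed to MPI_Waitall must be either an
active request or MPI_REQUEST_NULL. When the array is sized PCPU but
fewer requests are actually posted, a defensive initialization such as
the sketch below (an assumption, not part of the posted code) rules out
waiting on indeterminate handles:

   /* sketch: mark unused slots as null requests, which MPI_Waitall
      treats as already complete */
   for (m = 0; m < PCPU; m++)
      request[m] = MPI_REQUEST_NULL;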