
Subject: [OMPI users] MPI_CANCEL
From: slimtimmy_at_[hidden]
Date: 2008-04-15 15:14:39


I encountered some problems when using MPI_CANCEL. I call
Request::Cancel followed by Request::Wait to ensure that the request has
been cancelled. However, Request::Wait does not return when I send bigger
messages. The following code should reproduce this behaviour:

#include "mpi.h"
#include <iostream>

using namespace std;

enum Tags
{
     TAG_UNMATCHED1,
     TAG_UNMATCHED2
};

int main()
{
     MPI::Init();

     const int rank = MPI::COMM_WORLD.Get_rank();
     const int numProcesses = MPI::COMM_WORLD.Get_size();
     const int masterRank = 0;

     if (rank == masterRank)
     {
         cout << "master" << endl;
         const int numSlaves = numProcesses - 1;
         for(int i = 0; i < numSlaves; ++i)
         {
             const int slaveRank = i + 1;
             int buffer;
             MPI::COMM_WORLD.Recv(&buffer, 1, MPI::INT, slaveRank,
                 TAG_UNMATCHED1);
         }

     }
     else
     {
         cout << "slave " << rank << endl;
         //const int size = 1;
         const int size = 10000;
         int buffer[size];
         MPI::Request request = MPI::COMM_WORLD.Isend(buffer, size,
             MPI::INT, masterRank, TAG_UNMATCHED2);

         cout << "slave (" << rank << "): sent data" << endl;

         request.Cancel();

         cout << "slave (" << rank << "): cancel issued" << endl;

         request.Wait();

         cout << "slave (" << rank << "): finished" << endl;
     }

     MPI::Finalize();

     return 0;
}

If I set size to 1, everything works as expected: the slave process
finishes execution. However, if I use a bigger buffer (in this case
10000 ints), the Wait blocks forever. This is the output of the program
when run with two processes:

master
slave 1
slave (1): sent data
slave (1): cancel issued

Have I misinterpreted the standard? Or does Request::Wait block until
the message is delivered?