
From: John Robinson (jr_at_[hidden])
Date: 2006-02-28 09:46:48


Your MPI_Recv is trying to receive from the slave (1), not the master (0).
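For reference, here is a minimal, self-contained sketch of a matched send/receive pair between rank 0 and rank 1. The tag value and the stripped-down structure are just assumptions for illustration, not your actual program; the point is only that the source argument of MPI_Recv has to name the rank that actually performs the matching MPI_Send:

#include <mpi.h>
#include <iostream>

/* Sketch only: rank 0 sends one int to rank 1, rank 1 receives it.
   TAG is an arbitrary value chosen for this example. */
static const int TAG = 1;

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int myrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    if (myrank == 0) {
        int work = 100;
        /* Master: send to rank 1. */
        MPI_Send(&work, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD);
    } else if (myrank == 1) {
        int work;
        MPI_Status status;
        /* Worker: receive from rank 0, the rank that sends. */
        MPI_Recv(&work, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &status);
        std::cout << "worker received " << work << std::endl;
    }

    MPI_Finalize();
    return 0;
}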

Jose Pedro Garcia Mahedero wrote:
> Hello everybody.
>
> I'm new to MPI and I'm having some problems while running a simple
> ping-pong program on more than one node.
>
> 1.- I followed all the instructions and installed Open MPI without
> problems on a Beowulf cluster.
> 2.- The cluster is working OK and ssh keys are set up so there is no
> password prompting.
> 3.- mpiexec seems to run OK.
> 4.- Now I'm using just 2 nodes: I've tried a simple ping-pong
> application but my master only sends one request!!
> 5.- I reduced the problem by trying to send just two messages to the same
> node:
>
> /* #includes, using-directives and the WORKTAG/DIETAG constants are
>    defined earlier in the program (not shown) */
>
> int main(int argc, char **argv) {
>   int myrank;
>
>   /* Initialize MPI */
>   MPI_Init(&argc, &argv);
>
>   /* Find out my identity in the default communicator */
>   MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
>
>   if (myrank == 0) {
>     int work = 100;
>     int count = 0;
>     for (int i = 0; i < 10; i++) {
>       cout << "MASTER IS SLEEPING..." << endl;
>       sleep(3);
>       cout << "MASTER AWAKE WILL SEND[" << count++ << "]:" << work << endl;
>       MPI_Send(&work, 1, MPI_INT, 1, WORKTAG, MPI_COMM_WORLD);
>     }
>   } else {
>     int count = 0;
>     int work;
>     MPI_Status status;
>     while (true) {
>       MPI_Recv(&work, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
>       cout << "SLAVE[" << myrank << "] RECEIVED[" << count++ << "]:" << work << endl;
>       if (status.MPI_TAG == DIETAG) {
>         break;
>       }
>     } // while
>   }
>
>   MPI_Finalize();
>   return 0;
> }
>
>
> 6a.- RESULTS (if I put more than one machine in my mpihostsfile): my
> master sends the first message and my slave receives it perfectly, but
> my master doesn't send its second message:
>
>
>
> Here's my output
>
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[0]:100
> MASTER IS SLEEPING...
> SLAVE[1] RECEIVED[0]:100MPI_STATUS.MPI_ERROR:0
> MASTER AWAKE WILL SEND[1]:100
>
> 6b.- RESULTS (if I put ONLY 1 machine in my mpihostsfile), everything
> is OK until iteration 9!!!
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[0]:100
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[1]:100
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[2]:100
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[3]:100
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[4]:100
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[5]:100
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[6]:100
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[7]:100
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[8]:100
> MASTER IS SLEEPING...
> MASTER AWAKE WILL SEND[9]:100
> SLAVE[1] RECEIVED[0]:100MPI_STATUS.MPI_ERROR:0
> SLAVE[1] RECEIVED[1]:100MPI_STATUS.MPI_ERROR:0
> SLAVE[1] RECEIVED[2]:100MPI_STATUS.MPI_ERROR:0
> SLAVE[1] RECEIVED[3]:100MPI_STATUS.MPI_ERROR:0
> SLAVE[1] RECEIVED[4]:100MPI_STATUS.MPI_ERROR:0
> SLAVE[1] RECEIVED[5]:100MPI_STATUS.MPI_ERROR:0
> SLAVE[1] RECEIVED[6]:100MPI_STATUS.MPI_ERROR:0
> SLAVE[1] RECEIVED[7]:100MPI_STATUS.MPI_ERROR:0
> SLAVE[1] RECEIVED[8]:100MPI_STATUS.MPI_ERROR:0
> SLAVE[1] RECEIVED[9]:100MPI_STATUS.MPI_ERROR:0
> --------------------------------
>
> I know this is a lot of text, but I wanted to give as much detail as
> possible in my question. I've been searching the FAQ, but I still don't
> know what (and why) this is going on...
>
> Can anyone help, please :-)?
>
>