Subject: [OMPI users] random problems with a ring communication example
From: christophe petit (christophe.petit09_at_[hidden])
Date: 2014-03-15 17:43:29


Hello,

I followed a simple MPI example that performs a ring communication.

Here is a figure that illustrates this example with 7 processes:

http://i.imgur.com/Wrd6acv.png

Here is the code:

--------------------------------------------------------------------------------------------------------------------------
 program ring

 implicit none
 include 'mpif.h'

 integer, dimension( MPI_STATUS_SIZE ) :: status
 integer, parameter :: tag=100
 integer :: nb_procs, rank, value, &
            num_proc_previous,num_proc_next,code

 call MPI_INIT (code)
 call MPI_COMM_SIZE ( MPI_COMM_WORLD ,nb_procs,code)
 call MPI_COMM_RANK ( MPI_COMM_WORLD ,rank,code)

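 ! ring neighbours: "next" wraps the last rank back to 0 and
 ! "previous" wraps rank 0 back to the last rank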
 num_proc_next=mod(rank+1,nb_procs)
 num_proc_previous=mod(nb_procs+rank-1,nb_procs)

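 ! rank 0 sends first and receives last; every other rank first
 ! receives from its predecessor, then forwards to its successor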
 if (rank == 0) then
    call MPI_SEND (1000,1, MPI_INTEGER ,num_proc_next,tag, &
                   MPI_COMM_WORLD ,code)
    call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
                   MPI_COMM_WORLD ,status,code)
 else
    call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
                   MPI_COMM_WORLD ,status,code)
    call MPI_SEND (rank+1000,1, MPI_INTEGER ,num_proc_next,tag, &
                   MPI_COMM_WORLD ,code)
 end if
 print *,'Me, process ',rank,', I have received ',value, &
         ' from process ',num_proc_previous

 call MPI_FINALIZE (code)
end program ring

--------------------------------------------------------------------------------------------------------------------------

When I run it, I expect to always get:

 Me, process 1 , I have received 1000 from process 0
 Me, process 2 , I have received 1001 from process 1
 Me, process 3 , I have received 1002 from process 2
 Me, process 4 , I have received 1003 from process 3
 Me, process 5 , I have received 1004 from process 4
 Me, process 6 , I have received 1005 from process 5
 Me, process 0 , I have received 1006 from process 6

But sometimes the reception by process 0 from process 6 is not the last one
printed, like this:

 Me, process 1 , I have received 1000 from process 0
 Me, process 2 , I have received 1001 from process 1
 Me, process 3 , I have received 1002 from process 2
 Me, process 4 , I have received 1003 from process 3
 Me, process 5 , I have received 1004 from process 4
 Me, process 0 , I have received 1006 from process 6
 Me, process 6 , I have received 1005 from process 5

where the reception by process 0 from process 6 is printed before the
reception by process 6 from process 5,

or like this result:

 Me, process 1 , I have received 1000 from process 0
 Me, process 2 , I have received 1001 from process 1
 Me, process 3 , I have received 1002 from process 2
 Me, process 4 , I have received 1003 from process 3
 Me, process 0 , I have received 1006 from process 6
 Me, process 5 , I have received 1004 from process 4
 Me, process 6 , I have received 1005 from process 5

where the line printed by process 0 appears between those of processes 4 and 5.

How can this strange result be explained? I thought that standard MPI_SEND and
MPI_RECV calls were blocking by default, but from this output they seem not to
be blocking.
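As a side note, here is a sketch of what I mean by a deterministic listing
(this is only an untested sketch I wrote while thinking about the problem, not
the code I ran; the program name ring_gather and the use of MPI_GATHER are my
own additions): rank 0 gathers the value received by every process and prints
the whole listing itself, so the order of the lines cannot depend on how stdout
from the different processes is interleaved.

--------------------------------------------------------------------------------------------------------------------------
 program ring_gather
 ! sketch only: same ring exchange as above, but rank 0 gathers all
 ! received values and prints them itself, so the listing order does
 ! not depend on how the processes' stdout is interleaved
 implicit none
 include 'mpif.h'

 integer, dimension( MPI_STATUS_SIZE ) :: status
 integer, parameter :: tag=100
 integer :: nb_procs, rank, value, i, code
 integer :: num_proc_previous, num_proc_next
 integer, allocatable :: all_values(:)

 call MPI_INIT (code)
 call MPI_COMM_SIZE ( MPI_COMM_WORLD ,nb_procs,code)
 call MPI_COMM_RANK ( MPI_COMM_WORLD ,rank,code)

 num_proc_next=mod(rank+1,nb_procs)
 num_proc_previous=mod(nb_procs+rank-1,nb_procs)

 ! ring exchange, identical to the original program
 if (rank == 0) then
    call MPI_SEND (1000,1, MPI_INTEGER ,num_proc_next,tag, &
                   MPI_COMM_WORLD ,code)
    call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
                   MPI_COMM_WORLD ,status,code)
 else
    call MPI_RECV (value,1, MPI_INTEGER ,num_proc_previous,tag, &
                   MPI_COMM_WORLD ,status,code)
    call MPI_SEND (rank+1000,1, MPI_INTEGER ,num_proc_next,tag, &
                   MPI_COMM_WORLD ,code)
 end if

 ! collect each rank's received value on rank 0 and print there
 allocate(all_values(nb_procs))
 call MPI_GATHER (value,1, MPI_INTEGER ,all_values,1, MPI_INTEGER , &
                  0, MPI_COMM_WORLD ,code)
 if (rank == 0) then
    do i = 1, nb_procs
       print *,'Process ',i-1,' has received ',all_values(i)
    end do
 end if
 deallocate(all_values)

 call MPI_FINALIZE (code)
 end program ring_gather
--------------------------------------------------------------------------------------------------------------------------

With that variant the listing should always come out in rank order, whatever
the runtime does when forwarding each process's output.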

I tested the original example on Debian 7.0 with the open-mpi package.

Thanks for your help