
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI Persistent Communication Question
From: Eugene Loh (eugene.loh_at_[hidden])
Date: 2010-06-28 12:22:56


amjad ali wrote:
Hi Jeff S.
Thank you very much for your reply.
I am still a little confused. Please guide me.

 The idea is to do this:

   MPI_Recv_init()   /* once per request, before the loop */
   MPI_Send_init()
   for (i = 0; i < 1000; ++i) {
       MPI_Startall()
       /* do whatever */
       MPI_Waitall()
   }
   for (i = 0; i < num_requests; ++i) {
       MPI_Request_free()   /* free each persistent request once */
   }

So in your inner loop, you just call MPI_Startall() and a corresponding MPI_Test* / MPI_Wait* call to complete those requests.

The idea is that the MPI_*_init() functions do some one-time setup on the requests and then you just start and complete those same requests over and over and over.  When you're done, you free them.

Actually, in my code what I was doing is:
Okay, something like this:

program main
  do i = 1, 10000
    call sub1()
  end do
end

subroutine sub1()
  do loopa = 1, 3
    call sub2()
  end do
end

subroutine sub2()
  u = ...
  call MPI_Irecv() ! for each neighbor
  call MPI_Isend() ! for each neighbor
  ! perform work that could be done with local data
  call MPI_Waitall()
  ! perform work using the received data
end

I find that "pseudocode" easier to read and understand.
I assume that the above setup will also overlap computation with communication (hiding communication behind computation).
A little, but MPI does not guarantee that a non-blocking operation will make progress if you don't make any MPI calls.  So, another alternative is to break the "perform work that could be done with local data" part up into smaller pieces and insert MPI_Test() calls between them.  You'd have to play around with this to see whether there is any performance improvement.
Now my intention is to use persistent communication to get more efficiency. I am confused about how to use your proposed model for my work. Please advise.
You would break the MPI_Irecv and MPI_Isend calls up into two parts:  MPI_Send_init and MPI_Recv_init in the first part and MPI_Start[all] in the second part.  The first part needs to be moved out of the subroutine... at least outside of the loop in sub1() and maybe even outside the 10000-iteration loop in the main program.  (There would also be MPI_Request_free calls that would similarly have to be moved out.)  If the overheads are small compared to the other work you're doing per message, the savings would be small.  (And, I'm guessing this is the case for you.)  Further, the code refactoring might not be simple.  So, persistent communications *might* not be a fruitful optimization strategy for you.  Just a warning.