
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] MPI Persistent Communication Question
From: amjad ali (amjad11_at_[hidden])
Date: 2010-06-30 09:37:10

> and it's conceivable that you might have better performance with:
>
>   DO i = 1, N
>     call do_a_little_of_my_work()  ! no MPI progress is being made here
>     CALL MPI_TEST(...)             ! enough MPI progress is being made here
>                                    ! that the receiver has something to do
>   END DO
>
> Whether performance improves or not is not guaranteed by the MPI standard.
>
>> And the SECOND desire is to use Persistent communication for even better
>> speedup.
>
> Right. That's a separate issue.
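The quoted loop, fleshed out as a runnable-looking Fortran sketch (the names `do_a_little_of_my_work`, `request`, `flag`, and `status` are illustrative, not from the original exchange): the point is simply that calling MPI_TEST inside the compute loop gives the MPI library a chance to progress the outstanding nonblocking transfer.

```fortran
! Sketch only: overlap computation with MPI progress.
! Assumes a nonblocking receive/send was already posted into "request".
DO i = 1, n
   CALL do_a_little_of_my_work()               ! no MPI progress is made here
   CALL MPI_TEST(request, flag, status, ierr)  ! pokes the MPI progress engine;
                                               ! flag is .TRUE. once complete
END DO
```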

So I am focusing on persistent communication at this time. Based on your
suggestions, I developed the following:

The sending and receiving buffers, and the request array, are declared in a
global module, and their sizes are allocated in the main program. But the
following is not working: I get segmentation fault messages right at the
underlined blue line.
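A minimal sketch of that layout, with names I have invented here (`comm_data`, `sendbuf`, `recvbuf`, `requests`, `nneigh`, `bufsize` are not from the original post):

```fortran
MODULE comm_data                     ! hypothetical module name
   IMPLICIT NONE
   REAL,    ALLOCATABLE :: sendbuf(:,:), recvbuf(:,:)  ! one column per neighbor
   INTEGER, ALLOCATABLE :: requests(:)  ! recv + send request handles
END MODULE comm_data

! In the main program, once the neighbor count is known:
!   ALLOCATE(sendbuf(bufsize, nneigh), recvbuf(bufsize, nneigh))
!   ALLOCATE(requests(2*nneigh))
```

One design point worth stressing: the buffers and the request array must stay allocated for the entire run, because MPI_RECV_INIT/MPI_SEND_INIT store handles into `requests` that MPI_WAITALL dereferences on every iteration; an unallocated or undersized request array is a common source of exactly this kind of segmentation fault.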

Main program starts ------@@@@@@@@@@@@@@@@@@@@@@@
  CALL MPI_RECV_INIT for each neighboring process
  CALL MPI_SEND_INIT for each neighboring process

  Loop calling subroutine1 starts -------- (10000 times in the main program)
    Call subroutine1

    Subroutine1 starts ===================================
      Loop A starts here >>>>>>>>>>>>>>>>>>>> (three passes)
        Call subroutine2

        Subroutine2 starts ----------------------------
          Pick local data from array U into separate arrays for each
          neighboring process
          ------- perform work that can be done with local data
          CALL MPI_WAITALL( )
          ------- perform work using the received data
        Subroutine2 ends ----------------------------

        ------- perform work to update array U
      Loop A ends here >>>>>>>>>>>>>>>>>>>>
    Subroutine1 ends ====================================

  Loop calling subroutine1 ends ------------ (10000 times in the main program)

  CALL MPI_Request_free( )
Main program ends ------@@@@@@@@@@@@@@@@@@@@@@@
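For reference, here is one way the outline above could be fleshed out in Fortran; all names (`neighbor`, `tag`, `statuses`, etc.) are my own invention, not from the post. One detail from the MPI standard is worth flagging: a persistent request created by MPI_SEND_INIT/MPI_RECV_INIT is inactive until it is (re)started, and it becomes inactive again after each completion, so every communication round needs MPI_STARTALL (or MPI_START per request) before the matching MPI_WAITALL. The outline above shows no such start call.

```fortran
! Hypothetical sketch. Assumed declarations:
!   INTEGER :: requests(2*nneigh), statuses(MPI_STATUS_SIZE, 2*nneigh), ierr

DO k = 1, nneigh
   CALL MPI_RECV_INIT(recvbuf(1,k), bufsize, MPI_REAL, neighbor(k), tag, &
                      MPI_COMM_WORLD, requests(k), ierr)
   CALL MPI_SEND_INIT(sendbuf(1,k), bufsize, MPI_REAL, neighbor(k), tag, &
                      MPI_COMM_WORLD, requests(nneigh+k), ierr)
END DO

DO iter = 1, 10000                               ! the 10000-iteration loop
   ! ... pack sendbuf for each neighbor ...
   CALL MPI_STARTALL(2*nneigh, requests, ierr)   ! (re)activate all requests
   ! ... work that needs only local data ...
   CALL MPI_WAITALL(2*nneigh, requests, statuses, ierr)
   ! ... work that uses the received data; update U ...
END DO

DO k = 1, 2*nneigh
   CALL MPI_REQUEST_FREE(requests(k), ierr)
END DO
```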

How should I tackle all this?