This page is part of a frozen web archive of this mailing list.
You can still navigate around this archive, but know that no new mails
have been added to it since July of 2016.
> and it's conceivable that you might have better performance with:
> CALL MPI_ISEND()
> DO I = 1, N
>    call do_a_little_of_my_work() ! no MPI progress is being made here
>    CALL MPI_TEST()               ! enough MPI progress is being made here
>                                  ! that the receiver has something to do
> END DO
> CALL MPI_WAIT()
> Whether performance improves or not is not guaranteed by the MPI standard.
> And the SECOND desire is to use Persistent communication for even better
> performance.
> Right. That's a separate issue.
So at this time I am focusing on persistent communication. Based on your
suggestions, I developed the following: the send and receive buffers and the
request array are declared in a global module, and they are allocated in the
main program. But the following is not working; I get segmentation fault
messages starting at the underlined (blue) line.
Main program starts ------@@@@@@@@@@@@@@@@@@@@@@@
    CALL MPI_RECV_INIT for each neighboring process
    CALL MPI_SEND_INIT for each neighboring process
    Loop calling subroutine1 -------------------- (10000 times in the main)
        Call subroutine1
            Loop A starts here >>>>>>>>>>>>>>>>>>>> (three passes)
                Pick local data from array U into separate arrays for each
                    neighboring process
                ------- perform work that could be done with local data
                CALL MPI_WAITALL( )
                ------- perform work using the received data
                ------- perform work to update array U
            Loop A ends here >>>>>>>>>>>>>>>>>>>>
    Loop calling subroutine1 ends ------------ (10000 times in the main)
    CALL MPI_Request_free( )
Main program ends ------@@@@@@@@@@@@@@@@@@@@@@@
How should I tackle this?
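For reference, the outline above could be sketched as the following minimal persistent-communication skeleton. This is only an illustrative sketch, not the poster's actual code: it assumes a 1-D periodic decomposition with left/right neighbours and double-precision halo buffers, and all names (`persistent_demo`, `sendL`, `recvR`, etc.) are invented for the example. Note that each persistent request must be started with `MPI_STARTALL` (or `MPI_START`) on every pass before `MPI_WAITALL` is called, and the buffers passed to `MPI_SEND_INIT`/`MPI_RECV_INIT` must remain allocated (at the same address) for the entire lifetime of the requests; violating either of these is a common source of segmentation faults.

```fortran
program persistent_demo
  use mpi
  implicit none
  integer, parameter :: N = 1024            ! halo size (assumption)
  integer :: ierr, rank, nprocs, left, right, step, i
  integer :: requests(4), statuses(MPI_STATUS_SIZE, 4)
  double precision :: sendL(N), sendR(N), recvL(N), recvR(N)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  left  = mod(rank - 1 + nprocs, nprocs)    ! periodic neighbours (assumption)
  right = mod(rank + 1, nprocs)

  ! Set up persistent requests ONCE. The buffers bound here must stay
  ! allocated, at the same address, until the requests are freed.
  call MPI_RECV_INIT(recvL, N, MPI_DOUBLE_PRECISION, left,  0, &
                     MPI_COMM_WORLD, requests(1), ierr)
  call MPI_RECV_INIT(recvR, N, MPI_DOUBLE_PRECISION, right, 1, &
                     MPI_COMM_WORLD, requests(2), ierr)
  call MPI_SEND_INIT(sendL, N, MPI_DOUBLE_PRECISION, left,  1, &
                     MPI_COMM_WORLD, requests(3), ierr)
  call MPI_SEND_INIT(sendR, N, MPI_DOUBLE_PRECISION, right, 0, &
                     MPI_COMM_WORLD, requests(4), ierr)

  do step = 1, 10000
    ! ... fill sendL/sendR from array U here ...
    call MPI_STARTALL(4, requests, ierr)    ! must be started on EVERY pass;
                                            ! waiting on never-started requests
                                            ! is a common cause of crashes
    ! ... work that needs only local data ...
    call MPI_WAITALL(4, requests, statuses, ierr)
    ! ... work that uses recvL/recvR, then update U ...
  end do

  ! Free the persistent requests once, after the time loop.
  do i = 1, 4
    call MPI_REQUEST_FREE(requests(i), ierr)
  end do
  call MPI_FINALIZE(ierr)
end program persistent_demo
```

The key structural point is that `MPI_SEND_INIT`/`MPI_RECV_INIT` and `MPI_REQUEST_FREE` happen once, outside the 10000-iteration loop, while `MPI_STARTALL` and `MPI_WAITALL` happen inside it, once per pass.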