On Oct 1, 2010, at 3:24 PM, Gijsbert Wiesenekker wrote:
> I have a large array that is shared between two processes. One process updates array elements randomly, while the other process reads array elements randomly. Most of the time these writes and reads do not overlap.
> The current version of the code uses Linux shared memory with NSEMS semaphores. When array element i has to be read or updated, semaphore (i % NSEMS) is used. If NSEMS = 1, the entire array is locked, which leads to unnecessary waits because the reads and writes mostly do not overlap. Performance increases as NSEMS increases and flattens out at NSEMS = 32, at which point the code runs twice as fast as with NSEMS = 1.
> I want to change the code to use Open MPI RMA, but MPI_Win_lock locks the entire window, which is similar to NSEMS = 1. Is there a way to have more granular locks?
The MPI standard defines MPI_WIN_LOCK as protecting the entire window, so the short answer to your question is no. Depending on your application, it may be possible to create multiple windows over independent pieces of the data to get the behavior you want, but that does seem icky.
Brian W. Barrett
Dept. 1423: Scalable System Software
Sandia National Laboratories