Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] shared-memory allocations
From: Patrick Geoffray (patrick_at_[hidden])
Date: 2008-12-13 16:35:42


Richard Graham wrote:
> Yes - it is polling volatile memory, so has to load from memory on every
> read.

Actually, it will poll in cache, and only go out to memory when the
cache-coherency protocol invalidates the cache line. The volatile
qualifier only prevents compiler optimizations; it does not force a
load from main memory on every read.
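
To illustrate the point, here is a minimal, hypothetical sketch (not
Open MPI's actual code) of a reader spinning on a volatile flag in a
shared-memory slot:

  #include <stdint.h>

  struct sm_slot {
      volatile uint32_t ready;   /* set by the writer when the payload is valid */
      char payload[60];
  };

  static void wait_for_data(struct sm_slot *slot)
  {
      /* volatile makes the compiler re-load 'ready' on every iteration,
       * but the loads are served from the local cache; the line is only
       * re-fetched after the writer's store invalidates this core's copy. */
      while (slot->ready == 0)
          ;   /* spin; a pause hint could be inserted here */
  }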

On NUMA machines it does not matter much whether the pages sit closer
to the reader or to the writer, as long as they are distributed evenly
across all sockets (i.e. the choice is made consistently). Cache
prefetching is slightly more efficient on the local socket, so placing
the pages closer to the reader may be a bit better.
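
For what it's worth, a rough sketch of the "closer to the reader" idea
using libnuma (this is illustrative only, not what Open MPI does; the
segment name and reader_node are made up, and you link with -lnuma,
possibly -lrt for shm_open):

  #include <fcntl.h>
  #include <numa.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static void *map_segment_on_node(const char *name, size_t len, int reader_node)
  {
      int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
      if (fd < 0)
          return NULL;
      if (ftruncate(fd, (off_t)len) != 0) {
          close(fd);
          return NULL;
      }

      void *seg = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      close(fd);
      if (seg == MAP_FAILED)
          return NULL;

      if (numa_available() >= 0) {
          numa_tonode_memory(seg, len, reader_node);  /* bind pages to the reader's socket */
          memset(seg, 0, len);                        /* touch the pages to commit placement */
      }
      return seg;
  }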

Patrick