On 28/02/2011 22:30, Jeff Squyres wrote:
> This is really a pretty terrible statement we (the Linux community) are making: it's all about manycore these days, and a direct consequence of that is that it's all about NUMA. So you should bind your memory.
> But that may not be enough. Binding memory to a location is not binding -- in the sense that it can change under certain circumstances.
> The soundbite version of this is: "binding != binding." Terrible. :-(
> In many cases, setting a memory policy is probably sufficient to be "sure enough" that your memory will be local. But here's a class of cases where it's not: a multi-threaded application where threads communicate by having a message buffer physically close to a "reader" thread -- the "writer" thread may be far away. A typical scenario is that the writer writes infrequently but the reader polls frequently; the memory is local to the reader, so the frequent polling stays cheap.
> But if the communication buffer gets swapped out and the writer happens to be the one that touches the memory to get it swapped back in, the message buffer might end up being local to the *writer*, not the *reader*.
> For cases like this, it sounds like the only way to be sure that the buffer stays where you want it is to actually pin the memory to the memory location close to the receiver.
> So: binding + pinning = binding (as long as you can ensure that the binding + pinning was atomic!).
If the application is actually swapping, do you really care about NUMA
locality? The overhead of accessing remote NUMA memory is likely
negligible compared to the cost of swapping.
Make sure your program has enough memory first. Then you can look at
fixing these misplaced pages.
By the way, calling hwloc_set_area_membind() with HWLOC_MEMBIND_MIGRATE
from time to time may move your pages back to where they belong.