On Feb 28, 2011, at 4:18 PM, Brice Goglin wrote:
> Ah good point! So Jeff has to hope that pages of different processes
> won't be highly mixed in the swap partition, good luck :)
This is really a pretty terrible statement we (the Linux community) are making: it's all about manycore these days, and a direct consequence of that is that it's all about NUMA. So you should bind your memory.
But that may not be enough. Binding memory to a location is not really binding -- it only establishes a placement policy, and the actual placement can still change under certain circumstances.
The soundbite version of this is: "binding != binding." Terrible. :-(
In many cases, setting a memory policy is probably sufficient to be "sure enough" that your memory will be local. But here's a class of cases where it's not: a multi-threaded application where threads communicate through a message buffer placed physically close to a "reader" thread -- the "writer" thread may be far away. A typical scenario is that the writer writes infrequently but the reader polls frequently; since the memory is local to the reader, the frequent polling is cheap.
But if the communication buffer gets swapped out and the writer happens to be the one that touches the memory to get it swapped back in, the message buffer might end up being local to the *writer*, not the *reader*.
For cases like this, it sounds like the only way to be sure that the buffer stays where you want it is to actually pin the memory -- lock it into physical RAM so it can never be swapped out -- at the location close to the receiver.
So: binding + pinning = binding (as long as you can ensure that the bind-touch-pin sequence completes before the pages have any chance of being swapped out!).