The only limit I can think of is the available physical memory on each
NUMA node (numactl -H will tell you how much memory each NUMA node
still has available).
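For reference, the same per-node free-memory figures that numactl -H summarizes can be read directly from sysfs on Linux, which works even when numactl is not installed (a minimal sketch; node numbering depends on the machine):

```shell
# Per-NUMA-node free memory straight from the kernel.
# Each node's meminfo line looks like: "Node 0 MemFree:  12345678 kB"
grep MemFree /sys/devices/system/node/node*/meminfo
```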
malloc usually only fails (by returning NULL) when there is no
*virtual* memory left, which is a different matter. Unless you
allocate tons of terabytes of virtual memory, this shouldn't happen
easily.
On 05/09/2012 14:27, Gabriele Fatigati wrote:
Dear Hwloc users and developers,
I'm using hwloc 1.4.1 in a multithreaded program on a Linux
platform, where each thread binds many non-contiguous pieces of a
big matrix, calling very intensively
hwloc_set_area_membind_nodeset(topology, punt+offset, len,
nodeset, HWLOC_MEMBIND_BIND, HWLOC_MEMBIND_THREAD |
Binding seems to work well, since the code returned by the
function is 0 for every call.
The problem is that after binding, a simple small new
malloc fails, without any apparent reason.
Disabling memory binding, the allocations work well. Is
there any known problem when hwloc_set_area_membind_nodeset is
Is there some operating system limit on memory pages
Thanks in advance.
Ing. Gabriele Fatigati
SuperComputing Applications and Innovation Department
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
+39 051 6171722
g.fatigati [AT] cineca.it
hwloc-users mailing list