Hello Gabriele,

The only limit I can think of is the available physical memory on each NUMA node (numactl -H will show how much memory is still free on each node).
malloc usually fails (returning NULL) only when there is no *virtual* memory left, which is a different thing. Unless you allocate tons of terabytes of virtual memory, that shouldn't happen easily.

Brice




On 05/09/2012 14:27, Gabriele Fatigati wrote:
Dear Hwloc users and developers,


I'm using hwloc 1.4.1 in a multithreaded program on a Linux platform, where each thread binds many non-contiguous pieces of a big matrix, making very intensive use of the hwloc_set_area_membind_nodeset function:

hwloc_set_area_membind_nodeset(topology, punt+offset, len, nodeset, HWLOC_MEMBIND_BIND, HWLOC_MEMBIND_THREAD | HWLOC_MEMBIND_MIGRATE);
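(For reference, a hedged sketch of per-call error checking around such a call — per the hwloc documentation, on failure it returns -1 and sets errno to ENOSYS or EXDEV; the variables punt, offset, len and nodeset are the ones from the call above, and the wrapper name is illustrative:)

```c
/* Sketch only: reporting why a hwloc_set_area_membind_nodeset call failed. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <hwloc.h>

/* Illustrative wrapper around the call shown above. */
static int bind_chunk(hwloc_topology_t topology, char *punt, size_t offset,
                      size_t len, hwloc_nodeset_t nodeset)
{
    int err = hwloc_set_area_membind_nodeset(topology, punt + offset, len,
                                             nodeset, HWLOC_MEMBIND_BIND,
                                             HWLOC_MEMBIND_THREAD
                                             | HWLOC_MEMBIND_MIGRATE);
    if (err < 0) {
        if (errno == ENOSYS)
            fprintf(stderr, "membind: action not supported here\n");
        else if (errno == EXDEV)
            fprintf(stderr, "membind: binding cannot be enforced\n");
        else
            fprintf(stderr, "membind: %s\n", strerror(errno));
    }
    return err;
}
```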

Binding seems to work well, since the function returns 0 for every call.

The problem is that after binding, a simple small malloc fails for no apparent reason.

Disabling memory binding, the allocations work fine. Is there any known problem when hwloc_set_area_membind_nodeset is used intensively?

Is there some operating system limit on binding memory pages?

Thanks in advance.

--
Ing. Gabriele Fatigati

HPC specialist

SuperComputing Applications and Innovation Department

Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy

www.cineca.it                    Tel:   +39 051 6171722

g.fatigati [AT] cineca.it          


_______________________________________________
hwloc-users mailing list
hwloc-users@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/hwloc-users