jsquyres_at_[hidden], on Thu 06 Jan 2011 19:44:51 +0100, wrote:
> * \code
> - * hwloc_alloc_membind_policy(topology, size, set, HWLOC_MEMBIND_DEFAULT, 0),
> + * hwloc_alloc_membind_policy(topology, size, set,
> + * HWLOC_MEMBIND_DEFAULT, 0);
> * \endcode
> - * which will try to allocate new data bound to the given set, possibly by
> - * changing the current memory binding policy, or at worse allocate memory
> - * without binding it at all. Since HWLOC_MEMBIND_STRICT is not given, this
> - * will even not fail unless a mere malloc() itself would fail, i.e. ENOMEM.
> - *
> - * Each binding is available with a CPU set argument or a NUMA memory node set
> - * argument. The name of the latter ends with _nodeset. It is also possible to
> - * convert between CPU set and node set using ::hwloc_cpuset_to_nodeset or
> - * ::hwloc_cpuset_from_nodeset.
> + * Setting this policy will cause the OS to try to bind all new memory
> + * allocations to the specified set. Some operating systems will
> + * dutifully change the current memory binding policy, but others will
> + * simply ignore the policy (i.e., not bind new memory allocations at
> + * all). Note that since HWLOC_MEMBIND_STRICT was not specified,
> + * failures to bind will not be reported -- generally, only memory
> + * allocation failures will be reported (e.g., even a plain malloc()
> + * would have failed with ENOMEM).
This is not what I meant: hwloc_alloc_membind_policy's purpose is only
to allocate bound memory. It happens that hwloc_alloc_membind_policy
_may_ change the process policy in order to be able to bind memory
at all (when the underlying OS does not have a directed allocation
primitive), but that is not necessary. If hwloc can simply call a
directed allocation primitive, it will do so. If the OS doesn't support
binding at all, then hwloc will just allocate memory.
I'm not sure whether I should rephrase it myself (which may just result
in the same text as what I had written previously) or let you rephrase it.
> + HWLOC_MEMBIND_INTERLEAVE = 3, /**< \brief Allocate memory on
> + * the given nodes in an
> + * interleaved / round-robin
> + * manner. The precise layout
> + * of the memory across
> + * multiple NUMA nodes is
> + * OS/system specific.
> + * Interleaving can be useful
> + * when multiple threads from
> + * the specified NUMA nodes
> + * will be effectively
> + * splitting the memory
> + * amongst themselves.
This is not really correct: if the threads were splitting the memory
amongst themselves, FIRSTTOUCH should be used instead, so that pages get
migrated close to where they are referenced from. I have rephrased that.
The rest is OK, thanks for your efforts!