Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] numactl with torque cpusets
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-11-21 17:58:29


For the web archives...

Brock and I talked about this in person at SC. The conversation was much more involved than this seemingly-simple question implied. :-)

The short version is:

- numactl does both memory and processor binding
- hwloc is the new numactl :-)
  - e.g., see the hwloc-bind(1) command (a quick command-line sketch follows this list)
- OMPI does both memory and processor binding
- OMPI 1.5.5 will have an MCA parameter for process-wide memory binding policy
- Torque cpusets probably already do what is desired: they restrict MPI processes to a subset of the processors on a given server (e.g., if multiple Torque jobs are running on the same server)
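
To make the numactl-to-hwloc mapping concrete, here is a rough command-line sketch. my_app and my_mpi_app are placeholder program names, and the exact flags depend on your hwloc / Open MPI versions, so check the man pages on your system:

    # numactl-style CPU + memory binding, done with hwloc instead:
    # run my_app bound to the cores of socket 0 and the memory of NUMA node 0
    hwloc-bind --cpubind socket:0 --membind node:0 -- ./my_app

    # let Open MPI do the processor binding itself (1.4/1.5-series flags):
    mpirun -np 8 --bind-to-core --report-bindings ./my_mpi_app

    # see which memory-binding MCA parameters your Open MPI build exposes:
    ompi_info --param all all | grep -i mem

Note that a Torque cpuset constrains all of the above: whatever binding hwloc or Open MPI applies has to land on cores that the cpuset allows.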

On Nov 9, 2011, at 1:46 PM, Brock Palen wrote:

> Question,
> If we are using Torque with TM and cpusets enabled for pinning, should we not enable numactl? Would they conflict with each other?
>
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> brockp_at_[hidden]
> (734)936-1985
>
>
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/