Hardware Locality Development Mailing List Archives

Subject: Re: [hwloc-devel] Cgroup resource limits
From: Ralph Castain (rhc_at_[hidden])
Date: 2012-11-02 18:15:47


Cool - I was just proposing that we do this from within hwloc instead of implementing it separately everywhere.

On Nov 2, 2012, at 2:54 PM, Rayson Ho <raysonlogin_at_[hidden]> wrote:

> Ralph,
>
> We added cgroups integration support into Grid Engine a few months
> ago, and we ended up implementing routines that write values to
> "memory.memsw.limit_in_bytes", "memory.limit_in_bytes",
> "memory.soft_limit_in_bytes", etc... We just simply write the values
> out to the cgroups files.
>
> http://blogs.scalablelogic.com/2012/05/grid-engine-cgroups-integration.html
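>
> In C, such a routine comes down to opening the control file and writing
> the value; a minimal sketch (illustrative only, not the actual Grid
> Engine code, and without the error reporting a real implementation
> would need):
>
>     #include <stdio.h>
>
>     /* Write a value such as "4M" into a cgroup control file, e.g.
>      * /sys/fs/cgroup/memory/<group>/memory.limit_in_bytes. */
>     static int write_cgroup_value(const char *path, const char *value)
>     {
>         FILE *fp = fopen(path, "w");
>         if (fp == NULL)
>             return -1;      /* controller not mounted, or no permission */
>         fprintf(fp, "%s\n", value);
>         return fclose(fp);  /* a rejected value surfaces as a write error here */
>     }
>
> e.g. write_cgroup_value("/sys/fs/cgroup/memory/0/memory.limit_in_bytes", "4M");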
>
> I am interested in seeing how Greenplum/EMC implements cgroups limits.

I think my other note explained this - hopefully it helped :-)

>
>
> Brice - cgroups allow system administrators to set resource limits
> for a process (or a group of processes) by interacting with the cgroup
> virtual filesystem. Once the processes are added to a cgroup, the memory
> usage limit can be set or changed simply by doing:
>
> # echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes
>
> For details, see "3. User Interface":
> http://www.kernel.org/doc/Documentation/cgroups/memory.txt
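>
> Done from a program, the whole flow (create a group, move a process
> into it, set the limit) looks roughly like the following -- a
> hypothetical C sketch against the cgroup v1 memory controller; the
> mount point and the group name "0" are just examples:
>
>     #include <stdio.h>
>     #include <sys/stat.h>
>     #include <unistd.h>
>
>     int main(void)
>     {
>         FILE *fp;
>
>         /* 1. A cgroup is just a directory in the cgroup filesystem. */
>         mkdir("/sys/fs/cgroup/memory/0", 0755);
>
>         /* 2. Move the calling process into it by writing its pid to "tasks". */
>         fp = fopen("/sys/fs/cgroup/memory/0/tasks", "w");
>         if (fp == NULL) return 1;
>         fprintf(fp, "%d\n", (int)getpid());
>         fclose(fp);
>
>         /* 3. Set (or later change) the memory limit, like the echo above. */
>         fp = fopen("/sys/fs/cgroup/memory/0/memory.limit_in_bytes", "w");
>         if (fp == NULL) return 1;
>         fprintf(fp, "4M\n");
>         fclose(fp);
>         return 0;
>     }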
>
> Rayson
>
> ==================================================
> Open Grid Scheduler - The Official Open Source Grid Engine
> http://gridscheduler.sourceforge.net/
>
>
> On Fri, Nov 2, 2012 at 5:18 PM, Brice Goglin <Brice.Goglin_at_[hidden]> wrote:
>> Hello Ralph,
>>
>> I am not very familiar with these features. What system mechanism do you
>> currently use for this? Linux cgroups? Any concrete example of what you
>> would like to do?
>>
>> Brice
>>
>>
>>
>> On 02/11/2012 22:12, Ralph Castain wrote:
>>> Hi folks
>>>
>>> We (Greenplum) need to support resource limits (e.g., memory and CPU usage) on processes running under Open MPI's RTE. OMPI uses hwloc for processor and memory affinity, so this seems a likely place to add the required support. Jeff tells me it doesn't yet exist in hwloc - would you welcome, or at least be willing to consider, contributions from our engineers toward adding this capability?
>>>
>>> Obviously, we'd need to discuss how and where to do the extension. Just wanted to first see if this is an option, or if we should do it directly in OMPI.
>>> Ralph