Maintaining responsiveness using cgroups

Serge Hallyn serge.hallyn at canonical.com
Tue Aug 21 21:30:01 UTC 2012


Quoting Jason Rohrer (jasonrohrer at fastmail.fm):
> I've been using gnu/linux as my main daily work system for about 12 years, and been using Ubuntu since Feisty (2007?).  Some of my games have even made it into the repository.  Ubuntu's great stuff.
> 
> Every once in a while, in my work, or due to a leak/bug in some application (e.g., Firefox), one process will start allocating an unexpected amount of memory.  For example, if I try to process too big an image by accident with ImageMagick.  As the process demands more memory, the kernel tries to provide it.  Other processes, including system-critical ones, get swapped out.  This renders the system almost completely unresponsive.  Of course, as the user, you realize what's going on and try to kill the process.  But sometimes, when Xorg gets swapped out, and your terminal shell gets swapped out, and the mouse stutters... well, you can't really do anything.  Sometimes, if it gets really bad (like two processes in a swap war), you can't even properly log in on a virtual terminal (Ctrl-Alt-F4, e.g.).  Some keyboards (like mine!) don't have a SysRq key.  Sometimes, hitting the power button is the only solution.
> 
> Obviously, this is user or programmer error.  Tons of people online claim this is "normal" behavior.  But consider the end-user experience---frustrating even for the power-user.  Yes, this happens on OSX and Windows sometimes too, but... you know, Linux can do better than that!  It should be rock solid no matter what your processes do.
> 
> If you look at how the kernel handles CPU-hogging processes, they hardly affect system responsiveness at all (great!).  But when it comes to RAM-hogging processes, the kernel allows one process to dominate, by default.  And /etc/security/limits.conf has been de-toothed since 2.4.30 to ignore the rss limit that could help with this problem (and limits.conf ships with all limits off by default anyway).
> 
> Enter the new (as of 2.6.24) kernel feature:  cgroups
> 
> Here, we can set a hard RSS limit for all processes of a given user, or even group of users.  A limit on the total.  Which is actually what we want, and better than a per-process limit.  If a user spawns several memory-hungry processes at the same time, a per-process limit would still allow Xorg to get swapped out.  But a group limit can ensure there's always some physical RAM left for critical (root-owned) processes.
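> 
> As a quick sanity check, /proc/cgroups lists the controllers your
> kernel provides (you want to see "memory" in there):
> 
> $ cat /proc/cgroups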
> 
> I've tried cgroups on my system (basically setting a limit for myself that is a portion of my physical RAM, leaving enough RAM for system critical processes always).  And the results are DRAMATIC.  No matter what I do, the system never becomes unresponsive now.  Even in the biggest swap war between MY processes, I can still switch virtual desktops instantly, switch windows, kill a process, etc.
> 
> After 12 years of suffering from this occasional (and pretty much only) instability of Linux, I think I've finally fixed it.
> 
> Of course, I had to install libcgroup1 and cgroup-bin, which weren't there by default.  And I had to study the (extremely complicated) cgroup documentation to get this setup for myself.  Unlikely that most users are going to go out of their way to set this up.
> 
> 
> FINALLY, after all that background, here's my suggestion:  
> 
> Why isn't this the DEFAULT configuration for desktop users?  Set some fraction of physical RAM aside for system-critical processes, and then give the users the rest with a cgroup.  Then, the user's system would never become unresponsive due to user error.  And I can't think of a situation when you'd really WANT to have all physical RAM devoted to a user process at the expense of system-critical processes.  At least not in a desktop environment.
> 
> You know, the default Ubuntu behavior is not to dump core files (so as not to waste disk space of unsuspecting users).  I think all non-root user processes should be in a RAM-limited cgroup as default Ubuntu behavior (so as to never cause unsuspecting users to reach for the power button as a last resort).
> 
> 
> Thoughts?

Hi,

Yes, I sometimes have to do this as well.  My biggest and fastest
laptop, unfortunately, also has an overheating problem.  So I often
make a new cgroup with, say, 2G of RAM, 4 CPUs, and the freezer
controller so I can quickly pause whatever it is doing if I need to.
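
For the record, something like this does the trick with the cgroup-bin
tools (the group name "sandbox" is just an example, and this assumes
the memory, cpuset and freezer controllers are mounted):

$ sudo cgcreate -g memory,cpuset,freezer:/sandbox
$ sudo cgset -r memory.limit_in_bytes=2G sandbox
$ sudo cgset -r cpuset.cpus=0-3 sandbox
$ sudo cgset -r cpuset.mems=0 sandbox

Then run things inside it with cgexec, and freeze or thaw the whole
group as needed:

$ cgexec -g memory,cpuset,freezer:sandbox make -j4
$ sudo cgset -r freezer.state=FROZEN sandbox
$ sudo cgset -r freezer.state=THAWED sandbox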

Unfortunately, the idea of automatically moving tasks into cgroups as
they are started doesn't quite work; it would need more help from the
kernel to be done reliably.  The newest Debian package actually
(temporarily) stops starting the libcgroup daemons at boot by default.
However, the intent is to reintroduce at least the configurable setup
of the cgroups at boot, and that, combined with pam_cgroup, should do
what you're after.
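
For anyone who wants to experiment with pam_cgroup in the meantime:
assuming the libcgroup PAM module is installed, adding it to the
session stack (e.g. in /etc/pam.d/common-session) classifies login
sessions according to /etc/cgrules.conf:

session optional pam_cgroup.so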

I'm Cc:ing Jon, the libcgroup maintainer.  Perhaps a cgroup-desktop
package (which eventually could be promoted to main and become default)
could at least provide a proof of concept.

-serge

> And pasted below is a description of what I did (on my 512MB system).  Simple, assuming you can figure out how to do it.
> 
> Thanks for your time,
> Jason
> 
> 
> 
> 
> $ sudo apt-get install libcgroup1 cgroup-bin
> 
> Then, edit /etc/cgconfig.conf to add a new control group:
> 
> group regularUsers {
>     memory {
>         memory.limit_in_bytes = 256M;
>     }
> }
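> 
> Depending on the release, you may also need a mount stanza in the same
> file if the memory controller isn't already mounted at boot, something
> like:
> 
> mount {
>     memory = /sys/fs/cgroup/memory;
> }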
> 
> Then, edit /etc/cgrules.conf to add a given user to the regularUsers group:
> 
> jasonrohrer    memory   regularUsers/
> 
> 
> Finally, restart cgroups (or reboot):
> 
> $ sudo service cgconfig restart
> $ sudo service cgred restart
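> 
> To verify it took effect, log in again and check the limit and where
> your shell landed (cgget also comes with cgroup-bin):
> 
> $ cgget -r memory.limit_in_bytes regularUsers
> $ cat /proc/self/cgroup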
> 


