Forkbomb??
Stuart Bishop
stuart.bishop at canonical.com
Mon Apr 4 07:08:11 UTC 2005
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Simon Santoro wrote:
> In this case remember to also fix the following DOS:
> grep hello /dev/zero
> How? Limit the amount of RAM the user is allowed to use? Heck... I
> bought 512MB because I want to actually USE it.
Yup. Limit the RAM to something sane and allow people to increase it if they
need to do something insane. Defaulting the per-process memory limit to the
amount of physical RAM might be a good starting point, although I'd be
happier with a guesstimate made by someone more familiar with the memory
requirements of the larger memory hogs shipped with Ubuntu. This seems the
only solution in the short term, until GNOME or Ubuntu can provide another
way to kill these runaways (I think the solution used by other OSs such as
OS X or Windows is a dedicated process-killer application that is never
allowed to be swapped out, hard wired to a key combination that other
applications cannot mask or block).
Speaking from personal experience, I was much happier when the software I
was testing that occasionally decided it needed 1GB of RAM died with a
memory error rather than wedge my box and possibly corrupt my hard drive.
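As a rough sketch of what such a default could look like (the 512MB/1GB
figures below are only illustrative guesses, and this assumes pam_limits is
active for login sessions, as it is on a stock install):

    # /etc/security/limits.conf -- cap per-process address space (values in KB)
    *    soft    as    524288
    *    hard    as    1048576

    # or just for the current shell and its children:
    ulimit -S -v 524288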
> And this DOS, that uses 100% CPU for no apparent reason:
> while true; do ls; done;
> How? Limit the amount of CPU the user is allowed to use? Heck... I have
> a 1.4GHz Centrino because I want to USE it!
The UNIX scheduler takes care of this happily even without resorting to
nice. I regularly have processes chewing up all the CPU they can get yet the
system remains responsive.
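And if a particular hog really does get in the way, it can be demoted by
hand rather than killed; a quick sketch (the command name, PID and priority
here are only examples):

    # start a known hog at the lowest priority
    nice -n 19 ./some-cruncher

    # or demote one that is already running
    renice 19 -p 1234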
The only times I've managed to wedge my box with Ubuntu badly enough to
require a power cycle have been when runaway processes hit > 550MB on this
768MB machine. At that point a process at nice level 0 starts thrashing, and
if it isn't caught before about 650MB it becomes impossible to shut down or
switch to a text console.
Fork bombs would be the other obvious one. Just set a sane limit. I'm not
talking about silly minimalist default limits like the default shell stack
size in OS X, but sane ones to catch runaways. 50? 100? 200?
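Something along these lines, say (again, the numbers are only guesses, and
this assumes pam_limits applies them at login):

    # /etc/security/limits.conf -- cap processes per user
    *    soft    nproc    100
    *    hard    nproc    200

    # per-shell equivalent:
    ulimit -S -u 100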
> rm -rf ~
> How? Remove write access to my files because I could delete them.
Unfortunately, apart from backup tools, the only way of protecting against
this would be a snapshotting or undo facility built into the filesystem, as
found on high-end fileservers such as NetApp's and similar. Thankfully the
GUI already has this safety net in the form of the trashcan, so naive users
are protected; getting it at the filesystem level is something for the
future.
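For the impatient, LVM snapshots can already give a crude approximation at
the block-device level; a sketch, with made-up volume names (assumes /home
lives on an LVM2 volume with free space left in the volume group):

    # take a point-in-time snapshot of the volume holding /home
    lvcreate --snapshot --size 1G --name home-snap /dev/vg0/home

    # mount it read-only to fish deleted files back out
    mkdir -p /mnt/home-snap
    mount -o ro /dev/vg0/home-snap /mnt/home-snap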
- --
Stuart Bishop <stuart.bishop at canonical.com> http://www.canonical.com/
Canonical Ltd. http://www.ubuntu.com/
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.5 (GNU/Linux)
iD8DBQFCUOfbAfqZj7rGN0oRAgzWAJ91osAtX7otFl2hvoksIwXmhaqnUQCeIm7m
quL3dcpcPrWzyxzbIoDX+Gw=
=+aLr
-----END PGP SIGNATURE-----