Karl Hegbloom hegbloom at
Sat Mar 19 04:08:12 UTC 2005

On Fri, 2005-03-18 at 21:57 +0100, Simon Santoro wrote:
> If you are a relatively clueless newbie that executes scripts written by 
> other people you are screwed anyway. That script could rm -rf your ~, 
> send your firefox profile folder via email to someone else, or do a lot 
> worse than forkbomb your pc.

:-)  Yes, I agree.

What if a bug in a program caused an inadvertent fork bomb?  Couldn't a
sane default limit guard against that?

The default limit should be fairly high --- higher than the number of
processes anyone is likely to ever need, but finite, so that the kernel
can stop a fork bomb from bringing the system to a standstill.  See:

 man setrlimit
 man limits.conf
 help ulimit | less
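As a minimal sketch of what those man pages describe --- Python's stdlib
`resource` module wraps setrlimit(2), and the cap of 4096 here is just an
illustrative number, not a recommendation from any distribution:

```python
import resource

def cap_nproc(cap):
    """Lower the soft per-user process limit to `cap`, never raising it.

    Returns the resulting (soft, hard) pair.  An unprivileged process
    may lower its own soft limit but cannot raise it past the hard one.
    """
    soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
    if soft == resource.RLIM_INFINITY or soft > cap:
        soft = cap
    resource.setrlimit(resource.RLIMIT_NPROC, (soft, hard))
    return resource.getrlimit(resource.RLIMIT_NPROC)

soft, hard = cap_nproc(4096)
print("soft RLIMIT_NPROC is now", soft)
```

Once the soft limit is in place, any fork() beyond it is refused by the
kernel with EAGAIN (for unprivileged users), so a runaway loop fails
instead of exhausting the machine.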

The thing is that Linux does not know how many processes you'll need,
and the assumption is probably that if you have it set to 'unlimited',
you "know what you're doing".  The system integration team has the
responsibility of setting a reasonable default via the limits.conf file.
Right now, there's no default --- it's 'unlimited'.  It should be
something like 4096.  Can you imagine running that many processes at
once for anything?  It's more than you'll ever need, but it provides an
upper bound that stops a fork bomb, whether it's caused by a bug in a
script or program or by a trojan horse.
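For instance, an integration team could ship a fragment like this in
/etc/security/limits.conf --- the 4096 value is illustrative, not an
actual Ubuntu default:

```
# /etc/security/limits.conf
# <domain>  <type>  <item>  <value>
*           soft    nproc   4096
*           hard    nproc   4096
```

The '*' domain applies the cap to every user; individual users or groups
can still be given higher limits with their own lines.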

More information about the ubuntu-users mailing list