Forkbomb??

Karl Hegbloom hegbloom at pdx.edu
Sat Mar 19 16:52:03 UTC 2005


On Sat, 2005-03-19 at 14:10 +0100, Simon Santoro wrote:
> Michael Hipp wrote:
> > Putting a limiting value on procs would help everyone and likely harm 
> > no-one.
> 
> I don't agree here. What if a program is designed to fork many 
> times, for example to solve a math problem (factorizing big numbers) 
> or something like that? Such a program could no longer run.

The 'limits.conf' file (under /etc/security/) allows setting a "hard"
and a "soft" limit on the number of processes.  The "hard" limit cannot
be raised by anyone but 'root'.  The "soft" limit can be raised by the
user, up to but not beyond the hard limit, with the 'ulimit' shell
built-in command.  A very high "hard" limit and a lower "soft" limit
would provide protection yet still allow flexibility.
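
For instance, entries along these lines in /etc/security/limits.conf
(the numbers are only illustrative, not a recommendation) would cap the
number of processes per user:

    # domain  type  item   value
    *         soft  nproc  1024
    *         hard  nproc  8192

A user running a legitimately fork-heavy job could then raise his own
soft limit from the shell, up to but not past the hard cap:

    $ ulimit -u        # show the current soft limit on user processes
    1024
    $ ulimit -u 8192   # raise it; anything above the hard limit fails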

Why would a factoring program fork that many times?  What advantage
would it gain, computationally, on a single or even dual processor
system?  Forking more times than there are CPUs would not gain much.
In fact, it would increase process switching and interprocess
communication overhead.  Perhaps a parallel algorithm exists for
factoring or matrix operations, but surely it would not need thousands
of threads or processes to do its job on a limited number of CPUs.
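
As a rough sketch (not from any real factoring program), a parallel
worker on Linux can ask sysconf() how many CPUs are online (a glibc
extension) and fork just that many children, rather than forking
without bound; do_work() here is a hypothetical stand-in for the
actual computation:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Hypothetical worker; a real program would crunch its share here. */
    static void do_work(long id)
    {
        (void)id;   /* placeholder: nothing to compute in this sketch */
    }

    int main(void)
    {
        long i;
        /* One worker per online CPU, not thousands. */
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        if (ncpus < 1)
            ncpus = 1;

        for (i = 0; i < ncpus; i++) {
            pid_t pid = fork();
            if (pid < 0) {          /* e.g. EAGAIN when 'ulimit -u' is hit */
                perror("fork");
                break;
            }
            if (pid == 0) {         /* child: do this worker's share */
                do_work(i);
                _exit(0);
            }
        }

        while (wait(NULL) > 0)      /* parent reaps all workers */
            ;
        return 0;
    }

A process-count limit set as above would never bother such a program,
since it asks for only as many children as there are processors.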





