The default file descriptor limit (ulimit -n 1024) is too low

Etienne Goyer etienne.goyer at canonical.com
Mon Sep 20 17:18:45 BST 2010


On 10-09-20 12:06 PM, Stephan Hermann wrote:
> On Mon, Sep 20, 2010 at 03:21:29AM -0700, Scott Ritchie wrote:
>> Would there be any harm in raising this?
>>
>> I ask because I've seen a real-world application hit this limit.  The
>> application in question is multithreaded and opens a separate thread and
>> set of files for each core; on a 4-core machine it stays under the
>> limit, while on an 8-core machine it hits it and runs into strange errors.
>>
>> I feel that, as we get more and more cores in our machines, applications
>> like this are increasingly going to run into problems with a ulimit of
>> only 1024.
> 
> I saw this here recently as well, but it was more a developer bug: not
> closing sockets or files while coding in Java.
> 
> If this needs to be raised, it should be a per-server decision made by the sysadmin.
> I don't see why it should be raised in general...

That's the thing: AFAICT, there is no single place where you can raise
that value system-wide.  Doing so for daemons involves invoking ulimit
from within their init scripts (a hack at best).  Or perhaps there *is* a
way to raise it globally that I do not know about, in which case I would
love to know about it. :)
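
To illustrate, the hack looks something like this in a sysvinit script
(the daemon name, path, and the 4096 value are made up for the example):

    #!/bin/sh
    # /etc/init.d/mydaemon -- hypothetical init script
    # Raise the per-process open file limit before starting the
    # daemon; the new limit is inherited by the child process.
    ulimit -n 4096

    case "$1" in
      start)
        start-stop-daemon --start --exec /usr/sbin/mydaemon
        ;;
      stop)
        start-stop-daemon --stop --exec /usr/sbin/mydaemon
        ;;
    esac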

Also, if you turn the question around, is there a good reason *not* to
raise that limit?
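
For anyone who wants to experiment, the soft and hard limits can be
inspected and adjusted per shell (and are inherited by its children);
the 4096 here is again an arbitrary example:

    # show the current soft and hard limits on open files
    ulimit -Sn
    ulimit -Hn
    # raise the limit for this shell and its children; an
    # unprivileged process cannot raise its hard limit
    ulimit -n 4096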


-- 
Etienne Goyer
Technical Account Manager - Canonical Ltd
Ubuntu Certified Instructor   -    LPIC-3

 ~= Ubuntu: Linux for Human Beings =~


