The default file descriptor limit (ulimit -n 1024) is too low
Stephan Hermann
sh at sourcecode.de
Mon Sep 20 17:06:09 BST 2010
On Mon, Sep 20, 2010 at 03:21:29AM -0700, Scott Ritchie wrote:
> Would there be any harm in raising this?
>
> I ask because I've seen a real world application hit this limit. The
> application in question is multithreaded and opens separate threads and
> files to work on for each core; on a 4 core machine it stays under the
> limit, while on an 8 core machine it hits it and runs into strange errors.
>
> I feel that, as machines get more and more cores, applications like
> this are increasingly going to run into problems with a ulimit of only 1024.
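(For reference: on a Sun/OpenJDK JVM on Linux, an application like that can at
least watch how close it is getting to the ceiling at runtime. A minimal sketch,
assuming the com.sun.management extension is available; the class name is just
for illustration:

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdHeadroom {
        public static void main(String[] args) {
            Object os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                // Descriptors currently open vs. the per-process limit,
                // i.e. the same number "ulimit -n" reports from the shell.
                long open = unix.getOpenFileDescriptorCount();
                long max  = unix.getMaxFileDescriptorCount();
                System.out.println("file descriptors: " + open + " of " + max);
            }
        }
    }

Logging that ratio makes "strange errors" around descriptor 1024 much easier to
recognise for what they are.)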
I saw this here recently as well, but it turned out to be a developer bug: the
Java code in question never closed its sockets and files.
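To illustrate the kind of fix that was needed (a rough sketch of the pattern,
not the actual code in question):

    import java.io.FileInputStream;
    import java.io.IOException;

    public class CountBytes {
        static long countBytes(String path) throws IOException {
            FileInputStream in = new FileInputStream(path);
            try {
                long n = 0;
                while (in.read() != -1) {
                    n++;
                }
                return n;
            } finally {
                // Without this close, every call leaks one descriptor and the
                // process eventually hits EMFILE ("Too many open files").
                in.close();
            }
        }
    }

Every descriptor that gets opened in a loop has to be released like this,
no matter how large ulimit -n is.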
If the limit really does need to be raised, that should be a per-server decision made by the sysadmin.
I don't see why it should be raised in general...
But I'm open to rationales.
Regards,
\sh