The default file descriptor limit (ulimit -n 1024) is too low

Evan Martin evan at chromium.org
Tue Sep 28 06:14:20 BST 2010


On Mon, Sep 27, 2010 at 9:47 PM, Scott Ritchie <scott at open-vote.org> wrote:
> I hadn't considered that use case, but it definitely sounds like a
> desktop application that might have a problem too.
>
> The specific app in question was Visual Studio running via Wine and
> compiling a large project.  On 4 cores it stayed under the limit, on 8
> it would compile more in parallel at once and hit the limit.  According
> to AppDB, it seems that Quicken runs into the same problem.

Speaking of compilers, gold
(http://packages.ubuntu.com/lucid/binutils-gold) is another (native)
example of a performance-intensive multithreaded app.  I have seen it
hit this 1024 cap while linking a large code base (Chromium) when
using GNU ar "thin" archives.

Gold has code to dynamically probe the fd ulimit, so I believe it will
use as many file descriptors as are available (it maintains its list of
open files as some sort of LRU cache of fds).  This means that with a
lower ulimit like 1024 it silently falls back on potentially slower
behavior, where it may close a file and later need to reopen it.
I have not investigated whether this hypothetical performance impact
actually exists.



More information about the ubuntu-devel mailing list