The default file descriptor limit (ulimit -n 1024) is too low

Aigars Mahinovs aigarius at debian.org
Mon Sep 27 21:50:47 BST 2010


On 20 September 2010 13:21, Scott Ritchie <scott at open-vote.org> wrote:
> Would there be any harm in raising this?
>
> I ask because I've seen a real world application hit this limit.  The
> application in question is multithreaded and opens separate threads and
> files to work on for each core; on a 4 core machine it stays under the
> limit, while on an 8 core machine it hits it and runs into strange errors.
>
> I feel that, as we get more and more cores on machines, applications like
> this are increasingly going to run into problems with a ulimit of only 1024.
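
A workaround for an application like that, short of it raising the
limit itself, is to bump the soft limit up to the hard limit in a
wrapper script before launching it - a rough sketch only, and
"the-real-program" is just a placeholder:

  #!/bin/bash
  # Raise the soft open-files limit up to the hard limit for this
  # process and everything it starts, then run the actual program.
  ulimit -n "$(ulimit -Hn)"
  exec the-real-program "$@"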

I've seen this limit being hit by a far more pedestrian application -
Vuze. Imagine a typical backwater teen launching a BitTorrent client
and downloading all his favorite ... Linux distributions. Let's say
there are 20 downloads going at once, and each tries to download from
60 peers and upload to 40 peers. Every one of those connections is a
socket, and every socket is a file descriptor, so - oops - that's 2000
open file handles from the network alone, before counting the files on
disk.
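
For anyone who wants to check how close a client actually gets: compare
the shell's soft limit with the number of descriptors the process is
holding open. Here <pid> is just a placeholder for the client's process
id:

  # soft limit for processes started from this shell
  ulimit -n
  # descriptors currently held open by the process
  ls /proc/<pid>/fd | wc -l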

P.S. I am assuming that this was the reason why the person in question
saw the "Cannot open file 'too many open files'" error that went away
after raising the limit in /etc/security/limits.conf. It is also
possible that there were a lot of files in these downloads and that the
seeds and peers sent random pieces of random files in such a way that
most of the files were open at the same time.
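
For reference, the limits.conf change would look something like the
lines below (the values are only an illustration, not necessarily what
was actually used), and it only applies to sessions started after the
change, since pam_limits reads the file at login:

  # /etc/security/limits.conf
  *    soft    nofile    4096
  *    hard    nofile    8192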

-- 
Best regards,
    Aigars Mahinovs        mailto:aigarius at debian.org
  #--------------------------------------------------------------#
 | .''`.    Debian GNU/Linux (http://www.debian.org)            |
 | : :' :   Latvian Open Source Assoc. (http://www.laka.lv)     |
 | `. `'    Linux Administration and Free Software Consulting   |
 |   `-                                 (http://www.aiteki.com) |
 #--------------------------------------------------------------#


