Thursday, July 12, 2012

Solving "Too many open files" under Linux

Every now and then I run into this error, and every time I start googling around for how to fix it. So I have decided to write it down and publish it, to save myself the search next time.

On Linux, almost everything "is a file", and that includes network connections. So the error message "Too many open files" is often really a case of "too many open connections".

The message comes from the operating system and tells you that the application has opened a large number of file handles. The Linux kernel has several safeguards to prevent a single application from slowing down the system by opening too many file handles.

In Linux there are two limits:
  • A global limit specifying the total number of file descriptors that may be open across the whole system
  • A per-process limit specifying the number of files a single application may have open

Specifying the global limit

Determine the global limit with the command

 cat /proc/sys/fs/file-nr  

The result is a set of 3 numbers:

 1408  0    380226  

  1. The number of currently open file handles
  2. The number of allocated but unused file handles
  3. The maximum number of file handles allowed for the whole system
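
If you are only interested in the third number, you can also read the maximum directly:

 cat /proc/sys/fs/file-max  
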
On Ubuntu systems the maximum is somewhere around 300000. This should suffice in most cases. However, if you want to increase it, edit the file /etc/sysctl.conf as root and add or edit this line:

 fs.file-max = 380180  

After editing, you have to restart your system for the change to take effect, or reload the settings as shown below.
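
To apply the new value immediately, without a restart, reload the settings with sysctl:

 sudo sysctl -p  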

Specifying the per-process limit

To get the maximum number of file handles an application can open, run this command:

 ulimit -n  

Be aware that this limit depends on the user running the application. You might get different results depending on which user runs ulimit -n.
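
ulimit can also show the soft and hard limits separately:

 ulimit -Sn  
 ulimit -Hn  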

On Ubuntu systems the value is set to 1024, which is too low in certain situations.
To permanently increase the value, edit the file /etc/security/limits.conf as root and add the following lines:

 *      hard  nofile   65536  
 *      soft  nofile   65536  
 root   hard  nofile   65536  
 root   soft  nofile   65536  

With these settings any application running under any user can open a maximum of 65536 files. That should be enough in most cases.

The meaning of the four columns is:
  1. The name of the user the setting applies to. A "*" means every user except for user "root"
  2. Either "hard" or "soft". A hard limit is fixed and cannot be raised by a non-root user. A soft limit may be raised by the user up to the hard limit
  3. The type of resource to be limited. "nofile" means the number of open files
  4. The value of the limit. In the example above it is always set to 65536
You have to log out and log back in (or reboot) for the changes to be applied.
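
If you do not want to raise the limit for every user, you can restrict an entry to a single account. The user name tomcat7 below is just an example; use whatever account runs your application:

 tomcat7   hard  nofile   65536  
 tomcat7   soft  nofile   65536  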

There is also a temporary method to set the per-process limit.

 ulimit -n 65536  

This limit of 65536 open files only applies for the lifetime of the current session or shell and is lost when you reconnect or restart your system. The value cannot be raised above the hard limit specified in /etc/security/limits.conf unless you are acting as user root.
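
Because child processes inherit the limit from their parent shell, a typical pattern is to raise it in the shell or startup script right before launching the application. The jar name below is only a placeholder:

 ulimit -n 65536  
 java -jar myapp.jar  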

How to measure the number of open files used?


To get the number of open files used by your application, first find its process id:

 ps aux | grep $APPLICATION_NAME  

$APPLICATION_NAME is the name of the process you want to get the process id for. This can be "java" for a running Java virtual machine, "tomcat" for a running Tomcat container, or the name of any other application.
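
Alternatively, pgrep prints the process id directly, without grepping through the ps output (again using "java" as the example process name):

 pgrep java  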

Copy the process id and enter:

 lsof -p $PROCESS_ID | wc -l  

where $PROCESS_ID is the process id you determined in the previous step. The command returns the number of open files used by that process.
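
Alternatively, you can count the entries in the process's fd directory under /proc, which contains one entry per open file descriptor (run it as root or as the owner of the process):

 ls /proc/$PROCESS_ID/fd | wc -l  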
