On Linux almost everything "is a file".
This is also the case for network connections. Thus, the error message "Too many open files" often really means "too many open connections".
The message comes from the operating system and tells you that the application has opened a large number of files or connections. The Linux kernel has several safeguards to prevent an application from slowing down the system by opening too many file handles.
On Linux there are two limits:
- A global limit specifying the total number of file handles that can be open across the whole system
- A per-process limit specifying the total number of files that a single application can open
Specifying the global limit
Determine the global limit with the command cat /proc/sys/fs/file-nr
The result is a set of 3 numbers:
1408 0 380226
- The number of currently open file handles
- The number of allocated but unused file handles
- The maximum number of file handles allowed for the whole system (this maximum can also be read directly, as shown below)
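If you are only interested in the system-wide maximum, the same value can be read directly from /proc or via sysctl (the numbers will of course differ on your machine):
cat /proc/sys/fs/file-max
sysctl fs.file-max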
To permanently increase the global limit, edit the file /etc/sysctl.conf as root and add the following line:
fs.file-max = 380180
After editing you have to reconnect or restart your system for the change to be taken into account.
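On most distributions you can also reload /etc/sysctl.conf without a reboot and then verify the new value; both commands below are standard sysctl usage:
sudo sysctl -p
sysctl fs.file-max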
Specifying the per-process limit
To get the maximum number of file handles an application can open, run the command ulimit -n
Be aware that this limit depends on the user running the application. You might get different results depending on which user runs ulimit -n
On Ubuntu systems the default value is 1024, which is too low in certain situations.
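To compare the limit of your current user with that of another account, run the command in a shell for each user. The user name "tomcat" below is only an example; replace it with the user that actually runs your application:
ulimit -n
sudo -u tomcat bash -c 'ulimit -n'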
To permanently increase the value, edit the file /etc/security/limits.conf as root and add the following lines:
* hard nofile 65536
* soft nofile 65536
root hard nofile 65536
root soft nofile 65536
With these settings any application running under any user can open a maximum of 65536 files. That should be enough in most cases. If you only want to raise the limit for a specific user, see the example after the column description below.
The four columns have the following meaning:
- The name of the user the setting applies to. A "*" means every user except the user "root", which is why root gets its own lines
- Either "hard" or "soft". A hard limit is fixed and cannot be changed by a non-root user. A soft limit may be raised by the user up to the hard limit
- The type of resource to be limited. "nofile" means the number of open files
- The value of the limit to be set. In the example above the limit is always 65536
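If you do not want to change the limit for every user, you can restrict the entry to a single account. The user name "tomcat" is again only an example; use the account that runs your application:
tomcat hard nofile 65536
tomcat soft nofile 65536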
There is also a temporary method to set the per-process limit.
ulimit -n 65536
This limit of 65536 open files only applies for the lifetime of the session or shell and is lost when you reconnect or restart your system. The value cannot be raised above the hard limit specified in /etc/security/limits.conf unless you are acting as the user root.
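To see which values are currently in effect in your shell, ulimit can print the soft limit (-S) and the hard limit (-H) separately:
ulimit -Sn
ulimit -Hn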
How to measure the number of open files used?
To get the number of open files used by your application type:
ps aux | grep $APPLICATION_NAME
$APPLICATION_NAME is the name of the process you want to get the process id for. This can be "java" for a running Java virtual machine, "tomcat" for a running Tomcat container, or the name of any other application.
Copy the process id and enter:
lsof -p $PROCESS_ID | wc -l
where $PROCESS_ID is the process id you received from the ps command. The command returns the number of open files used by the process you specified.
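Instead of counting the lsof output, you can also count the entries in the process' file descriptor directory under /proc, which gives the same information (you need to be the owner of the process or root):
ls /proc/$PROCESS_ID/fd | wc -l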