How to resolve Apache’s “Too many open files” error?


This error occurs when Apache exceeds its file descriptor limit, leading to degraded server performance or even crashes. When Apache runs, it uses file descriptors to manage files, sockets, and other I/O operations. Each system imposes a limit on the number of file descriptors a process can open simultaneously. When Apache surpasses this limit, it results in the “Too Many Open Files” error.
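The failure mode is easy to reproduce in miniature: lower a process's descriptor limit and keep opening files until the operating system refuses. A small Python sketch (the limit of 64 is arbitrary and restored afterwards):

```python
import errno
import os
import resource

# Lower this process's soft descriptor limit; the hard limit is untouched.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

held = []
err = None
try:
    while True:
        held.append(open(os.devnull))  # each open() consumes one descriptor
except OSError as exc:
    err = exc  # EMFILE, reported as "Too many open files"
finally:
    for f in held:
        f.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))  # restore

print(err)
```

Apache hits exactly this wall when its descriptor demand (connections, log files, CGI pipes) exceeds the limit the kernel enforces on it.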

Verify the number of File Descriptors

File descriptors in Apache are low-level handles that represent open files, sockets, and other devices. Apache uses file descriptors to communicate with the operating system and to handle incoming and outgoing requests.

Each file descriptor is represented by an integer value. In a typical Apache process, the low-numbered descriptors serve the following purposes:

  • 0: Standard input (stdin)
  • 1: Standard output (stdout)
  • 2: Standard error (stderr)
  • 3: Listening socket for the main server
  • 4: Listening socket for the SSL server (if enabled)
  • Other: Additional file descriptors for log files, CGI scripts, and other purposes

Apache uses a pool of file descriptors to manage its connections. When a new connection is established, Apache allocates a file descriptor from the pool and assigns it to the connection. When the connection is closed, Apache releases the file descriptor back into the pool.
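This allocate-and-release cycle is kernel behavior, not something Apache implements itself: POSIX guarantees that each open returns the lowest-numbered unused descriptor, which is also why 0, 1, and 2 map to the standard streams. A short demonstration:

```python
import os

# The kernel hands out the lowest-numbered unused descriptors.
a = os.open(os.devnull, os.O_RDONLY)
b = os.open(os.devnull, os.O_RDONLY)

# Closing a descriptor returns it to the pool; the next open() reuses it.
os.close(a)
c = os.open(os.devnull, os.O_RDONLY)
print(a, b, c)  # c is the same integer as a

os.close(b)
os.close(c)
```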

The number of file descriptors that Apache can use is limited by the operating system. On most Linux distributions the default per-process soft limit is 1024, but it can be increased. If Apache runs out of file descriptors, it will not be able to accept new connections.

There are several factors that can cause Apache to run out of file descriptors, including:

  • A large number of concurrent connections
  • A large number of open log files
  • A large number of CGI scripts running
  • A bug in Apache or one of its modules

Use the ulimit command

To verify the maximum number of file descriptors allowed for the Apache process on a Linux system, inspect the limits of a running Apache process directly (the ulimit command only reports the limits of your current shell, not Apache's):

cat /proc/$(pgrep -o apache2)/limits | grep 'open files'

(Use pgrep -o httpd on RHEL/CentOS-based systems.) The output will look something like this:

Max open files            1024                 4096                 files

The first number is the soft limit and the second is the hard limit: here the Apache process can have at most 1024 open file descriptors, and could raise that itself as high as 4096.

If you need to increase the maximum number of file descriptors allowed for the Apache process, you can raise the limit in the shell that launches it using the ulimit command. The -S flag sets the soft limit, -H sets the hard limit, and -n selects the open-files limit. For example, to raise both limits to 2048, you would use the following command:

ulimit -SHn 2048

This only affects processes started from that shell, and only root can raise the hard limit. Restart Apache from the same shell so the new limit is inherited. If Apache is managed by systemd, set LimitNOFILE=2048 in the service unit file instead, since systemd services do not inherit shell limits.

Note: Increasing the maximum number of file descriptors consumes additional kernel memory and can affect system performance, so raise the limit only as far as necessary.
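The soft/hard distinction can also be inspected programmatically. A minimal sketch using Python's standard resource module, which queries the same per-process limits that ulimit -Sn and ulimit -Hn report:

```python
import resource

# Query this process's soft and hard limits on open descriptors,
# the programmatic counterpart of `ulimit -Sn` / `ulimit -Hn`.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)

# An unprivileged process may raise its own soft limit, but only up to
# the hard limit; raising the hard limit itself requires root.
if hard != resource.RLIM_INFINITY:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    raised, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))  # restore
```

This is why a "Too Many Open Files" error can sometimes be resolved without root at all: the process merely needs to lift its soft limit toward the existing hard limit.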

Using the lsof command

Use tools like lsof (list open files) to identify which processes are consuming a large number of file descriptors.

sudo lsof -n | awk 'NR>1 {print $2}' | sort | uniq -c | sort -n

The above command is used to count and display the number of open file descriptors per process ID. The output will look something like this:

      1 1234
      2 5678
      3 9876
      5 5432
     10 8765
     15 4321

Here’s what each column represents:

  • The first column is the count of open file descriptors.
  • The second column is the process ID (PID) of the corresponding process.

In the example above:

  • Process with ID 1234 has 1 open file descriptor.
  • Process with ID 5678 has 2 open file descriptors.
  • Process with ID 9876 has 3 open file descriptors.
  • …and so on.

This output helps you identify which processes have a higher number of open file descriptors, which can be useful for troubleshooting and optimizing resource usage.
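lsof aggregates across the whole system, but the same idea can be applied to a single process from code. A minimal Linux-only sketch (it relies on the /proc filesystem, where each entry in /proc/self/fd is one open descriptor of the current process):

```python
import os

def open_fd_count() -> int:
    """Count this process's open descriptors (Linux-specific: /proc)."""
    return len(os.listdir("/proc/self/fd"))

before = open_fd_count()
files = [open(os.devnull) for _ in range(5)]  # deliberately hold 5 fds
after = open_fd_count()
for f in files:
    f.close()
print(before, after)
```

Polling a count like this from a monitoring script is a cheap way to spot a descriptor leak long before the hard limit is reached.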

Identify the Open files limit

The open files limit is the maximum number of open files that a process can have at any given time. This limit is set by the operating system and can be configured by the system administrator.

The difference between the open files limit and file descriptors is that the open files limit is a per-process ceiling, while file descriptors are the individual handles counted against it.

For example, a web server may have an open files limit of 1024. This means the process can hold at most 1024 file descriptors open at once, and every open client connection consumes one of them, alongside log files and configuration files. Once the ceiling is reached, further open() or accept() calls fail with the "Too Many Open Files" error.

When a process opens a file, the operating system assigns it a file descriptor. The process can then use the file descriptor to read from or write to the file. When the process is finished with the file, it closes the file descriptor.

The operating system maintains a table of all open file descriptors. This table is used to track which processes have access to which resources.

The open files limit is important because it prevents processes from using up too many resources and harming the performance of the system.

Here is a table that summarizes the key differences between open files limits and file descriptors:

Characteristic | Open files limit                              | File descriptor
Type           | Per-process limit                             | Individual handle
Purpose        | Prevents a process from exhausting resources  | Identifies an open resource

Increase the Open files limit

To increase the open files limit, edit the following system configuration files:

  • /etc/sysctl.conf — open the file:

nano /etc/sysctl.conf

and make sure the following lines are present:

fs.file-max = 65535
net.core.somaxconn = 65535

  • /etc/security/limits.conf — open the file:

nano /etc/security/limits.conf

and adjust or add the following lines:

* soft nofile 65535
* hard nofile 65535

Apply the sysctl changes by running:

sudo sysctl -p

The limits.conf changes take effect for new login sessions only, so log out and back in (or restart the service) for them to apply.
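The system-wide ceiling that fs.file-max controls can be read back to confirm the change took effect; this is the programmatic counterpart of running sysctl fs.file-max. A Linux-only sketch:

```python
# fs.file-max is exposed under /proc on Linux; reading it requires no
# privileges and reflects the currently active system-wide limit.
with open("/proc/sys/fs/file-max") as f:
    file_max = int(f.read().strip())
print(file_max)
```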

Adjust Apache Configuration

If you are using the worker MPM (Multi-Processing Module), you may need to adjust the configuration to limit the number of server processes and threads:

<IfModule mpm_worker_module>
    ServerLimit 100
    StartServers 5
    MaxClients 100
    MinSpareThreads 5
    MaxSpareThreads 10
    ThreadsPerChild 5
    MaxRequestsPerChild 0
</IfModule>

Note that in Apache 2.4 the directives MaxClients and MaxRequestsPerChild were renamed MaxRequestWorkers and MaxConnectionsPerChild; the old names still work as deprecated aliases.

If you are using the prefork MPM, you can adjust the configuration in a similar way:

<IfModule mpm_prefork_module>
    StartServers       8
    MinSpareServers    5
    MaxSpareServers   20
    MaxClients        150
    MaxRequestsPerChild  0
</IfModule>

Check Apache Modules

Some Apache modules can cause file descriptor leaks. Disable unnecessary modules in the configuration file to reduce the overall usage of file descriptors.

Use Efficient File Handling

Opt for efficient file-handling methods in your web application code. Close file handles promptly after usage to prevent leaks.
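The classic leak is an exception thrown between opening a handle and closing it, so the close never runs. In Python, the with-statement removes that gap entirely; a minimal sketch:

```python
import tempfile

# A bare open()/close() pair leaks the descriptor if an exception occurs
# between the two calls. The with-statement closes the file
# deterministically as soon as the block exits, even on an error path.
with tempfile.TemporaryFile() as f:
    f.write(b"request body")
print(f.closed)
```

Most languages have an equivalent construct (try-with-resources in Java, RAII in C++, defer in Go); using it consistently keeps descriptor usage proportional to concurrent work rather than to uptime.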

RECOMMENDED READING: How to load various modules in the Apache server

Monitor and Analyze

Utilize server monitoring tools to keep an eye on Apache’s file descriptor usage. Tools like lsof (list open files) can help identify which files or sockets are consuming the most descriptors.

RECOMMENDED READING: How to Analyze Unusual Processes in Linux Systems?

Consider Resource Limits in Virtual Hosts

If you’re hosting multiple websites using Apache virtual hosts, set appropriate resource limits for each virtual host to prevent one site from exhausting all available file descriptors.

RECOMMENDED READING: How to Configure Resource Limits in Apache Virtual Hosts


Resolving the Apache “Too Many Open Files” error requires a combination of system-level adjustments, configuration optimizations, and vigilant monitoring. By understanding the root causes and following the steps outlined in this guide, you can effectively mitigate this issue and ensure smooth operation of your Apache web server.
