How to increase the inotify.max_user_watches and inotify.max_user_instances sysctls on a Linux host

Article Number: 000020048

Environment

Linux Host / Kubernetes Node

Situation

The sysctls fs.inotify.max_user_instances and fs.inotify.max_user_watches define, per user, the maximum number of inotify instances and the maximum number of file watches that can be created. When either limit is reached, processes may fail with errors such as:

  • ENOSPC: System limit for number of file watchers reached...
  • The configured user limit (128) on the number of inotify instances has been reached
  • The default defined inotify instances (128) has been reached

In a Kubernetes cluster, this behavior often results in failing Pods with logs containing the errors above. This article details how to check and increase these limits.

Resolution

1. Check current limits

To check the current inotify user instance limit, run:

cat /proc/sys/fs/inotify/max_user_instances

To check the current inotify user watch limit, run:

cat /proc/sys/fs/inotify/max_user_watches
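Both limits live under the same /proc/sys/fs/inotify directory, so a short loop (a convenience sketch, equivalent to the two cat commands above) can print them together:

```shell
# Print both inotify limits by reading them directly from /proc.
for f in max_user_instances max_user_watches; do
  printf '%s = %s\n' "$f" "$(cat /proc/sys/fs/inotify/$f)"
done
```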

2. Update the limits

Temporary - Applied immediately, lost on reboot

Run the following commands to increase the limits (using 8,192 and 524,288 as examples):

sudo sysctl fs.inotify.max_user_instances=8192
sudo sysctl fs.inotify.max_user_watches=524288

Permanent - Persistent across reboots

Add the following lines to /etc/sysctl.conf (or a dedicated file in /etc/sysctl.d/):

fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=524288

After saving the file, apply the changes by running:

sudo sysctl -p

Note that sysctl -p without arguments reloads only /etc/sysctl.conf. If you placed the settings in a file under /etc/sysctl.d/, run sudo sysctl --system (or sudo sysctl -p <path-to-file>) instead.
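As a sketch of the drop-in-file approach, the following writes the two settings to a dedicated file and reloads all sysctl configuration (the filename 99-inotify.conf is an example, not a required name):

```shell
# Write the settings to a dedicated drop-in file (example filename)
# rather than editing /etc/sysctl.conf directly.
printf 'fs.inotify.max_user_instances=8192\nfs.inotify.max_user_watches=524288\n' \
  | sudo tee /etc/sysctl.d/99-inotify.conf

# Reload every file under /etc/sysctl.d/ as well as /etc/sysctl.conf.
sudo sysctl --system
```

Using a drop-in file keeps the change isolated and easy to remove later, and avoids conflicts with distribution-managed edits to /etc/sysctl.conf.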

3. Validate the changes

You can verify the updates on the host by re-running the cat commands from Step 1.

To check if the updated values are reflected within a running pod, execute the following:

kubectl exec -it -n <pod-namespace> <pod-name> -- cat /proc/sys/fs/inotify/max_user_instances
kubectl exec -it -n <pod-namespace> <pod-name> -- cat /proc/sys/fs/inotify/max_user_watches

If the changes are not reflected inside the Pods, restart the Pods so they pick up the new host values, or reboot the host as a last resort.
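If the limits keep being exhausted even after raising them, it can help to identify which processes hold the most inotify instances. One common diagnostic (a sketch, assuming a standard /proc layout; each open inotify instance appears as an anon_inode:inotify file descriptor) is:

```shell
# Count open inotify file descriptors per PID, busiest first.
# Each matching fd corresponds to one inotify instance.
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null \
  | cut -d/ -f3 \
  | sort | uniq -c | sort -rn | head
```

The PIDs in the output can then be mapped to processes with ps -p <pid> to find the heaviest inotify consumers.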