I fixed this issue by setting the limits for all users in the file:
$ cat /etc/security/limits.d/custom.conf
* hard nofile 550000
* soft nofile 550000
REBOOT THE SERVER after setting the limits (the limits are applied to new login sessions).
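To confirm the new limits actually took effect after the reboot, you can check the open-file limit from a shell. The `hdfs` user below is only a hypothetical example; substitute whichever user your services run as:

```shell
# Soft limit on open files for the current session
ulimit -Sn

# Hard limit on open files for the current session
ulimit -Hn

# Check the limit for another user by starting a login shell as them
# (example user name, adjust for your setup):
# su - hdfs -c 'ulimit -n'
```

If these still show the old values, the limits file was not picked up, often because a user-specific file overrides it, as described below.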
VERY IMPORTANT:
The /etc/security/limits.d/ folder contains user-specific limits; in my case, limits related to Hadoop 2 (Cloudera). These user-specific limits override the global limits, so if your limits are not being applied, be sure to check both the files in the folder /etc/security/limits.d/ and the file /etc/security/limits.conf.
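For reference, a user-specific entry in that folder uses the same format as the global file; the user name, file name, and values here are hypothetical examples, not actual Cloudera defaults:

$ cat /etc/security/limits.d/hdfs.conf
hdfs hard nofile 32768
hdfs soft nofile 32768

An entry like this wins over the global "*" entry for that user, which is why a matching file here can silently undo the global setting.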
CAUTION:
Setting user-specific limits is the preferred approach; the global (*) limit should generally be avoided. In my case it was an isolated environment, and I just needed to rule out the file limit as the cause of the problem in my experiment.
Hope this saves someone some hair, as I spent too much time pulling mine out chunk by chunk!