How to increase ulimit on Amazon EC2 instance?

After SSH'ing into an EC2 instance running the Amazon Linux AMI, I tried:

ulimit -n 20000

...and got the following error:

-bash: ulimit: open files: cannot modify limit: Operation not permitted

However, the shell allows me to decrease this number, for the current session only.

Is there any way to increase the ulimit on an EC2 instance (permanently)?

Lyrebird answered 5/7, 2012 at 10:12 Comment(0)

In fact, changing values through the ulimit command only applies to the current shell session. If you want to permanently set a new limit, you must edit the /etc/security/limits.conf file and set your hard and soft limits. Here's an example:

# <domain> <type> <item>  <value>
    *       soft  nofile  20000
    *       hard  nofile  20000

Save the file, log out, log back in, and test the configuration with the ulimit -n command. Hope it helps.
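As a quick sanity check before and after editing, note that an unprivileged shell can always raise its own soft limit up to the hard ceiling; only raising the hard limit needs root. A minimal sketch (the exact numbers depend on your instance):

```shell
# Show the current soft and hard limits on open files
ulimit -Sn
ulimit -Hn
# Raise the soft limit up to the hard ceiling -- allowed without root;
# raising the hard limit itself requires root (CAP_SYS_RESOURCE)
ulimit -n "$(ulimit -Hn)"
ulimit -n
```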

P.S. 1: Keep the following in mind:

  • Soft limit: value that the kernel enforces for the corresponding resource.
  • Hard limit: works as a ceiling for the soft limit.

P.S. 2: Additional files in /etc/security/limits.d/ might affect what is configured in limits.conf.
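For illustration, a drop-in file could carry the same entries as the example above (the file name 90-nofile.conf here is arbitrary; any name ending in .conf works):

    # /etc/security/limits.d/90-nofile.conf (hypothetical name)
    *  soft  nofile  20000
    *  hard  nofile  20000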

Regeneration answered 5/7, 2012 at 13:23 Comment(8)
Re - P.S. 2 - if you wish to change these values, it's recommended to add a file under limits.d, as those take priority. Don't edit limits.conf directly is my suggestion.Mimi
Are you sure you need to reboot? I think logging out and back in will suffice.Peru
One can use "ulimit -n" to see only the number of open files allowed (as opposed to "ulimit -a", which lists all limits)Inesinescapable
What is the filename I need to use in limits.d? Is limits.conf ok?Herewith
@Herewith yes, it's ok. You can choose any regular file name, as long as the extension is .confRegeneration
mongodb has official documentation for this issue now: docs.mongodb.com/ecosystem/platforms/amazon-ec2Compulsion
Thanks a lot! The ELK docker configuration does also require the ulimit to be increased. I got stuck because I've had only set the soft limit before I read your answer.Dullish
Be warned, I've locked myself out of SSHing into an EC2 instance this way. I set almost everything to unlimited, so perhaps that was too aggressive. If this does happen, you can always delete your /etc/security/limits.d/whatever file using the User Data feature on boot up.Aragonite

Thank you for the answer. For me, just updating /etc/security/limits.conf wasn't enough: only the 'open files' limit (ulimit -n) was getting updated, while nproc was not. After updating /etc/security/limits.d/whateverfile, nproc ("ulimit -u") also got updated.

Steps:

  • sudo vi /etc/security/limits.d/whateverfile
  • Update the limits set for nproc/nofile
  • sudo vi /etc/security/limits.conf and add:

    *  soft  nproc  65535
    *  hard  nproc  65535
    *  soft  nofile 65535
    *  hard  nofile 65535

  • Reboot the machine: sudo reboot
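After the reboot, both limits can be verified from a fresh shell (a quick check; the values shown are whatever the session actually received):

```shell
# nproc: max user processes; nofile: max open file descriptors
ulimit -u
ulimit -n
```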

P.S. I was not able to add it as a comment, so had to post as an answer.

Surfactant answered 25/10, 2019 at 19:55 Comment(1)
Thanks for your answer. I had to set the limits both in limits.conf and limits.d/whateverfile for the ulimit to properly take effect. I did this on an Ubuntu 18.04 AMI on an EC2 instance.Pi

I don't have enough rep points to comment, so sorry for the fresh reply, but maybe this will keep someone from wasting an hour.

Viccari's answer finally solved this headache for me. Every other source tells you to edit the limits.conf file, and if that doesn't work, to add

session   required    pam_limits.so

to the /etc/pam.d/common-session file

DO NOT DO THIS!

I'm running an Ubuntu 18.04.5 EC2 instance, and this locked me out of SSH entirely. I could log in, but as soon as it was about to drop me into a prompt, it dropped my connection (I even saw all the welcome messages and such). Verbose SSH output showed this as the last error:

fd 1 is not O_NONBLOCK

and I couldn't find an answer to what that meant. So, after shutting down the instance, waiting about an hour to snapshot the volume, and then mounting it to another running instance, I removed the edit to the common-session file and bam, SSH login worked again.

The fix that worked for me was looking for files in the /etc/security/limits.d/ folder, and editing those.

(and no, I did not need to reboot to get the new limits, just log out and back in)

Tetany answered 4/2, 2021 at 1:25 Comment(0)

Maybe useful for someone in the future. My problem was that I had a long-running Jupyter notebook (the same would apply to a Python session, or a Python script that dropped into pdb), and then I could not pickle some results into a file because "too many files were open".

My solution (actually GPT helped me to find this) was:

  1. Get the PID of the running Python process:

     import os
     pid = os.getpid()
     print(pid)

  2. Raise the limit for that process from a shell:

     $ prlimit --pid <PID> --nofile=10000:10000

Set 10000 to whatever "ulimit -n" currently gives you plus some headroom (e.g. +10); that was enough for my use case.

In this case, I did not run into permissions issues.
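An alternative to prlimit, when you control the Python code, is the standard-library resource module: the process can raise its own soft limit up to the hard ceiling without extra privileges. A sketch (whether the hard ceiling is high enough depends on the system):

```python
import resource

# Read the current soft/hard limits on open file descriptors
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("before:", soft, hard)

# Raise the soft limit to the hard ceiling; skipped if the hard limit
# is reported as unlimited, since the kernel caps nofile at fs.nr_open
if hard != resource.RLIM_INFINITY:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

print("after:", resource.getrlimit(resource.RLIMIT_NOFILE))
```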

Sky answered 12/7, 2023 at 23:10 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.