What does "ulimit -s unlimited" do?

There are understandably many related questions on stack allocation:

What and where are the stack and heap?

Why is there a limit on the stack size?

Size of stack and heap memory

However, on various *nix machines I can issue the bash command

ulimit -s unlimited

or the csh command

set stacksize unlimited

How does this change how programs are executed? Are there any impacts on program or system performance (e.g., why wouldn't this be the default)?

In case more system details are relevant, I'm mostly concerned with programs compiled with GCC on Linux running on x86_64 hardware.

Hypostasize answered 23/1, 2013 at 2:39 Comment(0)

When you call a function, a new stack frame (a private "namespace" for that call) is allocated on the stack. That's how functions can have local variables. As functions call functions, which in turn call functions, more and more stack space is consumed to maintain this deep chain of frames.

To curb programs from using massive amounts of stack space, a limit is usually put in place via ulimit -s. If we remove that limit via ulimit -s unlimited, our programs can keep gobbling up RAM for their ever-growing stack until eventually the system runs out of memory entirely.

// Infinite recursion: each call adds another stack frame. Compile with
// gcc -O0 so the tail call is not optimized into a loop; run with
// "ulimit -s unlimited" and it keeps consuming memory until the system runs out.
int eat_stack_space(void) { return eat_stack_space(); }

Usually, using a ton of stack space is accidental or a symptom of very deep recursion that probably should not be relying so much on the stack. Thus the stack limit.

The impact on performance is minor but does exist. Using the time command, I found that eliminating the stack limit shaved a few fractions of a second off the runtime (at least on 64-bit Ubuntu).
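
For context (not from the original answer): the shell's ulimit -s is a front end for the kernel's RLIMIT_STACK resource limit, which a process can also inspect and raise on its own with getrlimit()/setrlimit(). A minimal sketch, assuming Linux with glibc:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current stack limits; getrlimit() reports bytes,
     * while "ulimit -s" shows KiB. */
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft: %llu  hard: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Raise the soft limit to the hard limit -- roughly what an unprivileged
     * "ulimit -s hard" does; if the hard limit is RLIM_INFINITY, this is the
     * in-process equivalent of "ulimit -s unlimited". */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}

Run after "ulimit -s unlimited", this reports RLIM_INFINITY for both values.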

Presumably answered 23/1, 2013 at 3:05 Comment(9)
With gcc-4.7 -O3, your smash_the_stack example, being tail-recursive, is translated to an endless loop without any calls. – Imprescriptible
@BasileStarynkevitch Compilers are so smart these days. I altered the example to make it harder for gcc to optimize away, sorry about that! – Presumably
Even the improved example with a printf before the recursive call is compiled to a loop with gcc-4.7 -O3. – Imprescriptible
@BasileStarynkevitch Damn. I think the point is made pretty clearly. If you want gcc not to optimize away the stack smashing for educational experimentation, just compile it with gcc -O0. – Presumably
No, the good way is to avoid tail recursion. Put some useful code before and after the recursive call. – Imprescriptible
@BasileStarynkevitch I tested this code with gcc 4.4 (I do not have 4.7 on this system) and it is not compiled to a loop. – Presumably
@Maxwell Two mistakes: the stack size limit has nothing to do with preventing the "whole system" from crashing. RAM is RAM; it's the kernel's job to decide how to map it for stack or heap. Making a 100 GB stack is no more harmful than allocating 100 GB on the heap: the operations will either fail (sbrk or exec fails), or there'll be an overcommit and processes will be killed when you use the memory, until the system can honour its memory commitments again. In either case, the integrity of the whole system is safe. There's nothing a process can do to defeat the kernel. – Shepperd
@Maxwell Secondly, exceeding the stack size is a completely different problem from stack smashing. When a stack overflow happens, the kernel kills the process and that's that. Nothing is written to the stack that shouldn't be, and no harm results (apart from process termination). – Shepperd
This answer is plain wrong. Each process on Linux has its own stack; there is no such thing as system stack space. The stack grows downwards on x86, and an overflow occurs when the top of the stack (physically the bottom of the stack memory) hits the pre-set limit or meets another memory mapping. Stack buffer overflows occur in the opposite direction. It would be hard to overwrite the return address if the stack grew upwards instead. – Anachronous
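
A quick way to observe the downward growth described in the last comment is to print the address of a local variable at a few call depths (a small illustrative sketch, not from the thread):

#include <stdio.h>

/* On x86-64 Linux the printed addresses decrease as the call depth grows,
 * i.e. each new frame is placed below the previous one. */
void show_frame(int depth)
{
    int local;
    printf("depth %d: &local = %p\n", depth, (void *)&local);
    if (depth < 3)
        show_frame(depth + 1);
}

int main(void)
{
    show_frame(0);
    return 0;
}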

ulimit -s unlimited lets the stack grow without limit.

This may prevent your program from crashing if it relies on deep recursion, especially if it is not tail-recursive (compilers can "optimize" tail calls into loops) and the recursion depth is large.
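
As an illustration (a sketch, not from the original answer): a non-tail-recursive function keeps every frame alive until the deepest call returns, so a large enough depth will overflow a typical 8 MiB default stack but can complete after ulimit -s unlimited. Compile without optimization (gcc -O0) so the compiler does not turn the recursion into a loop; the depth of ten million is an arbitrary figure chosen to exceed the default limit.

#include <stdio.h>

/* Not tail-recursive: the addition happens after the recursive call returns,
 * so every frame stays on the stack until the recursion bottoms out. */
long depth_sum(long n)
{
    if (n == 0)
        return 0;
    return n + depth_sum(n - 1);
}

int main(void)
{
    /* Typically segfaults under the default stack limit; usually completes
     * after "ulimit -s unlimited" (depth chosen for illustration only). */
    printf("%ld\n", depth_sum(10000000L));
    return 0;
}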

Elinoreeliot answered 14/2, 2017 at 15:32 Comment(3)
Why not make "unlimited" the default stack size? My use case doesn't involve recursion, but rather old Fortran programs with big static arrays that exceed the default on most systems. – Hypostasize
Because a completely buggy program could crash your system. This option should be used only if you trust the program not to eat all available memory. – Drill
A buggy program can also keep allocating from the heap (malloc) and crash/freeze the system, and systems don't typically impose heap limits. – Elinoreeliot
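
To make the "big arrays" case from the comment above concrete, here is a minimal C sketch (not from the thread; the 20 MB size simply mirrors the commenter's test). A large automatic (local) array lives on the stack, so it blows past a typical 8 MiB default limit but works once the limit is raised or removed:

#include <stdio.h>
#include <string.h>

void use_big_local_array(void)
{
    /* A 20 MB automatic array is allocated on the stack, well past the common
     * 8 MiB default; making it static or heap-allocated would avoid this. */
    char buf[20 * 1024 * 1024];

    memset(buf, 1, sizeof buf);   /* touch the pages so the overflow actually happens */
    printf("last byte: %d\n", buf[sizeof buf - 1]);
}

int main(void)
{
    use_big_local_array();
    return 0;
}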

Stack size can indeed be unlimited. _STK_LIM is the default, and _STK_LIM_MAX differs per architecture, as can be seen in include/asm-generic/resource.h:

/*
 * RLIMIT_STACK default maximum - some architectures override it:
 */
#ifndef _STK_LIM_MAX
# define _STK_LIM_MAX           RLIM_INFINITY
#endif

As this shows, the generic value is infinite, where RLIM_INFINITY is, again, in the generic case defined as:

/*
 * SuS says limits have to be unsigned.
 * Which makes a ton more sense anyway.
 *
 * Some architectures override this (for compatibility reasons):
 */
#ifndef RLIM_INFINITY
# define RLIM_INFINITY          (~0UL)
#endif

So I guess the real answer is: the stack size CAN be capped by some architectures, in which case an unlimited stack size will mean whatever _STK_LIM_MAX is defined to be, and where that is RLIM_INFINITY it really is unlimited. For details on what it means to set the stack size to unlimited and what implications that has, refer to the other answer; it's way better than mine.
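
To see which case applies on a given machine, a process can compare its own RLIMIT_STACK soft limit against RLIM_INFINITY; a small sketch, assuming Linux with glibc:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    if (rl.rlim_cur == RLIM_INFINITY)
        printf("stack soft limit: unlimited (RLIM_INFINITY)\n");
    else
        printf("stack soft limit: %llu KiB\n",
               (unsigned long long)rl.rlim_cur / 1024);
    return 0;
}

Under "ulimit -s unlimited" this prints the first line; otherwise it prints the soft limit in KiB (8192 on many distributions).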

Leanora answered 23/1, 2013 at 3:02 Comment(3)
That doesn't seem to be true. On my system the value in linux/resource.h is 10*1024*1024. This is also the value printed by "ulimit -s". I compiled a test program with a 20 MB static array, and it segfaults. However, if I issue the command "ulimit -s unlimited" it does not crash. – Hypostasize
Hmm, that's interesting. I was under the impression that it wouldn't go over that limit. – Leanora
@BrianHawkins: mea culpa, I did some more research and adjusted the answer. If this info is irrelevant I can remove the whole answer altogether. – Leanora
