How to change the stack size using ulimit or per process on Mac OS X for a C or Ruby program?
It seems that the recommended way to set the stack size for a C or Ruby program (which uses the C stack) is via ulimit in the Bash shell. But

$ ulimit -s
8192

$ ulimit -s 16384
-bash: ulimit: stack size: cannot modify limit: Operation not permitted

and sudo doesn't help either. Is there a way to set it to 16MB, 32MB, or 64MB? I'd also think there should be a way to set it per program invocation instead of as a system-wide parameter.

Right now 8192 presumably means 8MB, which is quite small compared to how much memory a process can use, sometimes as much as 2GB of RAM.

(Update: ulimit -a shows all the current limits.)

(Update 2: it actually seems that ulimit -s &lt;value&gt; is per shell, and setting it the first time usually works. The problem is setting it a second time, which may return an error.)

Dimorph answered 6/11, 2012 at 5:37 Comment(2)
I wonder if this error is related to the "hard limit" vs. "soft limit" thing with ulimit.Ola
The ulimit stack size can only be lowered once set. I've tried to answer everything; let me know if you have any other questions.Leary

Apparently there is a hard limit on the stack size for Mac OS X, per http://lists.apple.com/archives/scitech/2004/Oct/msg00124.html (granted, this is quite old, and I'm not sure whether it's still true). The hard limit is 65532 KB, or about 64 MB; to set the stack to it, simply call ulimit -s hard.

I did some tests on Snow Leopard (10.6.8), and it does seem to be true.

$ ulimit -a
...
stack size              (kbytes, -s) 8192
...
$ ulimit -s 65533
-bash: ulimit: stack size: cannot modify limit: Operation not permitted
$ ulimit -s 65532
$

I also found this: http://linuxtoosx.blogspot.com/2010/10/stack-overflow-increasing-stack-limit.html, though I haven't tested it, so I can't really say much about it.

When applications consume gigabytes of memory, that is usually taken from the heap. The stack is usually reserved for local automatic variables, which exist for a relatively short time, roughly the lifespan of the function call; the heap is where most persistent data lives.

Here is a quick tutorial:

#include <stdlib.h>

#define NUMBER_OF_BYTES 10000000 // about 10 MB
void test()
{
   char stack_data[NUMBER_OF_BYTES];          // allocated on the stack
   char *heap_data = malloc(NUMBER_OF_BYTES); // the pointer (heap_data) lives on the stack; the actual data lives on the heap
}

int main()
{   
    test(); 
    // At this point stack_data and heap_data have gone out of scope,
    // but the block allocated by malloc() persists (it is never freed).
    // Depending on the calling convention, either main or test is responsible
    // for resetting the stack; with most compilers, including gcc, the caller (main) is.

    return 0;
}

$ ulimit -a
...
stack size              (kbytes, -s) 8192
...
$ gcc m.c
$ ./a.out
Segmentation fault
$ ulimit -s hard
$ ./a.out
$

ulimit is only temporary; you would have to run it every time, or add it to the corresponding bash startup script to set it automatically.

Once ulimit is set, it can only be lowered, never raised.

Leary answered 6/11, 2012 at 23:59 Comment(4)
Yes, if applications consume gigabytes of memory, they should take space from the heap, not from the stack. It's not reasonable to allocate huge objects or large arrays on the stack. If the app wanted to use 2GB of RAM as stack, what size of memory space would be left for the heap?Misogamy
@Misogamy not sure what you mean. Fundamentally the OS is in charge of memory, whether we call it stack or heap, so what happens is OS-dependent, and some memory schemes are quite complex. In Linux we have virtual memory mapped through a page table containing pages, some of which may be invalid; as such the OS doesn't really allocate 2GB of stack unless it actually needs to. Instead you'll get page faults causing the OS to allocate a new page; of course, if there aren't any more free pages, it may halt your program or crash.Leary
I see your point. The OS doesn't really allocate 2GB if you just specify the size but the app doesn't use up to 2GB. The OS manages memory in pages and maps real pages on demand. If the program crashed due to insufficient stack size, it definitely means the app requires more stack. Still, even if an app must run with as much memory as possible, say 2GB, I think a huge stack doesn't make sense; the stack is not like the heap, from which a process can use as much as 2GB of RAM. That's why many desktops and servers have 4GB, 8GB, or more memory, yet each process still has only a 4MB/8MB stack by default.Misogamy
Rarely will the OS tell you anything interesting besides the ubiquitous segmentation fault when you've exceeded your stack or memory resources. This is because neither the stack nor the heap is actually in contiguous physical memory; even if, to the program, the stack looks continuous, in reality it's all over the place. As for the small default stack, there are two reasons for it: 1) on average most programs don't use that much stack space; 2) it's a safeguard against infinite recursion; if the default stack size were unlimited, a single runaway recursion in any program could consume all the memory.Leary

To my mind the accepted answer is not totally right and leads to misapprehension; more specifically, the last statement is not true.

Once ulimit is set, it can only be lowered, never raised.

There are indeed soft (displayable with ulimit -s or ulimit -Ss) and hard (displayable with ulimit -Hs) limits. Setting the limit through ulimit -s affects both the soft and hard values.

Once the hard limit is set it can only be lowered, never raised, but the soft limit can be lowered or raised, provided the value stays at or below the hard limit.

This will work:

# base values
$ ulimit -s
100
$ ulimit -Hs
100
$ ulimit -Ss
100
# lower soft limit only
$ ulimit -Ss 50
$ ulimit -s
50
$ ulimit -Hs
100
$ ulimit -Ss
50
# raise soft limit only
$ ulimit -Ss 100
$ ulimit -s
100
$ ulimit -Hs
100
$ ulimit -Ss
100
# lower soft and hard limit
$ ulimit -s 50
$ ulimit -s
50
$ ulimit -Hs
50
$ ulimit -Ss
50
# then impossible to raise soft limit due to hard limit
$ ulimit -s 100
-bash: ulimit: stack size: cannot modify limit: Operation not permitted
$ ulimit -Ss 100
-bash: ulimit: stack size: cannot modify limit: Invalid argument
Heliopolis answered 16/3, 2016 at 10:42 Comment(2)
In bash you can't increase the hard limit, as you say, but in zsh you can increase it, just not beyond the original hard limit. E.g., suppose your hard limit is X: you can decrease it to Y, run something (e.g. a second copy of zsh), and then increase it back to X. But the second copy will not be able to exceed Y.Punch
In addition, some apps/services ship in such a way that they are not capable of raising the soft limit even if access doesn't block it. It's safer to assume the soft limit is the actual limit for your process. The stack limit is applied per process; the only one applied per user/session is nproc from the ulimit parameter list.Butterfly

The system default stack size varies from kernel version to kernel version. My 10.7's is 16384, so ulimit -s 16384 is accepted by my Mac. You can try sysctl kern.stack_size; it shows the read-only stack size (mine is 16384).
See this technical article, http://developer.apple.com/library/mac/#qa/qa1419/_index.html, on how to change the default stack size for a C program. For Ruby, because it's a scripting language, you have to enlarge the stack size when linking the Ruby interpreter. Except for very deep function calls or recursion, or very large arrays and objects allocated on the stack, your program should not need huge stack space. Instead, heap or dynamic allocation can use up to 2GB of RAM as you wish.

Misogamy answered 6/11, 2012 at 5:58 Comment(3)
I also wonder why it must be done at link time rather than at execution time. And if Ruby actually creates a new thread with a given stack size to run the Ruby program, then maybe Ruby could set the stack size via a command line such as ruby --stack-size 16384 foo.rbDimorph
Yes. My OS accepts ulimit -s 32767 (I think the default for ulimit is unlimited, but the OS kernel has its own default size). But once you set the value, you cannot set a value larger than the previous one; otherwise the "Operation not permitted" error appears.Misogamy
Setting the default stack size at link time is reasonable, because when the OS loads the executable, the kernel must have everything prepared before jumping into the program. The link-time option records the stack size in the Mach-O executable file format, and the OS/kernel reads it to set up a different stack size for that executable's environment. Ruby can create different stack sizes for its new threads, but the first, default stack that runs Ruby itself is determined by the OS and the link-time option.Misogamy

I found that using /bin/zsh instead of /bin/sh made this error go away.

For me, the error was occurring in a shell script that called ulimit -s unlimited. When the script was interpreted by /bin/sh (i.e., had #!/bin/sh as the first line of the script file), it barfed with this error. In contrast, when changing it to use zsh, everything seemed to work fine. zsh was smart enough to interpret unlimited as "give me the largest limit the operating system will let me have", and everything worked as you'd want it to.

Mensch answered 29/10, 2013 at 1:57 Comment(2)
What you say seems bizarre. Are you sure?Ddt
@DavidJames, it does seem bizarre to me too, and I have no explanation for why that should be, so this answer could be entirely wrong. I no longer remember how to reproduce this or in what context I ran into this, so nope, I'm not sure. Sorry that this isn't very helpful.Mensch

All the limits that the ulimit builtin controls are actually implemented in the OS kernel, so you should consult the C interface documentation for the whole system. Here's Apple's documentation for setrlimit(): https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man2/setrlimit.2.html

(Note that the path of that document seems to say iPhoneOS but the content still speaks about "Mac OS X". If you have suitable documentation installed locally, running man setrlimit in your terminal should emit the up-to-date documentation.)

Newly created processes inherit the limits from the fork() parent or the previous process that executes exec().

Sexology answered 23/8, 2021 at 7:32 Comment(0)
