httperf segmentation fault error on OS X 10.7.1

When I try to perform a load test using httperf with high request rate, I get the following error:

» httperf --client=0/1 --server=www.xxxxx.com --port=80 --uri=/ --send-buffer=4096 --recv-buffer=16384 --num-conns=200 --rate=30
httperf --client=0/1 --server=staging.truecar.com --port=80 --uri=/ --rate=30 --send-buffer=4096 --recv-buffer=16384 --num-conns=200 --num-calls=1
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
**Segmentation fault: 11**

The error occurs whenever --rate is greater than 15.

Versions:

httperf 0.9.0

OS X 10.7.1

Modla answered 8/9, 2011 at 15:58 Comment(5)
I see the same on OSX 10.6.8, with httperf 0.8.1 and 0.9.0Vitovitoria
I see this, even with the rate set > 1. It seems to run a little longer before segfaulting at 2, but 3 segfaults wicked fast.Antilogarithm
Check whether you're running out of memory.Mainstay
Your system is running out of file descriptors. IIRC, this happens with RPM packages using a way too small __FD_SETSIZE (like 1024). Afaik you'll need to recompile the limiting RPM packages (e.g. glibc, Apache, PHP, etc.) to increase __FD_SETSIZE, so I'd suggest migrating the question to Server Fault.Disease
I get this same issue on CentOS 6 x64 running Apache 2.2.15, but not Debian 6 x64 running Nginx 1.2.3, using httperf-0.9.0 on both. Open files limits are the same (1024) on both.Pandit

As the warning states, the number of connections to the HTTP server is exceeding the maximum number of open file descriptors allowed. It's likely that even though httperf limits itself to FD_SETSIZE, you're still running past the per-process open-file limit.
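
For context, here is a minimal C sketch (my own illustration, not httperf's actual code) of why descriptor numbers beyond FD_SETSIZE are dangerous for a select()-based program like httperf: fd_set is a fixed-size bitmap, so FD_SET() on a larger descriptor writes out of bounds, which is exactly the kind of memory corruption that can surface as "Segmentation fault: 11". That is also why httperf prints the warning and clamps itself to FD_SETSIZE.

/* Sketch only, assuming a select()-based client; fd 42 is a stand-in. */
#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    fd_set readfds;
    FD_ZERO(&readfds);

    printf("FD_SETSIZE = %d\n", FD_SETSIZE);   /* typically 1024 */

    int fd = 42;                /* stand-in for an open socket */
    if (fd >= FD_SETSIZE) {
        /* a select()-based program must refuse descriptors this large */
        fprintf(stderr, "fd %d does not fit in an fd_set\n", fd);
        return 1;
    }
    FD_SET(fd, &readfds);       /* safe only because fd < FD_SETSIZE */
    return 0;
}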

You can check the limit value with ulimit -a

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited

Try increasing the limit with ulimit -n <n>

$ ulimit -n 2048
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 2048
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited

This is common practice on large web servers and the like, as a socket is essentially just an open file-descriptor.
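
If it helps to see what ulimit -n actually changes, here is a small sketch (my own illustration, not part of httperf) using the POSIX getrlimit()/setrlimit() calls on RLIMIT_NOFILE; the 2048 value just mirrors the shell example above.

/* What `ulimit -n 2048` does under the hood: adjust the soft open-file
   limit via the POSIX rlimit API. setrlimit() only affects the calling
   process and whatever it spawns afterwards, which is why ulimit is a
   shell builtin rather than a separate program. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("open files: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* raise the soft limit, capped at the hard limit (2048 is an example) */
    rl.rlim_cur = (rl.rlim_max < 2048) ? rl.rlim_max : 2048;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}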

Beestings answered 16/12, 2011 at 23:33 Comment(4)
As @Tom van der Woerdt/@m_pahlevanzadeh pointed out, replace ulimit with limit if you're using csh rather than bash/kshBeestings
Thank you for the tips, but this doesn't fix the segmentation fault, and it is most probably not the root cause of the question. According to the httperf documentation, it is actually aware of the available file descriptors: it logs when descriptors are unavailable and reports them after a run. The program is not meant to crash just because it runs out of file descriptors.Fink
Basho has a convenient guide for raising the open file limit with steps for Lion. Basically, add limit maxfiles 16384 32768 to a file called /etc/launchd.conf (create it if missing). Reboot. Check new value with ulimit -a or launchctl limit. I still get a segfault, though.Coerce
Doesn't work. I already have the following set: open files (-n) 640000Coston

Try running it under gdb; the --args flag keeps gdb from treating httperf's options as its own:

$ gdb --args httperf --client=0/1 --server=staging.truecar.com \
--port=80 --uri=/ --rate=30 --send-buffer=4096 \
--recv-buffer=16384 --num-conns=200 --num-calls=1

This will invoke gdb and you should see a (gdb) prompt.

Then type run and press Enter.

If it crashes, type bt (backtrace). Investigate the output and/or share it here.

Pink answered 6/12, 2011 at 0:15 Comment(4)
I have the same problem as the original question. Here is the output of your suggested gdb run: gist.github.com/2990517Fink
IMHO, this could be another case where your system is running out of file descriptors. The other thing could be bad memory management in httperf. You could try to use sysbench instead.Pink
Probably this is a problem inside httperf. Sysbench is no use for me, since I want to test a webserver.Fink
Eventually I ended up using partly siege and mostly tsung.Fink

ksh and bash use ulimit, while csh uses the limit command.

Porcine answered 25/12, 2011 at 22:48 Comment(1)
You can also use the lsof command to list open files; it works on the following systems: AIX 5.3, FreeBSD 4.9 for x86-based systems, FreeBSD 7.0 and 8.0 for AMD64-based systems, Linux 2.1.72 and above for x86-based systems, and Solaris 9 and 10. @yesterdayPorcine
