How to get a stack trace of all running Ruby threads on Passenger
I have a production ruby sinatra app running on nginx/passenger, and I frequently see requests get inexplicably stalled. I wrote a script to call passenger-status on my cluster of machines every ten seconds and plot the results on a graph. This is what I see:

[Graph: passenger-status queue depth over time, averaged across the cluster; the global queue wait repeatedly spikes to 60]

The blue line shows the global queue wait spiking constantly to 60. This is an average across 4 machines, so when the blue line hits 60, it means every machine is maxed out. I have passenger_max_pool_size set to 20, so the queue is reaching 3x the max pool size, and presumably subsequent requests are being dropped.
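A polling script along these lines might look like the sketch below. The status line it matches ("Waiting on global queue: N") is an assumption; the exact wording of passenger-status output varies between Passenger versions.

```ruby
# Extract the global queue depth from passenger-status output.
# Assumes a line of the form "Waiting on global queue: 12";
# adjust the regex for your Passenger version.
def parse_queue_depth(status_output)
  match = status_output.match(/Waiting on global queue:\s*(\d+)/)
  match && match[1].to_i
end

# Poll every ten seconds and emit "timestamp depth" lines suitable
# for graphing. Defined but not invoked here; call poll_queue to start.
def poll_queue(interval = 10)
  loop do
    depth = parse_queue_depth(`passenger-status 2>/dev/null`)
    puts "#{Time.now.to_i} #{depth || 'n/a'}"
    sleep interval
  end
end
```

Piping those lines into a file gives a time series you can feed to gnuplot or a spreadsheet.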

My app depends on two key external resources - an Amazon RDS mysql backend and a Redis instance. Perhaps one of these is periodically becoming slow or unresponsive and thereby causing this behavior?

Can anyone advise me on how to get a stack trace to see if the bottleneck here is Amazon RDS, Redis, or something else?

Thanks!

Pb answered 31/1, 2011 at 23:30 Comment(0)
I figured it out -- I had a SAVE config parameter in Redis that was firing once a minute. Evidently Redis's fork-and-save operation was blocking my app. I changed the config parameter to "3600 1", meaning the database is saved only once an hour, which is fine because I am using Redis as a cache (the data is persisted in MySQL).
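In redis.conf terms, the change is from the stock snapshot schedule to a single hourly rule (the commented-out lines are the defaults shipped in a stock redis.conf of that era):

```
# Before: default snapshotting, which triggers a fork-and-save
# frequently under write load.
# save 900 1
# save 300 10
# save 60 10000

# After: snapshot at most once an hour (if at least 1 key changed).
save 3600 1
```

The same setting can be changed at runtime with `CONFIG SET save "3600 1"`, which avoids a restart.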

Pb answered 31/1, 2011 at 23:43 Comment(2)
Can I know how long the waiting was? AFAIK, the auto-save runs in the background and should only block while it's copying the memory pages, which should be 300 ms tops.Cavite
Empirically, it would appear that my Redis node would block for several seconds; I saw around 5-10 seconds of blocking in any given 60-second period. I am going to try spinning up a slave and using the slave to save.Pb
To answer your original question, it is possible to get stack traces for all the running Ruby processes that Passenger is shepherding. Send SIGQUIT to each one, and they'll dump all their thread backtraces into the Apache/nginx log file, e.g.:

https://gist.github.com/rdp/905759f88134229c2969b9f242188615
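Passenger installs that SIGQUIT handler for you, but the underlying mechanism can be sketched in plain Ruby (the helper name `dump_all_backtraces` is made up for illustration):

```ruby
# Dump the backtrace of every live thread -- roughly what Passenger's
# SIGQUIT handler writes to the web server's error log.
def dump_all_backtraces(io = $stderr)
  Thread.list.each do |thread|
    io.puts "Thread #{thread.inspect}:"
    (thread.backtrace || []).each { |frame| io.puts "  #{frame}" }
    io.puts
  end
end

# Install the trap; `kill -QUIT <pid>` then triggers the dump
# without killing the process.
Signal.trap("QUIT") { dump_all_backtraces }
```

Reading the resulting backtraces tells you whether the stalled threads are sitting inside the mysql driver, the redis client, or somewhere else entirely.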

Adiel answered 1/4, 2016 at 12:42 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.