PHP 5.5: under what circumstances will PHP cause very high committed memory?
I am trying to figure out a situation where PHP is not consuming a lot of memory but instead causes a very high Committed_AS result.

Take this munin memory report for example:

[munin memory graph]

As soon as I kick off our Laravel queue (10 ~ 30 workers), committed memory goes through the roof. We have 2G RAM + 2G swap on this VPS instance, and so far there is about 600M of unused memory (roughly 30% free).

If I understand Committed_AS correctly, it is meant to be a near-certain (99.9%) guarantee against out-of-memory conditions under the current workload, and it seems to suggest we would need to triple our VPS memory just to be safe.
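For reference, Committed_AS and the kernel's commit limit can be read straight from /proc — a Linux-specific sketch, not tied to any particular VPS:

```shell
# Committed_AS: total address space the kernel has promised to processes.
# CommitLimit: swap + overcommit_ratio% of RAM; it is only enforced when
# vm.overcommit_memory=2 (the default, 0, uses a looser heuristic).
grep -E '^(Committed_AS|CommitLimit)' /proc/meminfo
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio
```

With the default overcommit mode, Committed_AS exceeding CommitLimit is a warning sign rather than a hard failure, which is why high Committed_AS can coexist with plenty of free memory.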

I tried reducing the number of queues from 30 to around 10, but as you can see the green line is still quite high.

As for the setup: Laravel 4.1 with PHP 5.5, OPcache enabled. The upstart script we use spawns instances like the following:

instance $N

exec start-stop-daemon --start --make-pidfile --pidfile /var/run/laravel_queue.$N.pid --chuid $USER --chdir $HOME --exec /usr/bin/php artisan queue:listen -- --queue=$N --timeout=60 --delay=120 --sleep=30 --memory=32 --tries=3 >> /var/log/laravel_queue.$N.log 2>&1
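To see how much virtual memory those workers reserve (the figure that feeds Committed_AS) versus what they actually touch, something like this works — assuming the workers show up under the process name `php`:

```shell
# VSZ = virtual size in KB (reserved; this is what commit accounting sees).
# RSS = resident size in KB (actually occupying RAM). Sorted largest-first.
ps -C php -o pid,vsz,rss,args --sort=-vsz

# Rough total of reserved virtual memory across all workers, in MB.
ps -C php -o vsz= | awk '{sum += $1} END {printf "%.0f MB total VSZ\n", sum / 1024}'
```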

I have seen a lot of cases where high swap usage implies insufficient memory, but our swap usage is low, so I am not sure what troubleshooting step is appropriate here.

PS: we didn't have this problem prior to Laravel 4.1 and our VPS upgrade; here is an image to prove it.

[old munin memory graph]

Maybe I should rephrase my question as: how exactly is Committed_AS calculated, and how does PHP factor into it?


Updated 2014.1.29:

I have a theory about this problem: since the Laravel queue workers actually use PHP's sleep() while waiting for a new job from the queue (in my case beanstalkd), the high Committed_AS estimate may be due to a relatively low workload combined with relatively high memory consumption.

This makes sense if Committed_AS ~= avg. memory usage / avg. workload. While PHP sleep()s, little to no CPU is used, yet whatever memory it has consumed remains reserved. The result is the server reasoning: you use this much memory (on average) even when load is minimal (on average), so you should be prepared for even higher load (but in this case, higher load does not result in a higher memory footprint).
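One quick way to probe the theory on any Linux box — a sketch that borrows python3 purely as a convenient allocator, since the kernel's commit accounting doesn't care what language the process is written in:

```shell
# Committed_AS is charged when memory is mapped, not when CPU is used:
# a process that allocates a buffer and then sleeps still counts in full.
before=$(awk '/^Committed_AS:/ {print $2}' /proc/meminfo)
python3 -c 'b = bytearray(256 * 1024 * 1024); import time; time.sleep(5)' &
sleep 1   # give the allocator a moment to map its buffer
after=$(awk '/^Committed_AS:/ {print $2}' /proc/meminfo)
echo "Committed_AS grew by $(( (after - before) / 1024 )) MB while the process slept"
wait
```

If the figure grows by roughly the allocated size while the process is idle, that supports the idea that sleeping workers keep their full reservation charged against Committed_AS.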

If anyone would like to test this theory, I will be happy to award the bounty to them.

Motorcade answered 25/1, 2014 at 17:25 Comment(8)
You might be interested in this talk from last year's PHP UK conference which explains some parts of how memory is managed in PHP.Stearoptene
@Stearoptene that's a great talk, but it doesn't really point me in the right direction yet: I am already using a heavy PHP framework and need to figure out why it causes high Committed_AS instead of memory exhaustion or more I/O on swap.Motorcade
Yeah, I just thought it might give you, or someone, some useful background for explaining the specific situation in hand. I myself am not enough of an expert in either memory management or Laravel to diagnose the specific situation.Stearoptene
Very interesting question for me, since I have been running into pretty similar issues recently, but with PHP 5.3 and a home-made framework, without any explanation so far. Your graph implies you have upgraded from 1GB to 2GB of RAM, but what's the size of your swap partition?Blintz
@Blintz 2GB swap. If swap were used, it would show as red on the chart.Motorcade
If I were you, I would extend the bounty period.Picture
@KarmicDice ah, I wasn't online recently and didn't know there was a way to extend a bounty; any remedy for this one?Motorcade
I haven't tested the theory, but just an assumption, since L4.1 has made substantial changes to queueing jobs...Picture

I have recently found the root cause of this high committed memory problem: PHP 5.5 OPcache settings.

It turns out that setting opcache.memory_consumption = 256 causes each PHP process to reserve much more virtual memory (visible in the VIRT column of top), which in turn leads Munin to estimate the potential committed memory as much higher.

The number of Laravel queue workers we run in the background only exaggerates the problem.

By lowering opcache.memory_consumption to the recommended 128MB (we really weren't using all 256MB effectively), we cut the estimated value in half. Coupled with a recent RAM upgrade on our server, the estimate now sits at around 3GB, which is much more reasonable and within our total RAM limit.
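For anyone making the same change, the relevant php.ini line looks like this (the value is the one described above, not a universal recommendation):

```ini
; php.ini -- cap OPcache's shared memory segment at 128MB
opcache.memory_consumption=128
```

The effective value can be confirmed with `php -i | grep opcache.memory_consumption`; note that the CLI and FPM/Apache SAPIs may read different ini files, so check the one your workers actually use.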

Motorcade answered 2/5, 2014 at 5:53 Comment(1)
I should note that stumbling onto this older question pointed me in the right direction: #10354068Motorcade

Two things you need to understand about Committed_AS:

  1. It is an estimate.
  2. It indicates how much memory you would need in a worst-case scenario, plus the swap. It depends on your server's workload at the time: with a lower workload, Committed_AS will be lower, and vice versa.

If this wasn't an issue with the prior iteration of the framework's queue, and provided you haven't pushed any new code changes to production, then you will want to compare the two iterations. Maybe spin up another box and run some tests. You can also profile the application with xdebug or zend_debugger to find possible causes in the code itself. Another useful tool is strace.
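When comparing the two iterations, per-process numbers straight from /proc are a handy complement to those profilers (Linux-specific; substitute a worker's PID for `$$`, which here is just the current shell):

```shell
# VmSize: total virtual address space (what commit accounting charges);
# VmRSS: pages actually resident in RAM; VmSwap: pages pushed to swap.
grep -E '^(VmSize|VmRSS|VmSwap):' "/proc/$$/status"
```

A large gap between VmSize and VmRSS on each worker is exactly the pattern that inflates Committed_AS without showing up as real memory pressure.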

All the best, you're going to need it!

Lucindalucine answered 27/1, 2014 at 18:58 Comment(0)

Committed_AS is the amount of memory the kernel has actually promised to processes. The queues run independently and have nothing to do with PHP or Laravel as such. In addition to what Rijndael said, I recommend installing New Relic, which can help track down the problem.

Tip: I've noticed a huge reduction in server load with an NginX + HHVM combination. Give it a try.

Randolf answered 1/2, 2014 at 14:29 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.