I am trying to figure out a situation where PHP is not consuming a lot of memory, yet causes a very high Committed_AS reading.

Take this munin memory report for example:

Munin Memory

Once I start my queue workers (10~30 of them), committed memory goes through the roof. We have 2G RAM + 2G swap on this VPS instance, and so far around 600M of memory is unused (~30% free).

If I understand Committed_AS correctly, it is meant to be a 99.9% guarantee of no out-of-memory issues given the current workload, and it seems to suggest we would need to triple our VPS memory just to be safe.

I tried reducing the number of queues from 30 to around 10, but as you can see, the green line is still quite high.

As for the setup: Laravel 4.1 with PHP 5.5, opcache enabled. The upstart script we use spawns instances like the following:

instance $N

exec start-stop-daemon --start --make-pidfile --pidfile /var/run/laravel_queue.$N.pid --chuid $USER --chdir $HOME --exec /usr/bin/php artisan queue:listen -- --queue=$N --timeout=60 --delay=120 --sleep=30 --memory=32 --tries=3 >> /var/log/laravel_queue.$N.log 2>&1

I have seen many cases where high swap usage implies insufficient memory, but our swap usage is low, so I am not sure what troubleshooting step is appropriate here.

PS: We did not have this problem before the Laravel 4.1 and VPS upgrades, as the image below shows.

Old Munin Memory

Maybe I should rephrase my question as: how exactly is Committed_AS calculated, and how does PHP factor into it?
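For what it's worth, Committed_AS is read straight from /proc/meminfo: it is the kernel's running total of all virtual address space processes have been promised, whether or not they ever touch it, and it is only enforced against CommitLimit under strict overcommit (vm.overcommit_memory=2). A minimal sketch of comparing the two; the meminfo values below are made up for illustration, not from this server:

```shell
# On a live box you would read the real values with:
#   grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
# Here a made-up excerpt is piped in so the snippet is self-contained.
printf 'CommitLimit:     3097004 kB\nCommitted_AS:    6182664 kB\n' |
awk '/Committed_AS/ {as=$2} /CommitLimit/ {lim=$2}
     END {printf "Committed_AS is %.1f%% of CommitLimit\n", 100*as/lim}'
```

Under the default heuristic overcommit (vm.overcommit_memory=0), Committed_AS exceeding CommitLimit is not by itself fatal; it is an upper bound on what would be needed if every process used everything it asked for.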


Updated 2014.1.29:

I have a theory about this problem: since the Laravel queue worker actually uses PHP's sleep() while waiting for a new job from the queue (in my case, beanstalkd), the high Committed_AS estimate may be due to the relatively low workload combined with relatively high memory consumption.

This makes sense if Committed_AS ≈ avg. memory usage / avg. workload. While PHP sleep()s properly, little to no CPU is used, yet whatever memory the process holds remains reserved. The server ends up reasoning: you use this much memory (on average) even when load is minimal (on average), so you had better be prepared for higher load (but in this case, higher load does not mean a higher memory footprint).
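One way to probe the "reserved but unused" part of this theory is to compare virtual size (VSZ, which feeds commit accounting) against resident size (RSS, what is actually paged in) across the workers. A sketch using made-up numbers for three idle queue workers; on the real server you would feed it `ps -C php -o vsz=,rss=`:

```shell
# Sum virtual vs. resident memory across worker processes (values in kB).
# Live version:  ps -C php -o vsz=,rss= | awk '{v+=$1; r+=$2} END {...}'
printf '412304 48120\n410988 46912\n413552 47480\n' |
awk '{v+=$1; r+=$2}
     END {printf "virtual: %d kB, resident: %d kB\n", v, r}'
```

If virtual dwarfs resident like this, the workers are sleeping on a large reservation, which is exactly what would push Committed_AS up without touching swap.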

If anyone would like to test this theory, I will be happy to award the bounty to them.

Recommended Answer

I recently found the root cause of this high committed memory problem: the PHP 5.5 OPcache settings.

It turns out that setting opcache.memory_consumption = 256 causes each PHP process to reserve much more virtual memory (visible in the VIRT column of top), which in turn makes Munin's estimate of potential committed memory much higher.

The number of Laravel queue workers we run in the background only magnifies the problem.

By lowering opcache.memory_consumption to the recommended 128MB (we really weren't using all of those 256MB effectively), we cut the estimate in half. Coupled with a recent RAM upgrade on our server, the estimate now sits at around 3GB, which is much more reasonable and within our total RAM limit.
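For completeness, the fix amounts to one line in the opcache configuration; the file path below is distro-dependent and just an assumption, adjust to your setup:

```ini
; /etc/php5/mods-available/opcache.ini -- path varies by distro (assumed)
opcache.enable=1
; Was 256: each CLI worker reserves its own shared-memory arena as
; virtual memory, inflating Committed_AS per queue worker.
opcache.memory_consumption=128
```

Note that long-running queue workers only pick up the change after being restarted.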
