The Situation
I am using Laravel queues to process a large number of media files. A single job is expected to take several minutes (say, up to an hour at most).
I am using Supervisor to run my queue, with 20 worker processes at a time. My Supervisor config file looks like this:
[program:duplitron-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/duplitron/artisan queue:listen database --timeout=0 --memory=500 --tries=1
autostart=true
autorestart=true
user=duplitron
numprocs=20
redirect_stderr=true
stdout_logfile=/var/www/duplitron/storage/logs/duplitron-worker.log
There are a few oddities that I don't know how to explain or correct:
- My jobs fairly consistently fail after running for 60 to 65 seconds.
- After being marked as failed, the jobs continue to run anyway. Eventually, they do finish successfully.
- When I run the failed task in isolation to find the cause of the issue it succeeds just fine.
I strongly suspect this is a timeout issue; however, I was under the impression that --timeout=0 would result in an infinite timeout.
The Question
How can I prevent this temporary "failure" job state? Are there other places where a queue timeout might be invoked that I'm not aware of?
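For anyone investigating: one other place a 60-second limit can come from, independent of the --timeout flag, is the queue connection itself. This is an assumption worth checking rather than a confirmed diagnosis, but for the database driver, config/queue.php carries a per-connection limit (named expire in older Laravel releases, with a default of 60 seconds, and retry_after in newer ones). If a worker has not deleted the job within that many seconds, the queue assumes it died and releases the job for a retry, which would match both the 60-65 second "failures" and the jobs that keep running and eventually succeed. A sketch of the relevant section:

```php
// config/queue.php (sketch; the exact key name depends on the Laravel version)
'connections' => [

    'database' => [
        'driver' => 'database',
        'table'  => 'jobs',
        'queue'  => 'default',

        // Older releases use 'expire' => 60 here instead.
        // If a worker has not finished (deleted) the job within this many
        // seconds, the job is marked as reserved-expired and retried.
        'retry_after' => 3600, // allow up to an hour per job
    ],

],
```

Raising this value above the longest expected job runtime would be one way to test whether this setting is the source of the premature failures.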