JOB_TOO_BIG Pheanstalk - what can be done?

On Laravel 4.2 & Laravel Forge

I made a mistake and accidentally pushed some code onto the production server. It had a bug: it pushed a job to the queue without deleting it once done. Now I can't push anything to the queue anymore; I get:

Pheanstalk_Exception JOB_TOO_BIG: job data exceeds server-enforced limit

What can I do?

Live answered 22/3, 2015 at 20:12 Comment(0)

This is because you're trying to store too much data in the queue itself. Try to cut down the data you're pushing to the queue.

For example, if your queue job involves models, pass only the model ID into the queue and fetch the model from the database as part of the job, rather than pushing the entire model instance onto the queue.

If you're using Eloquent models, they're automatically handled in this way.
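
A minimal sketch of that pattern with Laravel 4.x's plain queue API (the OrderProcessor job and Order model here are hypothetical names, not from the question):

    class OrderProcessor {

        // Laravel 4.x queue workers call fire($job, $data) with the payload that was pushed.
        public function fire($job, $data)
        {
            // Re-fetch the model inside the job instead of serialising it into the payload.
            $order = Order::find($data['order_id']);

            if ($order) {
                // ... do the actual work with $order here ...
            }

            // Delete the job once it's done, otherwise it gets released and retried
            // (the bug described in the question).
            $job->delete();
        }
    }

    // When dispatching, push only the ID, not the whole model:
    Queue::push('OrderProcessor', array('order_id' => $order->id));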

Seleucid answered 23/3, 2015 at 8:21 Comment(3)
Thanks, is there any reason it used to work fine but now it doesn't? I never had this issue until I forgot to delete a job; since then it crashes every time a queued job carries a bit of data. I'll also try passing just the ID, but this is weird...Live
@Live sorry, really late reply, just got a notification on this post. The amount of data you were storing previously didn't hit the per-job data limit, whereas what you're trying to push to the queue now does. This may be due to more columns in your database, etc.Seleucid
I was reading some code and wondered why we were passing the IDs instead of the model object. Well, this clears it up pretty well!Ytterbium

You can increase the max job size with the -z option for Beanstalkd: http://linux.die.net/man/1/beanstalkd

To do this on Forge, you need to SSH into the server and edit the /etc/default/beanstalkd file.

Add the following line (or uncomment the existing BEANSTALKD_EXTRA line and edit it): BEANSTALKD_EXTRA="-z 524280"

Restart beanstalkd after making the change: sudo service beanstalkd restart

The size is specified in bytes, so 524280 is roughly 512 KB.

I am not sure if this could have serious performance effects - so far, so good for me. I would appreciate any comments on performance.
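
Putting those steps together (this assumes the stock Forge/Ubuntu setup where beanstalkd is managed as a system service, as described above):

    # edit the beanstalkd defaults file over SSH
    sudo nano /etc/default/beanstalkd

    # add or uncomment this line (value in bytes, roughly 512 KB here):
    # BEANSTALKD_EXTRA="-z 524280"

    # restart beanstalkd so the new limit takes effect
    sudo service beanstalkd restart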

Protist answered 8/9, 2015 at 5:34 Comment(1)
This answer helped me twice… two years apart.Spat