Maximum clients reached on Heroku and Redistogo Nano
I am using celerybeat on Heroku with the RedisToGo Nano add-on.

There is one web dyno and one worker dyno.

The celerybeat worker is set to perform a task every minute.

The problem is that whenever I deploy a new commit, the dynos restart and I get this error:

2014-02-27T13:18:23.864287+00:00 app[worker.1]:     self.on_connect()
2014-02-27T13:18:23.864287+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/redis/connection.py", line 263, in on_connect
2014-02-27T13:18:23.864287+00:00 app[worker.1]:     if nativestr(self.read_response()) != 'OK':
2014-02-27T13:18:23.864287+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/redis/connection.py", line 314, in read_response
2014-02-27T13:18:23.864287+00:00 app[worker.1]:     raise response
2014-02-27T13:18:23.864287+00:00 app[worker.1]: ResponseError: max number of clients reached
2014-02-27T13:18:23.870811+00:00 app[worker.1]: [2014-02-27 13:18:23,870: ERROR/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
2014-02-27T13:19:31.552352+00:00 app[worker.1]: Traceback (most recent call last):
2014-02-27T13:19:31.552352+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/celery/worker/consumer.py", line 389, in start
2014-02-27T13:19:31.552352+00:00 app[worker.1]:     self.reset_connection()
2014-02-27T13:19:31.552352+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/celery/worker/consumer.py", line 727, in reset_connection
2014-02-27T13:19:31.552352+00:00 app[worker.1]:     self.connection = self._open_connection()
2014-02-27T13:19:31.552352+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/celery/worker/consumer.py", line 792, in _open_connection
2014-02-27T13:19:31.552352+00:00 app[worker.1]:     callback=self.maybe_shutdown)
2014-02-27T13:19:31.552352+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 272, in ensure_connection
2014-02-27T13:19:31.552352+00:00 app[worker.1]:     interval_start, interval_step, interval_max, callback)
2014-02-27T13:19:31.552591+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/kombu/utils/__init__.py", line 218, in retry_over_time
2014-02-27T13:19:31.552591+00:00 app[worker.1]:     return fun(*args, **kwargs)
2014-02-27T13:19:31.552591+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 162, in connect
2014-02-27T13:19:31.552591+00:00 app[worker.1]:     return self.connection
2014-02-27T13:19:31.552591+00:00 app[worker.1]:   File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 617, in connection

These logs repeat endlessly until I stop both dynos and restart them.

It has become a problem because it happens almost every time I deploy a new commit.

Any ideas why this is happening and how to solve this?

Enemy answered 27/2, 2014 at 13:32 Comment(0)

The RedisToGo Nano plan caps concurrent Redis connections at 10.

The number of Redis connections used varies with your front-end and Celery worker settings. It sounds like you are using >= 5 Redis connections for your production stack.

When you deploy new code, Heroku spins up an entirely new stack alongside the old one. This means you are using >= 10 Redis connections at the moment of deploy.
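To make the arithmetic concrete, here is a back-of-envelope sketch. Every per-dyno count below is an assumption for illustration, not a measured value:

```python
# Back-of-envelope sketch; the counts are illustrative assumptions.
web_conns = 2      # e.g. a small cache/session pool on the web dyno
worker_conns = 3   # e.g. celery prefork pool + result backend + beat
per_stack = web_conns + worker_conns

# During a deploy the old and new stacks briefly overlap,
# so the connection count roughly doubles:
during_deploy = 2 * per_stack
print(during_deploy)        # 10
print(during_deploy >= 10)  # True: hits the Nano cap of 10
```

Even modest per-dyno usage can trip the cap once the old and new stacks overlap.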

There are two ways to fix this:

  • Increase the maximum number of RedisToGo connections allowed by upgrading to a larger plan ($$$)
  • Decrease the number of connections your stack uses (lower Celery concurrency, or reduce the Redis connections used by your web worker)
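For the second option, a minimal sketch of Celery settings that shrink the worker's Redis footprint. The setting names are the Celery 3.x uppercase style in use at the time; treat the exact values as assumptions to tune for your workload:

```python
# celeryconfig.py -- a sketch, not a drop-in fix; tune values for your app.
BROKER_URL = "redis://..."        # your RedisToGo URL from the add-on config
BROKER_POOL_LIMIT = 1             # cap kombu's broker connection pool
CELERYD_CONCURRENCY = 1           # one worker process instead of one per CPU
CELERY_REDIS_MAX_CONNECTIONS = 2  # cap result-backend connections (if using Redis results)
```

Lowering concurrency trades throughput for fewer open connections, which is usually acceptable for a once-a-minute beat task.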

This is a simple matter of resource exhaustion. I would just pay for a larger RedisToGo plan.

Alchemist answered 27/2, 2014 at 16:43 Comment(3)
+1 @Alchemist is probably correct. You can issue the INFO command to get the current connected_clients count and check it against your instance's limit; note that the connection you use to issue INFO counts as well. I work at RedisToGo, by the way, so just email [email protected] if you need any help troubleshooting your instance.Photoflood
Is there a way to have redis/heroku close any connections on the old stack at deploy?Lichenin
The problem is timing: your new dynos are coming up while the old ones are going down, especially if you use Preboot for zero-downtime deploys. The only way to guarantee all the old connections and resources are freed before the new ones are used would be to take your whole site down during deploys, and I doubt you want that.Alchemist
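The INFO check suggested in the comments can be sketched with redis-py. The REDISTOGO_URL environment-variable name and the limit of 10 are assumptions based on the Nano plan discussed above:

```python
import os

def clients_remaining(info, limit=10):
    """Free client slots under `limit`, given the dict returned by
    r.info('clients'). The connection issuing INFO counts too."""
    return limit - info["connected_clients"]

if __name__ == "__main__":
    import redis  # pip install redis
    r = redis.from_url(os.environ["REDISTOGO_URL"])
    print(clients_remaining(r.info("clients")))
```

Running this right after a deploy (while old and new dynos overlap) shows how close you are to the cap.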
