We are currently using Redis To Go with our Heroku-hosted Python application.
We use Redis with python-rq purely as a task queue for delayed execution of some time-intensive tasks. A task retrieves data from a PostgreSQL database and writes the results back to it, so no valuable data is stored in the Redis instance at all. We notice that, depending on the number of jobs executed, Redis consumes more and more memory (growing at roughly 10 MB/hour). A FLUSHDB on the CLI fixes this (bringing usage down to ~700 kB) until the RAM fills up again.
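In case it helps, this is roughly what our setup looks like (the `tasks` module and `process_report` function are simplified placeholders for our real code):

```python
import os

import redis
from rq import Queue

from tasks import process_report  # placeholder for our actual task function

# Redis To Go exposes its connection URL via this Heroku config var
redis_conn = redis.from_url(os.environ["REDISTOGO_URL"])
q = Queue(connection=redis_conn)

# Enqueued from the web dyno; a separate worker dyno runs the jobs.
# process_report reads from PostgreSQL and writes its results back there.
job = q.enqueue(process_report, 42)
```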
With our (unchanged default) settings, a job result is kept for 500 seconds. Over time, some jobs of course fail, and they are moved to the failed queue.
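From the RQ docs I understand the 500 seconds is the default `result_ttl`, which can be overridden per enqueue call. We have not changed it, but I assume that would look like this:

```python
# We have NOT done this - just noting that result_ttl is configurable
# per call. result_ttl=0 tells RQ not to store the return value at all.
job = q.enqueue(process_report, 42, result_ttl=0)
```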
- What do we have to do differently to get our tasks done with a stable amount of RAM?
- Where does the RAM consumption come from?
- Can I turn off persistence entirely?
- From the docs I know that the 500-second TTL means a key is then "expired", but not actually deleted. Does the key still consume memory at that point? Can I somehow change this behavior?
- Does it have something to do with the failed queue, which apparently has no TTL attached to its jobs, so (I think) they are kept forever?
- Just curious: when using RQ purely as a queue, what is actually saved in the Redis DB? Is it the executable code itself, or just a reference to where the function to be executed can be found?
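In case it's relevant, a quick way to see what RQ keeps in the DB (its keys appear to be prefixed with `rq:`) would be something like:

```python
import os

import redis

r = redis.from_url(os.environ["REDISTOGO_URL"])

# Print every RQ key with its type and remaining TTL.
# A TTL of -1 means no expiry is set, i.e. the key lives forever.
for key in r.scan_iter("rq:*"):
    print(key, r.type(key), r.ttl(key))
```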
Sorry for the pretty noobish questions, but I'm new to the topic of queuing, and after researching for 2+ days I've reached a point where I don't know what to do next. Thanks, KH