I've got a Flask app that's deployed (using a Dockerfile) to Google Cloud Run. The app's structure closely resembles the Flask Mega Tutorial. It uses a Postgres database that runs on Cloud SQL.
The app needs to process background tasks. It seems like Celery or Redis Queue are the most common ways to go. I don't want to use Cloud Tasks because it breaks the dev/prod parity rule in the 12-factor app paradigm.
Redis Queue was simple to get up and running on my local machine, but I can't find a best-practices guide anywhere on how to use Redis Queue with a Flask app running on Cloud Run.
I decided to use Google's Memorystore for my Redis instance, but now I'm not sure of the best way to run my RQ workers. I'd like these workers to scale up as my Flask server adds more tasks to the queue (the way Cloud Run scales up instances as more HTTP requests come in). Right now I'm considering deploying a worker (a copy of my Flask app that includes the task functions) to App Engine, but that doesn't seem like quite the right solution.
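Concretely, what I'm picturing is reusing the web app's container image but starting it with a worker command instead of the web server, along these lines (assuming a `REDIS_URL` env var pointing at Memorystore; the queue name `default` is just RQ's default):

```shell
# Hypothetical worker entrypoint: same image as the Flask app, different command.
# REDIS_URL would be the Memorystore instance's address, e.g. redis://10.0.0.3:6379
rq worker --url "$REDIS_URL" default
```

What I don't know is which platform should run that command so the worker count tracks the queue depth.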
What do people recommend for deploying RQ / Celery workers? I'm happy to alter my deployment strategy (and platform) entirely to achieve a simple, scalable architecture that can be easily reproduced in a local dev setup.