Python falcon and async operations

I am writing an API using a Python 3 + Falcon combination.

There are a lot of places in my methods where I could already send a reply to the client, but because of some heavy code that does DB and I/O operations, etc., the response has to wait until the heavy part ends.

For example:

class APIHandler:
  def on_get(self, req, resp):
    response = "Hello"
    # Some heavy code (DB queries, I/O, etc.)
    resp.body = response

I could send "Hello" from the first line of code. What I want is to run the heavy code in the background and send the response regardless of when the heavy part finishes.

Falcon does not have any built-in async capabilities, but the docs mention it can be used with something like gevent. I haven't found any documentation on how to combine the two.

Tiossem answered 4/12, 2014 at 14:55 Comment(0)

I use Celery for async-related work. I don't know about gevent. Take a look at this: http://celery.readthedocs.org/en/latest/getting-started/introduction.html

Valdivia answered 10/2, 2015 at 9:3 Comment(2)
I ended up using Celery as well. – Tiossem
I ended up using python-rq.org because Celery is just way too heavy for this kind of Falcon application. – Lodged

Client libraries have varying support for async operations, so the decision often comes down to which async approach is best supported by your particular backend client(s), combined with which WSGI server you would like to use. See also below for some of the more common options...

For libraries that do not support an async interaction model, either natively or via some kind of subclassing mechanism, tasks can be delegated to a thread pool. And for especially long-running tasks (i.e., on the order of several seconds or minutes), Celery's not a bad choice.
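
As a rough sketch of the thread-pool approach, using concurrent.futures from the standard library (the handler and the slow_query function below are made-up placeholders, not part of any particular project):

from concurrent.futures import ThreadPoolExecutor

# One shared pool for the whole process
executor = ThreadPoolExecutor(max_workers=4)

def slow_query():
    # Placeholder for a blocking DB or I/O call
    ...

class APIHandler:
    def on_get(self, req, resp):
        # Hand the blocking work off to the pool and respond immediately
        executor.submit(slow_query)
        resp.body = "Hello"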

A brief survey of some of the more common async options for WSGI (and Falcon) apps:

  • Twisted. Favors an explicit asynchronous style, and is probably the most mature option. For integrating with a WSGI framework like Falcon, there's twisted.web.wsgi and crochet.
  • asyncio. Borrows many ideas from Twisted, but takes advantage of Python 3 language features to provide a cleaner interface. Long-term, this is probably the cleanest option, but necessitates an evolution of the WSGI interface (see also pulsar's extension to PEP-3333 as one possible approach). The asyncio ecosystem is relatively young at the time of this writing; the community is still experimenting with a wide variety of approaches around interfaces, patterns and tooling.
  • eventlet. Favors an implicit style that seeks to make async code look synchronous. One way eventlet does this is by monkey-patching I/O modules in the standard library. Some people don't like this approach because it masks the asynchronous mechanism, making edge cases harder to debug.
  • gevent. Similar to eventlet, albeit a bit more modern. Both uWSGI and Gunicorn support gevent worker types that monkey-patch the standard library (see the sketch just after this list).
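
To illustrate what that monkey-patching amounts to, here is a minimal sketch of doing it explicitly; when you run under a gevent worker, the worker applies the patch for you, so you would not normally write this yourself:

from gevent import monkey
monkey.patch_all()  # swap blocking stdlib I/O (socket, time.sleep, ...) for cooperative versions

import urllib.request

def fetch(url):
    # While this waits on the network, other greenlets keep running
    with urllib.request.urlopen(url) as r:
        return r.status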

Finally, it may be possible to extend Falcon to natively support twisted.web or asyncio (ala aiohttp), but I don't think anyone's tried it yet.

Gallardo answered 11/6, 2015 at 20:37 Comment(0)

I think there are two different approaches here:

  1. A task manager (like Celery)
  2. An async implementation (like gevent)

What you achieve with each of them is different. With Celery, what you can do is to run all the code you need to compute the response synchronously, and then run in the background any other operation (like saving to logs). This way, the response should be faster.
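
A minimal sketch of that pattern might look like the following; the broker URL and the save_to_log task are made up for illustration:

from celery import Celery

# Hypothetical broker URL; point this at your own Redis/RabbitMQ instance
celery_app = Celery('tasks', broker='redis://localhost:6379/0')

@celery_app.task
def save_to_log(message):
    # Placeholder for slow work the response does not depend on
    print(message)

class APIHandler:
    def on_get(self, req, resp):
        resp.body = "Hello"                 # computed synchronously
        save_to_log.delay("handled GET")    # queued; runs later in a Celery worker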

With gevent, what you achieve is running different instances of your handler concurrently. So if you have a single request, you won't see any difference in response time, but if you have thousands of concurrent requests, the performance will be much better. The reason is that without gevent, when your code executes an I/O operation it blocks that process, while with gevent the CPU can go on serving other requests while the I/O operation waits.

Setting up gevent is much easier than setting up Celery. If you're using gunicorn, you simply install gevent and change the worker type to gevent. Another advantage is that you can parallelize any operation that is required in the response (like extracting the response from a database). In Celery, you can't use the output of the Celery task in your response.
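
For example, assuming a hypothetical app.py like the sketch below, the only change on the command line is the worker class:

# app.py -- a minimal Falcon WSGI app (module and route names are made up)
import falcon

class APIHandler:
    def on_get(self, req, resp):
        resp.body = "Hello"

api = falcon.API()
api.add_route('/', APIHandler())

# Run with the default sync workers:
#   gunicorn app:api
# Run with gevent workers (after `pip install gevent`):
#   gunicorn --worker-class gevent app:api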

What I would recommend is to start by using gevent, and consider adding Celery later (and having both of them) if:

  • The output of the task you will process with Celery is not required in the response
  • You have a separate machine for your Celery tasks, or the usage of your server has some peaks and some idle time (if your server is at 100% the whole time, you won't get anything good from using Celery)
  • The amount of work that your Celery tasks will do is worth the overhead of using Celery
Neolith answered 21/12, 2015 at 16:23 Comment(0)

You can use multiprocessing.Process with daemon=True to run a daemonic process and return a response to the caller immediately:

import time
from multiprocessing import Process

class APIHandler:

  def on_get(self, req, resp):
    heavy_process = Process(  # Create a daemonic process
        target=my_func,
        daemon=True
    )
    heavy_process.start()
    resp.body = "Quick response"


# Define some heavy function
def my_func():
    time.sleep(10)
    print("Process finished")

You can test it by sending a GET request. You will get a response immediately, and after 10 seconds you will see the printed message in the console.

Hamburg answered 27/3, 2020 at 15:41 Comment(0)
