This is how I think it's most easily done, using the built-in lru_cache and futures:
import asyncio
import functools

# parameterless decorator
def async_lru_cache_decorator(async_function):
    @functools.lru_cache
    def cached_async_function(*args, **kwargs):
        coroutine = async_function(*args, **kwargs)
        return asyncio.ensure_future(coroutine)
    return cached_async_function

# decorator with options
def async_lru_cache(*lru_cache_args, **lru_cache_kwargs):
    def async_lru_cache_decorator(async_function):
        @functools.lru_cache(*lru_cache_args, **lru_cache_kwargs)
        def cached_async_function(*args, **kwargs):
            coroutine = async_function(*args, **kwargs)
            return asyncio.ensure_future(coroutine)
        return cached_async_function
    return async_lru_cache_decorator

@async_lru_cache(maxsize=128)
async def your_async_function(...): ...
This basically takes your original function and wraps it so that the Coroutine it returns is stored and converted into a Future. This way it can be treated as a regular function and you can apply lru_cache to it as you usually would.
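For illustration, here is a minimal self-contained run of the parameterized decorator; the double function and its call counter are made up for the demo, just to show that the second call hits the cache:

```python
import asyncio
import functools

def async_lru_cache(*lru_cache_args, **lru_cache_kwargs):
    def async_lru_cache_decorator(async_function):
        @functools.lru_cache(*lru_cache_args, **lru_cache_kwargs)
        def cached_async_function(*args, **kwargs):
            return asyncio.ensure_future(async_function(*args, **kwargs))
        return cached_async_function
    return async_lru_cache_decorator

call_count = 0

@async_lru_cache(maxsize=128)
async def double(x):
    global call_count
    call_count += 1          # counts real executions of the coroutine body
    await asyncio.sleep(0)   # stand-in for real async work
    return x * 2

async def main():
    a = await double(21)
    b = await double(21)     # cache hit: same Future, already resolved
    return a, b, call_count

result = asyncio.run(main())
print(result)  # (42, 42, 1) -- the coroutine body ran only once
```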
Why is wrapping it in a Future necessary? Python coroutines are low-level constructs and you can't await one more than once (you would get RuntimeError: cannot reuse already awaited coroutine). Futures, on the other hand, are handy, can be awaited consecutively, and will return the same result.
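A quick sketch of that difference (the compute coroutine is just a placeholder):

```python
import asyncio

async def compute():
    return 42

async def main():
    coro = compute()
    await coro                       # first await works
    try:
        await coro                   # second await on the same coroutine fails
    except RuntimeError as exc:
        message = str(exc)
    # A Task (a Future subclass) has no such restriction:
    task = asyncio.ensure_future(compute())
    first = await task
    second = await task              # same result, no error
    return message, first, second

message, first, second = asyncio.run(main())
print(message)  # cannot reuse already awaited coroutine
```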
One caveat is that caching a Future will also cache the case where the original function raised an Error. The original lru_cache does not cache interrupted executions, so watch out for this edge case when using the solution above.
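To see the caveat concretely, here is a sketch; flaky and its attempt counter are hypothetical, chosen so the second call would retry if the failure were not cached:

```python
import asyncio
import functools

def async_lru_cache_decorator(async_function):
    @functools.lru_cache
    def cached_async_function(*args, **kwargs):
        return asyncio.ensure_future(async_function(*args, **kwargs))
    return cached_async_function

attempts = 0

@async_lru_cache_decorator
async def flaky(x):
    global attempts
    attempts += 1
    raise ValueError("boom")   # always fails

async def main():
    for _ in range(2):
        try:
            await flaky(1)
        except ValueError:
            pass               # awaiting the cached Future re-raises the error
    return attempts

attempts_made = asyncio.run(main())
print(attempts_made)  # 1 -- the failed Future was cached, so no retry happened
```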
Further tweaking can be done to merge both the parameterless and the parameterized decorators, like the original lru_cache, which supports both usages.
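One possible way to merge the two forms is the callable check lru_cache itself uses (since Python 3.8) to detect bare usage; this is a sketch, not the only option:

```python
import asyncio
import functools

def async_lru_cache(maxsize=128, typed=False):
    def decorator(async_function):
        @functools.lru_cache(maxsize=maxsize, typed=typed)
        def cached_async_function(*args, **kwargs):
            return asyncio.ensure_future(async_function(*args, **kwargs))
        return cached_async_function
    if callable(maxsize):
        # bare usage: @async_lru_cache with no parentheses,
        # so "maxsize" is actually the decorated coroutine function
        async_function, maxsize = maxsize, 128
        return decorator(async_function)
    return decorator

@async_lru_cache                  # parameterless form
async def a(x):
    return x + 1

@async_lru_cache(maxsize=32)      # parameterized form
async def b(x):
    return x * 2

async def main():
    return await a(1), await b(2)

merged_result = asyncio.run(main())
print(merged_result)  # (2, 4)
```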
aiohttp.get()) you have to drive it with something. So cached_request has to be enclosed with @asyncio.coroutine; it has to be called using yield from; and the return statement should be framed along the lines of return (yield from aiohttp.get(url)). – Martymartyn