How does lru_cache (from functools) work?

Especially with recursive code, there are massive improvements when using lru_cache. I do understand that a cache is a space that stores data that has to be served fast and saves the computer from recomputing.

How does the Python lru_cache from functools work internally?

I'm looking for a specific answer: does it use dictionaries like the rest of Python? Does it only store the return value?

I know that Python is heavily built on top of dictionaries, however, I couldn't find a specific answer to this question.

Experience answered 17/4, 2018 at 16:29 Comment(0)

The functools source code is available here: https://github.com/python/cpython/blob/master/Lib/functools.py

lru_cache uses the _lru_cache_wrapper decorator (the Python "decorator with arguments" pattern), which keeps a cache dictionary in its closure where it saves the return values of the decorated function (every decorated function gets its own cache dict). The dictionary key is generated from the arguments with the _make_key function. I added some comments in caps below:

# DEPENDING ON THE PASSED maxsize ARGUMENT, _lru_cache_wrapper
# DEFINES AND RETURNS ONE OF SEVERAL wrapper FUNCTIONS

def _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo):
    # Constants shared by all lru cache instances:
    sentinel = object()      # unique object used to signal cache misses
    make_key = _make_key     # build a key from the function arguments

    cache = {}                                # RESULTS ARE SAVED HERE
    cache_get = cache.get    # bound method to look up a key or return None
    hits = misses = 0        # HIT/MISS COUNTERS EXPOSED VIA cache_info()

    # ... this is the wrapper chosen when maxsize is None:

    def wrapper(*args, **kwds):
        # Simple caching without ordering or size limit
        nonlocal hits, misses
        key = make_key(args, kwds, typed)     # BUILD A KEY FROM ARGUMENTS
        result = cache_get(key, sentinel)     # TRYING TO GET PREVIOUS CALLS RESULT
        if result is not sentinel:            # ALREADY CALLED WITH PASSED ARGS
            hits += 1
            return result                     # RETURN SAVED RESULT
                                              # WITHOUT ACTUALLY CALLING FUNCTION
        misses += 1
        result = user_function(*args, **kwds) # ACTUAL FUNCTION CALL - ONLY WHEN key IS NOT IN cache
        cache[key] = result                   # SAVE RESULT

        return result
    # ...

    return wrapper
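
A quick way to see this behavior from the outside: calling a decorated function twice with the same arguments runs the body only once, and cache_info() reports the hit/miss counters discussed in the comments below. A small demonstration (the name add is just for illustration, not part of the source):

from functools import lru_cache

@lru_cache(maxsize=None)
def add(a, b):
    print(f'computing add({a}, {b})')   # runs only on a cache miss
    return a + b

add(1, 2)                 # prints "computing add(1, 2)"
add(1, 2)                 # served from the cache dict, nothing is printed
print(add.cache_info())   # CacheInfo(hits=1, misses=1, maxsize=None, currsize=1)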
Municipalize answered 17/4, 2018 at 16:46 Comment(6)
fib.cache_info() prints something like this: CacheInfo(hits=28, misses=16, maxsize=None, currsize=16) What is the meaning of a "miss" here?Gimble
@Gimble At the start of the decorator these counters are created in the closure and both equal zero: hits = misses = 0. Each decorated function call increments one of them - hits += 1 if the cache already has the data and misses += 1 if the data was not in the cache. Check the source. You can use these counters to analyze the usefulness of caching.Municipalize
So if a number is seen for the first time, misses go up by 1, and for subsequent views of the same number, hits get incremented?Gimble
From my test code, it seems @lru_cache is faster compared to using my own dict for a cache, using cache.get(args, my_function(args)) and storing cache[args] = my_res at the end. Why is that?Siler
@Siler lru_cache() is implemented in terms of _lru_cache_wrapper(), which has a C implementation, which is likely to be faster than any Python implementation.Throckmorton
Hey @npdu, I know it's been a long time since I asked this question. I have been getting a lot of upvotes, and it would be great if you could expand the answer with a use case example. I have seen that even on Google, if you type lru_cache this comes up as the 2nd or 3rd result, so many people might come straight to this post and could benefit from your answer.Experience

Python 3.9 Source Code for LRU Cache: https://github.com/python/cpython/blob/3.9/Lib/functools.py#L429

Example Fib Code

from functools import lru_cache

@lru_cache(maxsize=2)
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

The lru_cache decorator checks for a few base cases and then wraps the user function with _lru_cache_wrapper. Inside that wrapper lives the caching logic: adding an item to the cache, and the LRU bookkeeping, i.e. adding a new item to the circular queue and removing an item from the circular queue.

def lru_cache(maxsize=128, typed=False):
    ...
    if isinstance(maxsize, int):
        # Negative maxsize is treated as 0
        if maxsize < 0:
            maxsize = 0
    elif callable(maxsize) and isinstance(typed, bool):
        # The user_function was passed in directly via the maxsize argument
        user_function, maxsize = maxsize, 128
        wrapper = _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo)
        wrapper.cache_parameters = lambda : {'maxsize': maxsize, 'typed': typed}
        return update_wrapper(wrapper, user_function)
    elif maxsize is not None:
        raise TypeError(
         'Expected first argument to be an integer, a callable, or None')

    def decorating_function(user_function):
        wrapper = _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo)
        wrapper.cache_parameters = lambda : {'maxsize': maxsize, 'typed': typed}
        return update_wrapper(wrapper, user_function)

    return decorating_function

lru_cache normalizes maxsize (when negative), adds the CacheInfo details, and finally adds the wrapper and updates the decorator's docs and other details with update_wrapper.
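
Because of the callable check above, the decorator works both with and without parentheses (the bare form exists since Python 3.8, cache_parameters since 3.9). A small usage sketch:

    from functools import lru_cache

    @lru_cache                # bare form: the function arrives via the maxsize parameter
    def square(n):
        return n * n

    @lru_cache(maxsize=32)    # parenthesized form: decorating_function wraps the function
    def cube(n):
        return n * n * n

    print(square(4), cube(3))            # 16 27
    print(square.cache_parameters())     # {'maxsize': 128, 'typed': False}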

_lru_cache_wrapper

  • The LRU cache wrapper has a few bookkeeping variables.

     sentinel = object()          # unique object used to signal cache misses
     make_key = _make_key         # build a key from the function arguments
     PREV, NEXT, KEY, RESULT = 0, 1, 2, 3   # names for the link fields
    
     cache = {}
     hits = misses = 0
     full = False
     cache_get = cache.get    # bound method to lookup a key or return None
     cache_len = cache.__len__  # get cache size without calling len()
     lock = RLock()           # because linkedlist updates aren't threadsafe
     root = []                # root of the circular doubly linked list
     root[:] = [root, root, None, None]     # initialize by pointing to self
    
  • The wrapper acquires the lock before performing any operation.

  • A few important variables: the root list anchors a circular doubly linked list that holds at most maxsize items. The key concept to remember is that root references itself (root[:] = [root, root, None, None]) in both the previous (index 0) and next (index 1) positions; see the sketch below.
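
    The following standalone sketch (my own illustration, not the functools source) shows how such four-slot links chain into a circular doubly linked list around a self-referencing root:

     PREV, NEXT, KEY, RESULT = 0, 1, 2, 3   # names for the link fields

     root = []
     root[:] = [root, root, None, None]     # root points to itself in both directions

     def insert_at_front(key, result):
         # Insert a new link as the most recently used item, just before root.
         last = root[PREV]
         link = [last, root, key, result]
         last[NEXT] = root[PREV] = link
         return link

     insert_at_front('a', 1)
     insert_at_front('b', 2)

     # Walk from the oldest to the newest entry: prints 'a 1' then 'b 2'.
     node = root[NEXT]
     while node is not root:
         print(node[KEY], node[RESULT])
         node = node[NEXT]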

Three high-level checks

  • The first case: when maxsize is 0, there is no cache functionality at all; the wrapper just calls the user function, increments the cache miss count, and returns the result (a quick demonstration follows the snippet).

     def wrapper(*args, **kwds):
         # No caching -- just a statistics update
         nonlocal misses
         misses += 1
         result = user_function(*args, **kwds)
         return result
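
    To see this case from the outside, a function decorated with maxsize=0 keeps statistics but never stores anything (a small demonstration, not part of the source):

     from functools import lru_cache

     @lru_cache(maxsize=0)
     def double(x):
         return x * 2

     double(1); double(1); double(1)
     print(double.cache_info())   # CacheInfo(hits=0, misses=3, maxsize=0, currsize=0)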
    
  • The second case: when maxsize is None, there is no limit on the number of elements stored in the cache. The wrapper checks for the key in the cache (a dictionary). When the key is present, the wrapper returns the value and updates the cache hit info. When the key is missing, the wrapper calls the user function with the passed arguments, updates the cache, updates the cache miss info, and returns the result.

     def wrapper(*args, **kwds):
         # Simple caching without ordering or size limit
         nonlocal hits, misses
         key = make_key(args, kwds, typed)
         result = cache_get(key, sentinel)
         if result is not sentinel:
             hits += 1
             return result
         misses += 1
         result = user_function(*args, **kwds)
         cache[key] = result
         return result
    
  • The third case: when maxsize is the default value (128) or a user-passed integer. This is the actual LRU cache implementation. The code in this wrapper runs in a thread-safe way: before performing any operation (read/write/delete) on the cache, the wrapper acquires the RLock.

LRU Cache

  • Each value in the cache is stored as a list of four items (like root). The first item is a reference to the previous link, the second is a reference to the next link, the third is the key for the particular function call, and the fourth is the result. Here is an actual value for the Fibonacci function with argument 1: [[[...], [...], 1, 1], [[...], [...], 1, 1], None, None]. [...] means a reference to the list itself.

  • The first check is for a cache hit. On a hit, the value in the cache is such a list of four values, and the link is moved to the front of the circular queue.

     nonlocal root, hits, misses, full
     key = make_key(args, kwds, typed)
     with lock:
         link = cache_get(key)
         if link is not None:
             # Move the link to the front of the circular queue
             print(f'Cache hit for {key}, {root}')   # debug print added by me, not in the source
             link_prev, link_next, _key, result = link
             link_prev[NEXT] = link_next
             link_next[PREV] = link_prev
             last = root[PREV]
             last[NEXT] = root[PREV] = link
             link[PREV] = last
             link[NEXT] = root
             hits += 1
             return result
    

    When the item is already in the cache, there is no need to check whether the circular queue is full or to pop an item from the cache; only the positions of the items in the circular queue change. Since the most recently used item is always at the front, the code moves the hit value to the front of the queue and the previous front item becomes the next item of the current one: last[NEXT] = root[PREV] = link, link[PREV] = last, and link[NEXT] = root. NEXT and PREV are initialized at the top and point to the appropriate positions in the link (PREV, NEXT, KEY, RESULT = 0, 1, 2, 3 # names for the link fields). Finally, the code increments the cache hit info and returns the result.

  • On a cache miss, the wrapper updates the miss info and the code checks three cases. All three operations happen after obtaining the RLock. The three cases appear in the source code in the following order: the key is found in the cache after acquiring the lock, the cache is full, and the cache can take new items. For demonstration, let's follow a different order: when the cache is not full, when the cache is full, and when the key is found in the cache after acquiring the lock.

When the cache is not full

    ...
    else:
        # Put result in a new link at the front of the queue.
        last = root[PREV]
        link = [last, root, key, result]
        last[NEXT] = root[PREV] = cache[key] = link
        # Use the cache_len bound method instead of the len() function
        # which could potentially be wrapped in an lru_cache itself.
        full = (cache_len() >= maxsize)

  • When the cache is not full, prepare the recent result (link = [last, root, key, result]) so that it holds the root's previous reference, root, the key, and the computed result.

  • Then point the recent result (link) to the front of the circular queue (root[PREV] = link), point the previous front item's next at the recent result (last[NEXT] = link), and add the recent result to the cache (cache[key] = link).

  • Finally, check whether the cache is full (cache_len() >= maxsize, where cache_len = cache.__len__ is declared at the top) and set the full flag accordingly.

  • For the fib example, when the function receives the first value 1, root is empty and the root value is [[...], [...], None, None]; after adding the result to the circular queue, the root value is [[[...], [...], 1, 1], [[...], [...], 1, 1], None, None]. Both the previous and the next point to key 1's result. And for the next value 0, after insertion the root value is

    [[[[...], [...], 1, 1], [...], 0, 0], [[...], [[...], [...], 0, 0], 1, 1], None, None]. Previous is [[[[...], [...], None, None], [...], 1, 1], [[...], [[...], [...], 1, 1], None, None], 0, 0] and the next is [[[[...], [...], 0, 0], [...], None, None], [[...], [[...], [...], None, None], 0, 0], 1, 1]

When the cache is full

    ...
    elif full:
        # Use the old root to store the new key and result.
        oldroot = root
        oldroot[KEY] = key
        oldroot[RESULT] = result
        # Empty the oldest link and make it the new root.
        # Keep a reference to the old key and old result to
        # prevent their ref counts from going to zero during the
        # update. That will prevent potentially arbitrary object
        # clean-up code (i.e. __del__) from running while we're
        # still adjusting the links.
        root = oldroot[NEXT]
        oldkey = root[KEY]
        oldresult = root[RESULT]
        root[KEY] = root[RESULT] = None
        # Now update the cache dictionary.
        del cache[oldkey]
        # Save the potentially reentrant cache[key] assignment
        # for last, after the root and links have been put in
        # a consistent state.
        cache[key] = oldroot

  • When the cache is full, use the current root as oldroot (oldroot = root) and store the new key and result in it.
  • Then make oldroot's next item the new root (root = oldroot[NEXT]) and save that item's key and result (oldkey = root[KEY] and oldresult = root[RESULT]).
  • Set the new root's key and result to None (root[KEY] = root[RESULT] = None).
  • Delete the old key's item from the cache (del cache[oldkey]) and add the newly computed result to the cache (cache[key] = oldroot).
  • For the Fibonacci example, when the cache is full and the key is 2, the root value is [[[[...], [...], 1, 1], [...], 0, 0], [[...], [[...], [...], 0, 0], 1, 1], None, None] and the new root at the end of the block is [[[[...], [...], 0, 0], [...], 2, 1], [[...], [[...], [...], 2, 1], 0, 0], None, None]. As you can see, key 1 is removed and replaced by key 2.
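
    This eviction is observable from the outside with the fib example above (condensed here): with maxsize=2, currsize never grows past 2, and recomputation of evicted values shows up as extra misses:

    from functools import lru_cache

    @lru_cache(maxsize=2)
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(5)
    print(fib.cache_info())
    # Only the two most recently used results are kept,
    # e.g. CacheInfo(hits=2, misses=11, maxsize=2, currsize=2)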

When the key appears in the cache after acquiring the lock

    if key in cache:
        # Getting here means that this same key was added to the
        # cache while the lock was released.  Since the link
        # update is already done, we need only return the
        # computed result and update the count of misses.
        pass

When the key appears in the cache after acquiring the lock, another thread may have enqueued the value while this one was computing it. There is nothing much to do; the wrapper just falls through and returns the result it computed.

Finally, the code returns the result. Before executing the cache-miss part, the code updates the cache miss info and calls the make_key function.

Note: I couldn't get the nested list indentation working, so the answer's formatting may look a bit off.

Penicillium answered 9/7, 2020 at 16:20 Comment(1)
Note that the Fibonacci example is basically incorrect here: I did some tests with cache_info() and deduced that the order needs to be switched. Then this answer I found confirms my deduction: https://mcmap.net/q/195639/-how-to-determine-the-required-maxsize-of-lru_cacheBernitabernj

You can check out the source code here: https://github.com/python/cpython/blob/master/Lib/functools.py

Essentially it uses two data structures: a dictionary mapping function parameters to results, and a circular doubly linked list to keep track of your function call history.

The cache is essentially implemented using the following, which is pretty self-explanatory.

cache = {}
cache_get = cache.get
....
make_key = _make_key         # build a key from the function arguments
key = make_key(args, kwds, typed)
result = cache_get(key, sentinel)

The gist of updating the linked list is:

elif full:

    oldroot = root
    oldroot[KEY] = key
    oldroot[RESULT] = result

    # update the linked list to pop out the least recent function call information        
    root = oldroot[NEXT]
    oldkey = root[KEY]
    oldresult = root[RESULT]
    root[KEY] = root[RESULT] = None
    ......                    
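
For intuition, here is a small, self-contained sketch of the same idea (my own simplification, not the functools implementation): a mapping for lookups plus recency ordering for eviction, built on collections.OrderedDict. It only handles positional, hashable arguments, unlike the real lru_cache.

from collections import OrderedDict

def simple_lru_cache(maxsize=128):
    def decorator(user_function):
        cache = OrderedDict()                  # argument tuple -> result, kept in recency order
        def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)        # mark as most recently used
                return cache[args]
            result = user_function(*args)
            cache[args] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)      # evict the least recently used entry
            return result
        return wrapper
    return decorator

@simple_lru_cache(maxsize=2)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(10))   # 55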
Government answered 17/4, 2018 at 16:48 Comment(0)
