Cache Invalidation — Is there a General Solution?

"There are only two hard problems in Computer Science: cache invalidation and naming things."

Phil Karlton

Is there a general solution or method for invalidating a cache, i.e. for knowing when an entry is stale, so you are guaranteed to always get fresh data?

For example, consider a function getData() that gets data from a file. It caches it based on the last modified time of the file, which it checks every time it's called.
Then you add a second function transformData() which transforms the data, and caches its result for next time the function is called. It has no knowledge of the file - how do you add the dependency that if the file is changed, this cache becomes invalid?

You could call getData() every time transformData() is called and compare it with the value that was used to build the cache, but that could end up being very costly.

Tabling answered 27/7, 2009 at 14:45 Comment(6)
I believe he had something to do with writing X WindowsTabling
I think that title would be better as "Cache Invalidation -- Is there a General Solution?" as it refers to a specific class of caching problem.Hugohugon
No, he didn't know much computer science. I'm sure that his involvement in creating OpenGL, X11, and SSLv3 made him too busy to really study it much. :-)Disguise
There are only 2 hard problems in computer science: Cache invalidation. Naming things. And off-by-one errors.Lytta
I once heard this as "The two hardest things in Computer Science are cache invalidation, naming things, and off-by-one errors."Eclectic
Yeah, we added a new one :D twitter.com/julia_asterisk/status/829045121933000708 This happened sometime since 2014.Fugacity

What you are talking about is lifetime dependency chaining: one thing depends on another, which can be modified outside of its control.

If you have a deterministic function from (a, b) to c, where the same a and b always give the same c, but the cost of checking b is high, then you either:

  1. accept that you sometimes operate with out-of-date information and do not always check b
  2. do your level best to make checking b as fast as possible

You cannot have your cake and eat it...

Layering an additional cache keyed on a over the top affects the initial problem not one bit. If you chose option 1, you have whatever freedom you gave yourself and can cache more, but must remember to consider the validity of the cached value of b. If you chose option 2, you must still check b every time, but can fall back on the cache for a if b checks out.

If you layer caches you must consider whether you have violated the 'rules' of the system as a result of the combined behaviour.

If you know that a always has validity if b does then you can arrange your cache like so (pseudocode):

private map<b,map<a,c>> cache   // b -> (a -> c)
private func realFunction       // (a,b) -> c

get(a, b) 
{
    c result;
    map<a,c> endCache;
    if (cache[b] expired or not present)
    {
        remove all b -> * entries in cache;   
        endCache = new map<a,c>();      
        add to cache b -> endCache;
    }
    else
    {
        endCache = cache[b];     
    }
    if (endCache[a] not present)     // important line
    {
        result = realFunction(a,b); 
        endCache[a] = result;
    }
    else   
    {
        result = endCache[a];
    }
    return result;
}

Obviously, successive layering (say adding an input x) is trivial so long as, at each stage, the validity of the newly added input matches the a:b relationship for x:b and x:a.

However, it is quite possible that you could get three inputs whose validity was entirely independent (or cyclic), so no layering would be possible. This would mean the line marked // important line would have to change to

if (endCache[a] expired or not present)
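The pseudocode above can be sketched as runnable Python (names are hypothetical, and b's validity is modeled here as a simple TTL on the outer cache level):

```python
import time

class LayeredCache:
    """Two-level cache keyed first on b (which can expire), then on a."""

    def __init__(self, real_function, b_ttl=60.0):
        self.real_function = real_function  # (a, b) -> c
        self.b_ttl = b_ttl                  # how long a b entry stays valid
        self._cache = {}                    # b -> (expiry_time, {a: c})

    def get(self, a, b):
        entry = self._cache.get(b)
        if entry is None or entry[0] < time.monotonic():
            # b is missing or expired: discard every b -> * entry
            end_cache = {}
            self._cache[b] = (time.monotonic() + self.b_ttl, end_cache)
        else:
            end_cache = entry[1]
        if a not in end_cache:              # the "important line"
            end_cache[a] = self.real_function(a, b)
        return end_cache[a]
```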

Emmet answered 27/7, 2009 at 15:7 Comment(1)
or maybe, if the cost of checking b is high, you use pubsub so that when b changes it notifies c. The Observer pattern is common.Coacervate

The problem in cache invalidation is that stuff changes without us knowing about it. So, in some cases, a solution is possible if there is some other thing that does know about it and can notify us. In the given example, the getData function could hook into the file system, which does know about all changes to files, regardless of what process changes the file, and this component in turn could notify the component that transforms the data.

I don't think there is any general magic fix to make the problem go away. But in many practical cases there may very well be opportunities to transform a "polling"-based approach into an "interrupt"-based one, which can make the problem simply go away.
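The "interrupt"-based idea can be sketched with a plain observer (all names here are hypothetical; a real system would hook OS file-change notifications such as inotify rather than this toy source):

```python
class FileSource:
    """Hypothetical data source that pushes change notifications to observers."""

    def __init__(self, contents=""):
        self._contents = contents
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def write(self, contents):
        self._contents = contents
        for notify in self._observers:   # "interrupt": push instead of poll
            notify()

    def read(self):
        return self._contents


class TransformCache:
    """Caches a transform of the source; invalidated by notification, not polling."""

    def __init__(self, source):
        self._source = source
        self._value = None
        source.subscribe(self.invalidate)

    def invalidate(self):
        self._value = None

    def get(self):
        if self._value is None:
            self._value = self._source.read().upper()  # stand-in transform
        return self._value
```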

Lytta answered 28/2, 2011 at 16:50 Comment(0)

IMHO, Functional Reactive Programming (FRP) is in a sense a general way to solve cache invalidation.

Here is why: stale data in FRP terminology is called a glitch. One of FRP's goals is to guarantee absence of glitches.

FRP is explained in more detail in this 'Essence of FRP' talk and in this SO answer.

In the talk, Cells represent cached Objects/Entities, and a Cell is refreshed if one of its dependencies is refreshed.

FRP hides the plumbing code associated with the dependency graph and makes sure that there are no stale Cells.
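As a toy illustration of the Cell idea (not a real FRP implementation, which would also have to guarantee glitch freedom in diamond-shaped dependency graphs), a dependent cell can recompute whenever a cell it depends on changes:

```python
class Cell:
    """Toy reactive cell: recomputes when any dependency changes."""

    def __init__(self, compute, deps=()):
        self._compute = compute
        self._deps = list(deps)
        self._dependents = []
        for d in self._deps:
            d._dependents.append(self)
        # initial value: source cells take no arguments
        self._value = (compute(*(d.value() for d in self._deps))
                       if self._deps else compute())

    def value(self):
        return self._value

    def set(self, new_value):
        # source cell: assign and propagate to dependents
        self._value = new_value
        for dep in self._dependents:
            dep._refresh()

    def _refresh(self):
        self._value = self._compute(*(d.value() for d in self._deps))
        for dep in self._dependents:
            dep._refresh()
```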


Another way (different from FRP) that I can think of is wrapping the computed value (of type b) in a writer monad, Writer (Set uuid) b (Haskell notation), where Set uuid contains the identifiers of all the mutable values on which the computed value b depends. So a uuid is some kind of unique identifier for a mutable value/variable (say a row in a database) on which the computed b depends.

Combine this idea with combinators that operate on this kind of writer monad, and that might lead to a general cache-invalidation solution, provided you only use these combinators to calculate a new b. Such combinators (say a special version of filter) take Writer monads and (uuid, a) pairs as inputs, where a is a mutable datum/variable identified by uuid.

So every time you mutate any of the "original" data (uuid, a) (say the normalized data in a database from which b was computed), you can invalidate the caches that contain b, because the Set uuid in the Writer monad tells you exactly which computed values are affected.

That is, any time you mutate something with a given uuid, you broadcast this mutation to all the caches, and they invalidate the values b that depend on the mutated value, because the Writer monad wrapping each b can tell whether that b depends on said uuid or not.

Of course, this only pays off if you read much more often than you write.
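A Python stand-in for this idea (hypothetical names; a real Haskell version would use Writer (Set uuid)): each computed value carries the set of ids it depends on, and a mutation broadcast drops exactly the cached entries whose sets contain the mutated id.

```python
class Tracked:
    """A value paired with the set of ids of mutable inputs it depends on."""

    def __init__(self, value, deps=frozenset()):
        self.value = value
        self.deps = frozenset(deps)

def tracked_input(uid, value):
    """Wrap a mutable input (uuid, a) as a Tracked value depending on itself."""
    return Tracked(value, {uid})

def tmap(f, *tracked):
    """Combinator: apply f to the values, union the dependency sets."""
    deps = frozenset().union(*(t.deps for t in tracked))
    return Tracked(f(*(t.value for t in tracked)), deps)

class DepCache:
    """Cache whose entries are invalidated by broadcasting a mutated uuid."""

    def __init__(self):
        self._entries = {}  # key -> Tracked

    def put(self, key, tracked):
        self._entries[key] = tracked

    def get(self, key):
        t = self._entries.get(key)
        return None if t is None else t.value

    def mutated(self, uid):
        # drop every cached value whose dependency set contains uid
        self._entries = {k: t for k, t in self._entries.items()
                         if uid not in t.deps}
```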


A third, practical approach is to use materialized views in databases and use them as caches. AFAIK they also aim to solve the invalidation problem. This of course limits the operations that connect the mutable data to the derived data.

Jackal answered 11/8, 2016 at 7:29 Comment(0)

I'm working on an approach right now based on PostSharp and memoizing functions. I've run it past my mentor, and he agrees that it's a good implementation of caching in a content-agnostic way.

Every function can be marked with an attribute that specifies its expiry period. Each function marked in this way is memoized and the result is stored into the cache, with a hash of the function call and parameters used as the key. I'm using Velocity for the backend, which handles distribution of the cache data.
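The described scheme (the actual PostSharp/Velocity code isn't shown; this is an in-process Python analogue with hypothetical names) roughly amounts to a memoizing decorator with a per-function expiry period, keyed on the function and its arguments:

```python
import functools
import time

_CACHE = {}  # (function name, args) -> (expiry time, result)

def cached(expiry_seconds):
    """Memoize a function for expiry_seconds, keyed on name and arguments."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = (func.__qualname__, args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            hit = _CACHE.get(key)
            if hit is not None and hit[0] > now:
                return hit[1]          # still fresh: serve from cache
            result = func(*args, **kwargs)
            _CACHE[key] = (now + expiry_seconds, result)
            return result
        return wrapper
    return decorator
```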

Riboflavin answered 27/7, 2009 at 14:54 Comment(0)

If you're going to getData() every time you do the transform, then you've eliminated the entire benefit of the cache.

For your example, a solution would be to store, alongside the transformed data, the filename and last-modified time of the file it was generated from (you already stored these in whatever data structure getData() returned, so you just copy that record into the structure returned by transformData()). Then, when transformData() is called again, check the file's last-modified time against the stored one.
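That suggestion can be sketched as follows (hypothetical names; the transformation itself is a stand-in):

```python
import os

def get_data(path):
    """Read the file, returning (contents, mtime) so callers can track staleness."""
    with open(path) as f:
        contents = f.read()
    return contents, os.path.getmtime(path)

_transform_cache = {}  # path -> (mtime the result was built from, result)

def transform_data(path):
    """Re-transform only when the file's mtime differs from the cached one."""
    cached = _transform_cache.get(path)
    if cached is not None and cached[0] == os.path.getmtime(path):
        return cached[1]
    contents, mtime = get_data(path)
    result = contents.upper()  # stand-in transformation
    _transform_cache[path] = (mtime, result)
    return result
```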

Binetta answered 27/7, 2009 at 15:0 Comment(0)

Is there a general solution or method to creating a cache, to know when an entry is stale, so you are guaranteed to always get fresh data?

No, because all data is different. Some data may be "stale" after a minute, some after an hour, and some may be fine for days or months.

Regarding your specific example, the simplest solution is to have a 'cache checking' function for files, which you call from both getData and transformData.

Lingo answered 27/7, 2009 at 14:59 Comment(0)

There is no general solution but:

  • Your cache can act as a proxy (pull). Assume your cache knows the timestamp of the origin's last change. When someone calls getData(), the cache asks the origin for its last-change timestamp; if it is unchanged, the cache returns its content, otherwise it refreshes its content from the source and returns that. (A variation: the client sends the timestamp directly with the request, and the source only returns content if its timestamp differs.)

  • You can instead use a notification process (push). The cache observes the source; when the source changes, it sends a notification to the cache, which is then flagged as "dirty". When someone calls getData(), the cache first refreshes itself from the source and clears the "dirty" flag, then returns its content.

Generally speaking, the choice depends on:

  • The frequency: many calls to getData() favor push, to avoid flooding the source with timestamp requests
  • Your access to the source: do you own the source model? If not, chances are you cannot add any notification process.

Note: since using timestamps is the traditional way HTTP proxies work, another approach is to share a hash of the stored content. The only ways I know for two entities to stay in sync are: I call you (pull) or you call me (push). That's all.
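The pull variant can be sketched like this (hypothetical names; a version counter stands in for the last-change timestamp):

```python
class Origin:
    """Hypothetical source exposing its content and a last-change marker."""

    def __init__(self, content):
        self._content = content
        self._version = 0  # stands in for the last-change timestamp

    def timestamp(self):
        return self._version

    def fetch(self):
        return self._content

    def update(self, content):
        self._content = content
        self._version += 1


class PullCache:
    """Pull-style proxy: revalidate with a cheap timestamp check on each read."""

    def __init__(self, origin):
        self._origin = origin
        self._stamp = None
        self._content = None
        self.fetches = 0  # counts full fetches, to show revalidation is cheap

    def get_data(self):
        stamp = self._origin.timestamp()
        if stamp != self._stamp:          # origin changed (or first read)
            self._content = self._origin.fetch()
            self.fetches += 1
            self._stamp = stamp
        return self._content
```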

Excusatory answered 22/6, 2015 at 13:15 Comment(0)

Caching is hard because you need to consider: 1) the cache may span multiple nodes, which need to reach consensus; 2) the invalidation time; 3) race conditions when multiple get/set operations happen concurrently.

This is good reading: https://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/

Palumbo answered 14/5, 2018 at 3:58 Comment(0)

Perhaps cache-oblivious algorithms would be the most general (or at least, the least dependent on hardware configuration), since they'll use the fastest cache first and move on from there. Here's an MIT lecture on it: Cache Oblivious Algorithms

Putman answered 27/7, 2009 at 14:50 Comment(1)
I think that he's not talking about hardware caches - he's talking about his getData() code having a feature that "caches" the data he got from a file into memory.Binetta

© 2022 - 2024 — McMap. All rights reserved.