LRU with Caffeine
I'm trying to use Caffeine as an LRU cache, so entries that were added first should be evicted first. I ran this code:

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

final Cache<Object, Object> map = Caffeine.newBuilder()
        .maximumSize(10)
        .initialCapacity(10)
        .build();

for (long i = 0; i < 20; i++) {
    map.put(i, i);
}

map.cleanUp();
System.out.println(map.getAllPresent(map.asMap().keySet()));

Which prints:

{0=0, 1=1, 2=2, 3=3, 4=4, 5=5, 6=6, 7=7, 8=8, 19=19}

But I expected

{10=10, 11=11, 12=12, 13=13, 14=14, 15=15, 16=16, 17=17, 18=18, 19=19}

What am I doing wrong?

Pagano answered 9/1, 2016 at 14:51

Caffeine does not implement LRU as its cache eviction policy. Instead, Caffeine uses a policy called TinyLFU. The Caffeine documentation includes a page on Efficiency, which discusses the rationale for this design choice. Quoting that page:

TinyLfu relies on a frequency sketch to probabilistically estimate the historic usage of an entry.

Since Caffeine does not in fact implement LRU, I don't think you can reliably expect it to exhibit strict LRU behavior when you inspect the entries in the cache.
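
That said, Caffeine does let you observe its own notion of eviction order through its Policy API. As a rough sketch (the cache setup mirrors the question; coldest(n) returns a snapshot of up to n entries, ordered from most to least likely to be evicted):

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

Cache<Object, Object> cache = Caffeine.newBuilder()
        .maximumSize(10)
        .build();
for (long i = 0; i < 20; i++) {
    cache.put(i, i);
}
cache.cleanUp();

// The eviction view is an Optional; it is present for size-bounded caches.
cache.policy().eviction().ifPresent(eviction ->
        // coldest(10): snapshot ordered from the entry most likely to be
        // evicted next to the least likely
        System.out.println(eviction.coldest(10)));

Just don't expect that order to match LRU.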

If you absolutely must have LRU behavior, then the JDK's standard LinkedHashMap is a good, straightforward choice. You would need to subclass it and override removeEldestEntry with logic to signal when the cache has grown larger than you want. If multi-threaded usage is necessary, you'll need to wrap operations in appropriate synchronization.
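
Here's a minimal sketch of that approach (the class name LruCache and its sizing are my own, not from any library; passing accessOrder = true is what turns insertion order into LRU order):

import java.util.LinkedHashMap;
import java.util.Map;

// A LinkedHashMap in access order evicts its eldest (least recently used)
// entry once removeEldestEntry starts returning true.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    LruCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true: iteration follows access order
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
    }
}

For concurrent use you could wrap it in Collections.synchronizedMap(new LruCache<>(10)), which is exactly the coarse-grained lock trade-off mentioned below.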

Caffeine was heavily inspired by the Guava Cache, which similarly provides concurrent access and has approximate LRU behavior. A quick test of your code against the Guava cache shows similar results. I am not aware of any standard library that would provide predictable, externally observable LRU results and true concurrent access without a coarse-grained lock.

You might reconsider whether it's really a requirement to have strict, externally observable LRU results. By its nature, a cache is fast temporary storage that provides optimized lookups. I would not expect program behavior to change drastically depending on whether the cache implements strict LRU, an approximation of LRU, LFU, or some other eviction policy.

This prior question also has a great discussion of LRU cache options in Java.

How would you implement an LRU cache in Java?

Oaf answered 9/1, 2016 at 17:26 Comments (5):
Thanks, but I need it to be concurrent. References to ConcurrentLinkedHashMap lead to Caffeine. - Pagano
@Pagano, I just edited my answer to state that I'm not aware of any library that provides both concurrency and strict, externally observable LRU results. However, I also added a potential rationale for why you may not really need absolutely strict LRU as a program requirement. At this point, the answer contains all the options I'm aware of. I hope this helps. Good luck! - Oaf
@Pagano Caffeine v1.x used LRU, and the goal in 2.x was to adopt a more efficient alternative. Iteration in eviction order is only exposed through the Policy API. This corresponds to ConcurrentLinkedHashMap's optional ascending/descending iterators. Note that CLHM is relatively strict LRU, but due to concurrency some skew might be visible. Guava is strict if concurrencyLevel(1) is used, but does not expose eviction order. - Panicle
@BenManes, thanks! Great to have a response from the author. - Oaf
Yes, version 1.0.0 works as expected and respects the number of entries. It's a pity that in the latest version, 3.1.8, setting maximumSize(10) has no effect. Nevertheless, thanks @BenManes. - Knox
