Caffeine does not implement LRU as its cache eviction policy. Instead, Caffeine uses a policy called TinyLFU. The Caffeine documentation includes a page on Efficiency, which discusses the rationale for this design choice. Quoting that page:

> TinyLfu relies on a frequency sketch to probabilistically estimate the historic usage of an entry.
Since Caffeine does not in fact implement LRU, I don't think you can reliably expect it to exhibit strict LRU behavior when you inspect the entries in the cache.
If you absolutely must have LRU behavior, then the JDK standard LinkedHashMap is a good, straightforward choice. You would need to subclass it and override removeEldestEntry with logic that signals when the cache has grown larger than you want it to. If multi-threaded usage is necessary, you'll also need to wrap operations in appropriate synchronization.
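As a rough sketch of that approach (the class name and capacity field here are illustrative, not anything mandated by the JDK):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal bounded LRU cache built on LinkedHashMap.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder = true: iteration order runs from least- to
        // most-recently accessed, which is what gives LRU semantics.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once capacity is exceeded.
        return size() > maxEntries;
    }
}
```

For simple multi-threaded use you could wrap an instance with `Collections.synchronizedMap(...)`, at the cost of a coarse-grained lock around every operation.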
Caffeine was heavily inspired by the Guava Cache, which similarly provides concurrent access and has approximate LRU behavior. A quick test of your code against the Guava cache shows similar results. I am not aware of any standard library that would provide predictable, externally observable LRU results and true concurrent access without a coarse-grained lock.
You might reconsider whether strict, externally observable LRU results are really a requirement. By its nature, a cache is fast temporary storage that provides optimized lookups. I would not expect program behavior to change drastically based on whether the cache implements strict LRU, an approximation of LRU, LFU, or some other eviction policy.
This prior question also has a great discussion of LRU cache options in Java.
How would you implement an LRU cache in Java?