ConcurrentLinkedHashMap offered access-order iteration via its ascending and descending snapshots. This was useful for cases like persisting the hottest entries for a warm server restart. Otherwise, views and iterators were in the encounter order of the underlying ConcurrentHashMap.
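For context, a rough sketch of taking such a snapshot with ConcurrentLinkedHashMap, assuming its Builder and descendingKeySetWithLimit method; the capacity and limit values are illustrative:

import com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap;

var map = new ConcurrentLinkedHashMap.Builder<String, Integer>()
    .maximumWeightedCapacity(1_000)
    .build();
map.put("a", 1);

// Snapshot of the hottest keys (most to least likely to be retained),
// e.g. to persist for warming the cache after a restart
var hottest = map.descendingKeySetWithLimit(100);
hottest.forEach(key -> System.out.println(key));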
Caffeine offers snapshots in the eviction policy's hottest/coldest order and the expiration policy's youngest/oldest order. See Cache.policy().
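For example, a bounded cache exposes those snapshots through its policy interface. A small sketch, assuming a size bound and expireAfterAccess; the limits are illustrative:

import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

Cache<String, Integer> cache = Caffeine.newBuilder()
    .expireAfterAccess(5, TimeUnit.MINUTES)
    .maximumSize(1_000)
    .build();

// Hottest-to-coldest snapshot from the eviction policy (present when size bounded)
cache.policy().eviction().ifPresent(eviction ->
    eviction.hottest(100).forEach((key, value) -> System.out.println(key)));

// Oldest-to-youngest snapshot from the expireAfterAccess policy
cache.policy().expireAfterAccess().ifPresent(expiration ->
    expiration.oldest(100).forEach((key, value) -> System.out.println(key)));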
Insertion order is not that difficult to implement because writes are rare, exclusive, and items are not being shifted around frequently. The simplest approach is to use a Map computation to maintain a secondary data structure. If you want that in addition to the cache, then Caffeine's asMap() view offers atomic versions of those computations. This way you can have fast random access reads via a ConcurrentHashMap and ordered iteration for a background process.
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.concurrent.ConcurrentHashMap;

var data = new ConcurrentHashMap<String, Integer>();
var order = Collections.synchronizedMap(new LinkedHashMap<String, Integer>());

// The per-key computation keeps both structures in sync for a write
data.compute("a", (key, oldValue) -> {
  order.put(key, 1);
  return 1;
});

// Locks the map during iteration, blocking writes; consider snapshotting first
order.forEach((key, value) -> System.out.printf("%s=%s%n", key, value));
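The same pattern works on top of the cache itself by using the atomic compute from Caffeine's asMap() view. A sketch, with an illustrative cache configuration and removal handling omitted:

import java.util.Collections;
import java.util.LinkedHashMap;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

Cache<String, Integer> cache = Caffeine.newBuilder().maximumSize(1_000).build();
var order = Collections.synchronizedMap(new LinkedHashMap<String, Integer>());

// The asMap() computation is atomic, so the secondary structure stays
// consistent with the cache entry being written
cache.asMap().compute("a", (key, oldValue) -> {
  order.put(key, 1);
  return 1;
});

A removal listener would also be needed to prune the secondary map when the cache discards an entry; that bookkeeping is left out of the sketch.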
Vavr's LinkedHashMap might be useful if you need concurrent iteration or want to avoid the cost of a full copy. It is a persistent data structure, where writes produce a new immutable version. That could be used above or directly if you don't need any caching concepts.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import io.vavr.collection.LinkedHashMap;
import io.vavr.collection.Map;

volatile Map<Integer, Integer> data = LinkedHashMap.empty();
final Lock lock = new ReentrantLock();

// Single writer: replace the immutable map while holding the lock
lock.lock();
try {
  data = data.put(1, 2).put(3, 4);
} finally {
  lock.unlock();
}

// Multiple readers: each sees a consistent, immutable snapshot
System.out.println(data);
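As a sketch of using it with the computation above, the persistent map can be swapped atomically inside the write so readers iterate an immutable snapshot without locking; the AtomicReference is an illustrative choice here, not something either library requires:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

import io.vavr.collection.LinkedHashMap;
import io.vavr.collection.Map;

var data = new ConcurrentHashMap<String, Integer>();
AtomicReference<Map<String, Integer>> order = new AtomicReference<>(LinkedHashMap.empty());

// The per-key computation performs the write; the compare-and-set keeps
// concurrent writes to different keys safe
data.compute("a", (key, oldValue) -> {
  order.updateAndGet(map -> map.put(key, 1));
  return 1;
});

// Readers iterate a consistent, immutable snapshot without blocking writers
System.out.println(order.get());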