How would you implement an LRU cache in Java?
C

21

175

Please don't say EHCache or OSCache, etc. Assume for purposes of this question that I want to implement my own using just the SDK (learning by doing). Given that the cache will be used in a multithreaded environment, which data structures would you use? I've already implemented one using LinkedHashMap and Collections#synchronizedMap, but I'm curious whether any of the new concurrent collections would be better candidates.

UPDATE: I was just reading through Yegge's latest when I found this nugget:

If you need constant-time access and want to maintain the insertion order, you can't do better than a LinkedHashMap, a truly wonderful data structure. The only way it could possibly be more wonderful is if there were a concurrent version. But alas.

I was thinking almost exactly the same thing before I went with the LinkedHashMap + Collections#synchronizedMap implementation I mentioned above. Nice to know I hadn't just overlooked something.

Based on the answers so far, it sounds like my best bet for a highly concurrent LRU would be to extend ConcurrentHashMap using some of the same logic that LinkedHashMap uses.

Cantharides answered 21/10, 2008 at 11:34 Comment(3)
O(1) required version: #23772602Bustamante
Very similar question also hereVilipend
A sample LRU implementation : hereFerruginous
C
19

If I were doing this again from scratch today, I'd use Guava's CacheBuilder.
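
For reference, a minimal usage sketch of that approach (assuming Guava is on the classpath; the size limit of 100 is just an example):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class GuavaLruExample {
    public static void main(String[] args) {
        // maximumSize bounds the cache; entries are evicted in roughly least-recently-used order
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(100)
                .build();

        cache.put("key", "value");
        System.out.println(cache.getIfPresent("key"));     // "value"
        System.out.println(cache.getIfPresent("missing")); // null
    }
}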

Cantharides answered 13/12, 2012 at 2:21 Comment(0)
C
105

I like lots of these suggestions, but for now I think I'll stick with LinkedHashMap + Collections.synchronizedMap. If I do revisit this in the future, I'll probably work on extending ConcurrentHashMap in the same way LinkedHashMap extends HashMap.

UPDATE:

By request, here's the gist of my current implementation.

private class LruCache<A, B> extends LinkedHashMap<A, B> {
    private final int maxEntries;

    public LruCache(final int maxEntries) {
        super(maxEntries + 1, 1.0f, true);
        this.maxEntries = maxEntries;
    }

    /**
     * Returns <tt>true</tt> if this <code>LruCache</code> has more entries than the maximum specified when it was
     * created.
     *
     * <p>
     * This method <em>does not</em> modify the underlying <code>Map</code>; it relies on the implementation of
     * <code>LinkedHashMap</code> to do that, but that behavior is documented in the JavaDoc for
     * <code>LinkedHashMap</code>.
     * </p>
     *
     * @param eldest
     *            the <code>Entry</code> in question; this implementation doesn't care what it is, since the
     *            implementation is only dependent on the size of the cache
     * @return <tt>true</tt> if this cache currently holds more entries than the maximum specified at construction
     * @see java.util.LinkedHashMap#removeEldestEntry(Map.Entry)
     */
    @Override
    protected boolean removeEldestEntry(final Map.Entry<A, B> eldest) {
        return super.size() > maxEntries;
    }
}

Map<String, String> example = Collections.synchronizedMap(new LruCache<String, String>(CACHE_SIZE));
Cantharides answered 23/12, 2009 at 15:43 Comment(15)
I would however like to use encapsulation here instead of inheritance. This is something I learned from Effective Java.Yoruba
@KapilD It's been a while, but I'm almost positive the JavaDocs for LinkedHashMap explicitly endorse this method for creating an LRU implementation.Cantharides
I guess this is what you are referring to. commons.apache.org/collections/api/org/apache/commons/…Yoruba
@HankGay Java's LinkedHashMap (with third parameter = true) is not a LRU cache. This is because re-putting an entry does not affect the order of the entries (a real LRU cache will place the last-inserted entry at the back of the iteration order regardless of whether that entry initially exists in the cache)Eastwards
@Eastwards Is iteration-order actually part of the accepted contract for a key-value cache? As long as the least-recently used key gets evicted when needed, it would meet my expectations for something billing itself "LRU", since iteration order across a dict/map/hash/whatever usually isn't guaranteed.Cantharides
@HankGay no I wasn't talking about iteration-order, if I reput an entry into a LRU cache, that entry should be considered the "freshest". In other words, a new entry would not "kick" that entry out of the cache (unless the max size of the cache is 1). But if we reput an entry into a LinkedHashMap, that entry is not considered the "freshest", and any new entry may well "kick" that entry out of the cache.Eastwards
@Eastwards Ah. Yes, that is unexpected. I'll have to dig into that, thanks.Cantharides
@Pacerier, Good point. How about this - When an entry is to be put(), we first remove it and then put it again. This should ensure that it is the freshest entry in the cache.Leucite
@Eastwards I don't see this behavior at all. With accessOrder enabled map, all actions make an entry as most recently used (freshest): initial insertion, value update and value retrieval. Am I missing something?Unbalance
@Eastwards "re-putting an entry does not affect the order of the entries", this is incorrect. If you look into LinkedHashMap's implementation, for "put" method, it inherits implementation from HashMap. And HashMap's Javadoc says "If the map previously contained a mapping for the key, the old value is replaced". And if you check out its source code, when replacing the old value, it will call recordAccess method, and in LinkedHashMap's recordAccess method, it looks like this: if (lm.accessOrder) { lm.modCount++; remove(); addBefore(lm.header);}Dialect
@HankGay super(maxEntries + 1, 1.0f, true); I don't see why loadFactor needs to be 1.0f here. Isn't it better to set it to a lower value such as 0.75? Why making it a heavily loaded map instead of doing something like super(maxEntries/0.75, 0.75f, true)?Dialect
@Dialect I know the max size of the cache, so why would I ever want any load factor other than 1.0f? The goal of doing it this way is to have exactly one allocation (the initial one) and avoid re-hashing the existing contents of the Map when it has to be grown to hold the new element.Cantharides
@HankGay The data stored in the cache may change, and their hash code may collide when the hash map is heavily loaded. If you use a load factor 1.0, it is likely a hash map slot may contain multiple linked entries in it due to collision, which will lead to lower performance when you search them in the cache. If you use a smaller load factor for example 0.75, some space may be wasted but it will make searching faster because of less collision. When using a smaller load factor, if you use a big enough initial capacity, the map can have one allocation and avoid re-hashing too.Dialect
@Dialect Oracle is finally changing the way chaining is implemented to reduce the impact of malicious input: the chain will switch to a balanced tree impl if it gets too long. I think that lands in Java 8. Before that, you're kind of betting on few collisions (generally safe, unless you expect malicious input), and counting on the linear probe to be fairly localized (which may or may not be a good bet).Cantharides
Why is there maxEntries + 1?Creech
M
11

This is round two.

The first round was what I came up with; then I reread the comments with the domain a bit more ingrained in my head.

So here is the simplest version, with a unit test that shows it works, based on some other versions.

First the non-concurrent version:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruSimpleCache<K, V> implements LruCache <K, V>{

    Map<K, V> map;


    public LruSimpleCache (final int limit) {
           map = new LinkedHashMap <K, V> (16, 0.75f, true) {
               @Override
               protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
                   return super.size() > limit;
               }
           };
    }
    @Override
    public void put ( K key, V value ) {
        map.put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map.get(key);
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        V value =  map.get ( key );
        if (value!=null) {
            map.remove ( key );
            map.put(key, value);
        }
        return value;
    }

    @Override
    public void remove ( K key ) {
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }


}

The true flag tracks the access order of gets and puts (see the JavaDocs). Without the true flag in the constructor, removeEldestEntry would just implement a FIFO cache (see the notes below on FIFO and removeEldestEntry).
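
To make the effect of that flag concrete, here is a small standalone snippet of mine (not part of the answer's code) contrasting accessOrder = true with the default insertion order:

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: a get() moves the key to the end (LRU order)
        Map<Integer, String> accessOrdered = new LinkedHashMap<>(16, 0.75f, true);
        accessOrdered.put(1, "a");
        accessOrdered.put(2, "b");
        accessOrdered.get(1);
        System.out.println(accessOrdered.keySet()); // [2, 1]

        // accessOrder = false (the default): iteration stays in insertion order (FIFO)
        Map<Integer, String> insertionOrdered = new LinkedHashMap<>(16, 0.75f, false);
        insertionOrdered.put(1, "a");
        insertionOrdered.put(2, "b");
        insertionOrdered.get(1);
        System.out.println(insertionOrdered.keySet()); // [1, 2]
    }
}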

Here is the test that proves it works as an LRU cache:

public class LruSimpleTest {

    @Test
    public void test () {
        LruCache <Integer, Integer> cache = new LruSimpleCache<> ( 4 );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();


        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();


        if ( !ok ) die ();

    }

Now for the concurrent version...

package org.boon.cache;

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LruSimpleConcurrentCache<K, V> implements LruCache<K, V> {

    final CacheMap<K, V>[] cacheRegions;


    private static class CacheMap<K, V> extends LinkedHashMap<K, V> {
        private final ReadWriteLock readWriteLock;
        private final int limit;

        CacheMap ( final int limit, boolean fair ) {
            super ( 16, 0.75f, true );
            this.limit = limit;
            readWriteLock = new ReentrantReadWriteLock ( fair );

        }

        protected boolean removeEldestEntry ( final Map.Entry<K, V> eldest ) {
            return super.size () > limit;
        }


        @Override
        public V put ( K key, V value ) {
            readWriteLock.writeLock ().lock ();

            V old;
            try {

                old = super.put ( key, value );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return old;

        }


        @Override
        public V get ( Object key ) {
            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.get ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        @Override
        public V remove ( Object key ) {

            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.remove ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public V getSilent ( K key ) {
            readWriteLock.writeLock ().lock ();

            V value;

            try {

                value = this.get ( key );
                if ( value != null ) {
                    this.remove ( key );
                    this.put ( key, value );
                }
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public int size () {
            readWriteLock.readLock ().lock ();
            int size = -1;
            try {
                size = super.size ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return size;
        }

        public String toString () {
            readWriteLock.readLock ().lock ();
            String str;
            try {
                str = super.toString ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return str;
        }


    }

    public LruSimpleConcurrentCache ( final int limit, boolean fair ) {
        int cores = Runtime.getRuntime ().availableProcessors ();
        int stripeSize = cores < 2 ? 4 : cores * 2;
        cacheRegions = new CacheMap[ stripeSize ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    public LruSimpleConcurrentCache ( final int concurrency, final int limit, boolean fair ) {

        cacheRegions = new CacheMap[ concurrency ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    private int stripeIndex ( K key ) {
        int hashCode = key.hashCode () * 31;
        // mask off the sign bit so a negative hash code cannot produce a negative array index
        return ( hashCode & 0x7fffffff ) % ( cacheRegions.length );
    }

    private CacheMap<K, V> map ( K key ) {
        return cacheRegions[ stripeIndex ( key ) ];
    }

    @Override
    public void put ( K key, V value ) {

        map ( key ).put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map ( key ).get ( key );
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        return map ( key ).getSilent ( key );

    }

    @Override
    public void remove ( K key ) {
        map ( key ).remove ( key );
    }

    @Override
    public int size () {
        int size = 0;
        for ( CacheMap<K, V> cache : cacheRegions ) {
            size += cache.size ();
        }
        return size;
    }

    public String toString () {

        StringBuilder builder = new StringBuilder ();
        for ( CacheMap<K, V> cache : cacheRegions ) {
            builder.append ( cache.toString () ).append ( '\n' );
        }

        return builder.toString ();
    }


}

You can see why I cover the non-concurrent version first. The above creates stripes to reduce lock contention: it hashes the key and then uses that hash to look up the actual cache region. This makes the limit more of a suggestion/rough guess, within a fair amount of error, depending on how well spread your keys' hash algorithm is.

Here is the test to show that the concurrent version probably works. :) (Testing it under fire would be the real way.)

public class SimpleConcurrentLRUCache {


    @Test
    public void test () {
        LruCache <Integer, Integer> cache = new LruSimpleConcurrentCache<> ( 1, 4, false );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );

        puts (cache);
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();


        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();

        cache.put ( 8, 8 );
        cache.put ( 9, 9 );

        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();


        puts (cache);


        if ( !ok ) die ();

    }


    @Test
    public void test2 () {
        LruCache <Integer, Integer> cache = new LruSimpleConcurrentCache<> ( 400, false );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        for (int index =0 ; index < 5_000; index++) {
            cache.get(0);
            cache.get ( 1 );
            cache.put ( 2, index  );
            cache.put ( 3, index );
            cache.put(index, index);
        }

        boolean ok = cache.getSilent ( 0 ) == 0 || die ();
        ok |= cache.getSilent ( 1 ) == 1 || die ();
        ok |= cache.getSilent ( 2 ) != null || die ();
        ok |= cache.getSilent ( 3 ) != null || die ();

        ok |= cache.size () < 600 || die();
        if ( !ok ) die ();



    }

}

This is the last post. The first post I deleted, as it was an LFU, not an LRU cache.

I thought I would give this another go. I was trying to come up with the simplest version of an LRU cache using the standard JDK without too much implementation.

Here is what I came up with. My first attempt was a bit of a disaster, as I implemented an LFU instead of an LRU, and then I added FIFO and LRU support to it... and then I realized it was becoming a monster. Then I started talking to my buddy John, who was barely interested, and then I described at great length how I implemented an LFU, LRU and FIFO and how you could switch between them with a simple enum arg, and then I realized that all I really wanted was a simple LRU. So ignore the earlier post from me, and let me know if you want to see an LRU/LFU/FIFO cache that is switchable via an enum... no? Ok, here we go.

The simplest possible LRU using just the JDK. I implemented both a concurrent version and a non-concurrent version.

I created a common interface (it is minimalist, so it is likely missing a few features that you would like, but it works for my use cases; if you would like to see feature XYZ, let me know... I live to write code).

public interface LruCache<KEY, VALUE> {
    void put ( KEY key, VALUE value );

    VALUE get ( KEY key );

    VALUE getSilent ( KEY key );

    void remove ( KEY key );

    int size ();
}

You may wonder what getSilent is. I use it for testing. getSilent does not change the LRU score of an item.

First the non-concurrent one....

import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

public class LruCacheNormal<KEY, VALUE> implements LruCache<KEY,VALUE> {

    Map<KEY, VALUE> map = new HashMap<> ();
    Deque<KEY> queue = new LinkedList<> ();
    final int limit;


    public LruCacheNormal ( int limit ) {
        this.limit = limit;
    }

    public void put ( KEY key, VALUE value ) {
        VALUE oldValue = map.put ( key, value );

        /*If there was already an object under this key,
         then remove it before adding to queue
         Frequently used keys will be at the top so the search could be fast.
         */
        if ( oldValue != null ) {
            queue.removeFirstOccurrence ( key );
        }
        queue.addFirst ( key );

        if ( map.size () > limit ) {
            final KEY removedKey = queue.removeLast ();
            map.remove ( removedKey );
        }

    }


    public VALUE get ( KEY key ) {

        /* Frequently used keys will be at the top so the search could be fast.
           Only touch the queue for keys that are actually cached, so misses don't pollute it. */
        VALUE value = map.get ( key );
        if ( value != null ) {
            queue.removeFirstOccurrence ( key );
            queue.addFirst ( key );
        }
        return value;
    }


    public VALUE getSilent ( KEY key ) {

        return map.get ( key );
    }

    public void remove ( KEY key ) {

        /* Frequently used keys will be at the top so the search could be fast.*/
        queue.removeFirstOccurrence ( key );
        map.remove ( key );
    }

    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }
}

The queue.removeFirstOccurrence is a potentially expensive operation if you have a large cache. One could take LinkedList as an example and add a reverse-lookup hash map from element to node to make remove operations A LOT FASTER and more consistent (a rough sketch of that idea appears a few paragraphs below). I started to, but then realized I don't need it. But... maybe...

When put is called, the key gets added to the queue. When get is called, the key gets removed and re-added to the top of the queue.

If your cache is small and building an item is expensive, then this should be a good cache. If your cache is really large, then the linear search could be a bottleneck, especially if you don't have hot areas of the cache. The more intense the hot spots, the faster the linear search, as hot items are always at the top of the linear search. Anyway... what is needed for this to go faster is to write another LinkedList with a remove operation that uses a reverse element-to-node lookup; then removing would be about as fast as removing a key from a hash map.

If you have a cache under 1,000 items, this should work out fine.
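
Here is a rough sketch of that reverse-lookup idea (my own illustration with made-up names, not the answer's code): a small doubly linked list of keys plus a HashMap from key to node, so both "move to front" and eviction of the oldest key become O(1) instead of a linear scan.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch: a deque of keys plus a key -> node index so unlinking is O(1).
class IndexedKeyDeque<K> {

    private static final class Node<T> {
        final T key;
        Node<T> prev, next;
        Node(T key) { this.key = key; }
    }

    private final Map<K, Node<K>> index = new HashMap<>();
    private Node<K> head; // most recently used
    private Node<K> tail; // least recently used

    /** Moves the key to the front, adding it if absent (O(1)). */
    void touch(K key) {
        Node<K> node = index.get(key);
        if (node != null) {
            unlink(node);
        } else {
            node = new Node<>(key);
            index.put(key, node);
        }
        node.next = head;
        if (head != null) head.prev = node;
        head = node;
        if (tail == null) tail = node;
    }

    /** Removes the key if present (O(1)). */
    void removeKey(K key) {
        Node<K> node = index.remove(key);
        if (node != null) unlink(node);
    }

    /** Removes and returns the least recently used key, or null if empty (O(1)). */
    K removeLast() {
        if (tail == null) return null;
        Node<K> last = tail;
        unlink(last);
        index.remove(last.key);
        return last.key;
    }

    private void unlink(Node<K> node) {
        if (node.prev != null) node.prev.next = node.next; else head = node.next;
        if (node.next != null) node.next.prev = node.prev; else tail = node.prev;
        node.prev = null;
        node.next = null;
    }
}

Wiring something like this in place of the Deque<KEY> above would make put, get, and remove all O(1).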

Here is a simple test to show its operations in action.

public class LruCacheTest {

    @Test
    public void test () {
        LruCache<Integer, Integer> cache = new LruCacheNormal<> ( 4 );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 0 ) == 0 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 0 ) == null || die ();
        ok |= cache.getSilent ( 1 ) == null || die ();
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();

        if ( !ok ) die ();

    }
}

The last LRU cache was single-threaded, so please don't wrap it in synchronized anything...

Here is a stab at a concurrent version.

import java.util.Deque;
import java.util.LinkedList;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class ConcurrentLruCache<KEY, VALUE> implements LruCache<KEY,VALUE> {

    private final ReentrantLock lock = new ReentrantLock ();


    private final Map<KEY, VALUE> map = new ConcurrentHashMap<> ();
    private final Deque<KEY> queue = new LinkedList<> ();
    private final int limit;


    public ConcurrentLruCache ( int limit ) {
        this.limit = limit;
    }

    @Override
    public void put ( KEY key, VALUE value ) {
        VALUE oldValue = map.put ( key, value );
        if ( oldValue != null ) {
            removeThenAddKey ( key );
        } else {
            addKey ( key );
        }
        if (map.size () > limit) {
            map.remove ( removeLast() );
        }
    }


    @Override
    public VALUE get ( KEY key ) {
        VALUE value = map.get ( key );
        if ( value != null ) {
            removeThenAddKey ( key ); // only touch the queue for keys that are actually cached
        }
        return value;
    }


    private void addKey(KEY key) {
        lock.lock ();
        try {
            queue.addFirst ( key );
        } finally {
            lock.unlock ();
        }


    }

    private KEY removeLast( ) {
        lock.lock ();
        try {
            final KEY removedKey = queue.removeLast ();
            return removedKey;
        } finally {
            lock.unlock ();
        }
    }

    private void removeThenAddKey(KEY key) {
        lock.lock ();
        try {
            queue.removeFirstOccurrence ( key );
            queue.addFirst ( key );
        } finally {
            lock.unlock ();
        }

    }

    private void removeFirstOccurrence(KEY key) {
        lock.lock ();
        try {
            queue.removeFirstOccurrence ( key );
        } finally {
            lock.unlock ();
        }

    }


    @Override
    public VALUE getSilent ( KEY key ) {
        return map.get ( key );
    }

    @Override
    public void remove ( KEY key ) {
        removeFirstOccurrence ( key );
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    public String toString () {
        return map.toString ();
    }
}

The main differences are the use of the ConcurrentHashMap instead of HashMap, and the use of the Lock (I could have gotten away with synchronized, but...).

I have not tested it under fire, but it seems like a simple LRU cache that might work out in 80% of use cases where you need a simple LRU map.

I welcome feedback, except the "why don't you use library a, b, or c" kind. The reason I don't always use a library is that I don't always want every war file to be 80 MB, and since I write libraries I tend to make them pluggable, with a good-enough solution in place so that someone can plug in another cache provider if they like. :) I never know when someone might need Guava or Ehcache or something else, so I don't want to include them, but if I make caching pluggable, I will not exclude them either.

Reducing dependencies has its own reward. I would love to get some feedback on how to make this even simpler, or faster, or both.

Also, if anyone knows of a ready-to-go one, let me know.

Ok, I know what you are thinking... Why doesn't he just use removeEldestEntry from LinkedHashMap? Well, I should, but... that (with the default constructor, as below) would be a FIFO, not an LRU, and we were trying to implement an LRU.

    Map<KEY, VALUE> map = new LinkedHashMap<KEY, VALUE> () {

        @Override
        protected boolean removeEldestEntry ( Map.Entry<KEY, VALUE> eldest ) {
            return this.size () > limit;
        }
    };

This test fails for the above code...

        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();

So here is a quick and dirty FIFO cache using removeEldestEntry.

import java.util.*;

public class FifoCache<KEY, VALUE> implements LruCache<KEY,VALUE> {

    final int limit;

    Map<KEY, VALUE> map = new LinkedHashMap<KEY, VALUE> () {

        @Override
        protected boolean removeEldestEntry ( Map.Entry<KEY, VALUE> eldest ) {
            return this.size () > limit;
        }
    };


    public FifoCache ( int limit ) {
        this.limit = limit;
    }

    public void put ( KEY key, VALUE value ) {
         map.put ( key, value );


    }


    public VALUE get ( KEY key ) {

        return map.get ( key );
    }


    public VALUE getSilent ( KEY key ) {

        return map.get ( key );
    }

    public void remove ( KEY key ) {
        map.remove ( key );
    }

    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }
}

FIFOs are fast. No searching around. You could put a FIFO in front of an LRU, and that would handle most hot entries quite nicely. A better LRU is going to need that reverse element-to-node feature.

Anyway... now that I wrote some code, let me go through the other answers and see what I missed the first time I scanned them.

Mane answered 17/12, 2013 at 21:23 Comment(0)
N
10

LinkedHashMap is O(1), but requires synchronization. No need to reinvent the wheel there.

Two options for increasing concurrency:

1. Create multiple LinkedHashMaps and hash into them. For example: LinkedHashMap[4], indexes 0, 1, 2, 3. On the key, do key % 4 (or a binary AND with 3) to pick which map to use for a put/get/remove. (A sketch of this option appears at the end of this answer.)

2. You could do an 'almost' LRU by extending ConcurrentHashMap and having a linked-hash-map-like structure in each of the regions inside it. Locking would occur more granularly than in a synchronized LinkedHashMap. On a put or putIfAbsent, only a lock on the head and tail of the list is needed (per region). On a remove or get, the whole region needs to be locked. I'm curious whether atomic linked lists of some sort might help here -- probably so for the head of the list. Maybe for more.

The structure would not keep the total order, but only the order per region. As long as the number of entries is much larger than the number of regions, this is good enough for most caches. Each region will have to have its own entry count, this would be used rather than the global count for the eviction trigger. The default number of regions in a ConcurrentHashMap is 16, which is plenty for most servers today.

  1. would be easier to write and faster under moderate concurrency.

  2. would be more difficult to write but would scale much better at very high concurrency. It would be slower for normal access (just as ConcurrentHashMap is slower than HashMap where there is no concurrency).
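
A minimal sketch of option 1 (the class and parameter names are mine, not from the answer): each stripe is an independent synchronized, access-ordered LinkedHashMap, and a key's hash picks the stripe.

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of option 1: N independent synchronized LRU segments, selected by key hash.
class StripedLruCache<K, V> {
    private final Map<K, V>[] stripes;

    @SuppressWarnings("unchecked")
    StripedLruCache(int stripeCount, int entriesPerStripe) {
        stripes = new Map[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = Collections.synchronizedMap(
                new LinkedHashMap<K, V>(entriesPerStripe + 1, 1.0f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        return size() > entriesPerStripe;
                    }
                });
        }
    }

    private Map<K, V> stripeFor(K key) {
        // mask off the sign bit so the modulo index is never negative
        return stripes[(key.hashCode() & 0x7fffffff) % stripes.length];
    }

    public V get(K key) { return stripeFor(key).get(key); }
    public void put(K key, V value) { stripeFor(key).put(key, value); }
    public V remove(K key) { return stripeFor(key).remove(key); }
}

For example, new StripedLruCache<String, String>(4, 256) gives four stripes of up to 256 entries each. Each stripe tracks recency independently, so the overall eviction order and total size are only approximately LRU-bounded, which is exactly the trade-off described above.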

Nomanomad answered 5/3, 2009 at 3:33 Comment(0)
L
8

There are two open source implementations.

Apache Solr has ConcurrentLRUCache: https://lucene.apache.org/solr/3_6_1/org/apache/solr/util/ConcurrentLRUCache.html

There's an open source project for a ConcurrentLinkedHashMap: http://code.google.com/p/concurrentlinkedhashmap/

Lammond answered 16/7, 2010 at 23:53 Comment(3)
Solr's solution isn't actually LRU, but the ConcurrentLinkedHashMap is interesting. It claims to have been rolled into MapMaker from Guava, but I didn't spot it in the docs. Any idea what's going on with that effort?Cantharides
A simplified version was integrated, but the tests have not been complete so it is not yet public. I had a lot of problems doing a deeper integration, but I do hope to finish it as there are some nice algorithmic properties. The ability to listen to an eviction (capacity, expiration, GC) was added and is based on CLHM's approach (listener queue). I would also like to contribute the idea of "weighted values" as well, since that is useful when caching collections. Unfortunately due to other commitments I have been too swamped to devote the time Guava deserves (and that I promised Kevin/Charles).Shelled
Update: The integration was completed and made public in Guava r08, through the #maximumSize() setting.Shelled
M
7

I would consider using java.util.concurrent.PriorityBlockingQueue, with priority determined by a "numberOfUses" counter in each element. I would be very, very careful to get all my synchronisation correct, as the "numberOfUses" counter implies that the element can't be immutable.

The element object would be a wrapper for the objects in the cache:

class CacheElement {
    private final Object obj;
    private int numberOfUses = 0;

    CacheElement(Object obj) {
        this.obj = obj;
    }

    ... etc.
}
Manageable answered 21/10, 2008 at 11:41 Comment(3)
don't you mean must be immutable?Supervene
note that if you attempt to do the priorityblockingqueue version mentioned by steve mcleod, you should make the element immutable, because modifying the element while in the queue will have no effect; you will need to remove the element and re-add it in order to re-prioritize it.Nittygritty
James below points an error I made. Which I offer as evidence as to just how bleeding difficult it is to write reliable, sturdy caches.Manageable
I
7

Hope this helps.

import java.util.*;

public class Lru {

    public static <K, V> Map<K, V> lruCache(final int maxSize) {
        return new LinkedHashMap<K, V>(maxSize * 4 / 3, 0.75f, true) {

            private static final long serialVersionUID = -3588047435434569014L;

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<Object, Object> lru = Lru.lruCache(2);
        lru.put("1", "1");
        lru.put("2", "2");
        lru.put("3", "3");
        System.out.println(lru);
    }
}
Imaimage answered 28/10, 2014 at 5:1 Comment(2)
Nice example! Could you comment on why the capacity needs to be maxSize*4/3?Lavatory
@Lavatory it's the initial capacity; it can be any [integer] value, whereas 0.75f is the default load factor. Hope this link helps: ashishsharma.me/2011/09/custom-lru-cache-java.htmlImaimage
P
5

An LRU cache can be implemented using a ConcurrentLinkedQueue and a ConcurrentHashMap, which can be used in a multithreaded scenario as well. The head of the queue is the element that has been on the queue the longest time. The tail of the queue is the element that has been on the queue the shortest time. When an element already exists in the map, we remove it from the queue and re-insert it at the tail.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class LRUCache<K,V> {
  private ConcurrentHashMap<K,V> map;
  private ConcurrentLinkedQueue<K> queue;
  private final int size; 

  public LRUCache(int size) {
    this.size = size;
    map = new ConcurrentHashMap<K,V>(size);
    queue = new ConcurrentLinkedQueue<K>();
  }

  public V get(K key) {
    //Recently accessed, hence move it to the tail
    queue.remove(key);
    queue.add(key);
    return map.get(key);
  }

  public void put(K key, V value) {
    //ConcurrentHashMap doesn't allow null key or values
    if(key == null || value == null) throw new NullPointerException();
    if(map.containsKey(key)) {
      queue.remove(key);
    }
    if(queue.size() >= size) {
      K lruKey = queue.poll();
      if(lruKey != null) {
        map.remove(lruKey);
      }
    }
    queue.add(key);
    map.put(key,value);
  }

}
Phaedrus answered 6/4, 2014 at 8:41 Comment(2)
This is not threadsafe. For example you can easily exceed the max LRU size by concurrently calling put.Urceolate
Please correct it. First of all it does not compile on line map.containsKey(key). Secondly in get() you should check whether the key was really removed otherwise map and queue become out of sync and "queue.size() >= size" becomes always true. I will post my version having this fixed as I liked your idea of using these two collections.Goebbels
H
3

Here is my implementation of an LRU. I have used a PriorityQueue, which basically works as FIFO and is not threadsafe. The Comparator is based on the page creation time and orders the pages by their least recently used time.

Pages for consideration : 2, 1, 0, 2, 8, 2, 4

Page added into cache is : 2
Page added into cache is : 1
Page added into cache is : 0
Page: 2 already exists in cache. Last accessed time updated
Page Fault, PAGE: 1, Replaced with PAGE: 8
Page added into cache is : 8
Page: 2 already exists in cache. Last accessed time updated
Page Fault, PAGE: 0, Replaced with PAGE: 4
Page added into cache is : 4

OUTPUT

LRUCache Pages
-------------
PageName: 8, PageCreationTime: 1365957019974
PageName: 2, PageCreationTime: 1365957020074
PageName: 4, PageCreationTime: 1365957020174


import java.util.Comparator;
import java.util.Iterator;
import java.util.PriorityQueue;


public class LRUForCache {
    private PriorityQueue<LRUPage> priorityQueue = new PriorityQueue<LRUPage>(3, new LRUPageComparator());
    public static void main(String[] args) throws InterruptedException {

        System.out.println(" Pages for consideration : 2, 1, 0, 2, 8, 2, 4");
        System.out.println("----------------------------------------------\n");

        LRUForCache cache = new LRUForCache();
        cache.addPageToQueue(new LRUPage("2"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("1"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("0"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("2"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("8"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("2"));
        Thread.sleep(100);
        cache.addPageToQueue(new LRUPage("4"));
        Thread.sleep(100);

        System.out.println("\nLRUCache Pages");
        System.out.println("-------------");
        cache.displayPriorityQueue();
    }


    public synchronized void  addPageToQueue(LRUPage page){
        boolean pageExists = false;
        if(priorityQueue.size() == 3){
            Iterator<LRUPage> iterator = priorityQueue.iterator();

            while(iterator.hasNext()){
                LRUPage next = iterator.next();
                if(next.getPageName().equals(page.getPageName())){
                    /* I wanted to just change the time, so that there would be no need to poll and
                       add again, but the element's ordering does not get updated; ordering happens
                       only at the time of adding to the queue.

                       If somebody finds a way around this, please let me know.
                     */
                    //next.setPageCreationTime(page.getPageCreationTime()); 

                    priorityQueue.remove(next);
                    System.out.println("Page: " + page.getPageName() + " already exisit in cache. Last accessed time updated");
                    pageExists = true;
                    break;
                }
            }
            if(!pageExists){
                // enable it for printing the queue elemnts
                //System.out.println(priorityQueue);
                LRUPage poll = priorityQueue.poll();
                System.out.println("Page Fault, PAGE: " + poll.getPageName()+", Replaced with PAGE: "+page.getPageName());

            }
        }
        if(!pageExists){
            System.out.println("Page added into cache is : " + page.getPageName());
        }
        priorityQueue.add(page);

    }

    public void displayPriorityQueue(){
        Iterator<LRUPage> iterator = priorityQueue.iterator();
        while(iterator.hasNext()){
            LRUPage next = iterator.next();
            System.out.println(next);
        }
    }
}

class LRUPage{
    private String pageName;
    private long pageCreationTime;
    public LRUPage(String pagename){
        this.pageName = pagename;
        this.pageCreationTime = System.currentTimeMillis();
    }

    public String getPageName() {
        return pageName;
    }

    public long getPageCreationTime() {
        return pageCreationTime;
    }

    public void setPageCreationTime(long pageCreationTime) {
        this.pageCreationTime = pageCreationTime;
    }

    @Override
    public boolean equals(Object obj) {
        LRUPage page = (LRUPage)obj; 
        if(pageCreationTime == page.pageCreationTime){
            return true;
        }
        return false;
    }

    @Override
    public int hashCode() {
        return (int) (31 * pageCreationTime);
    }

    @Override
    public String toString() {
        return "PageName: " + pageName +", PageCreationTime: "+pageCreationTime;
    }
}


class LRUPageComparator implements Comparator<LRUPage>{

    @Override
    public int compare(LRUPage o1, LRUPage o2) {
        if(o1.getPageCreationTime() > o2.getPageCreationTime()){
            return 1;
        }
        if(o1.getPageCreationTime() < o2.getPageCreationTime()){
            return -1;
        }
        return 0;
    }
}
Hoodoo answered 14/4, 2013 at 16:36 Comment(0)
K
2

Well, for a cache you will generally be looking up some piece of data via a proxy object (a URL, a String...), so interface-wise you are going to want a map. But to kick things out you want a queue-like structure. Internally I would maintain two data structures: a queue (implemented below as a doubly linked list) and a HashMap. Here's an implementation that should be able to do everything in O(1) time.

Here's a class I whipped up pretty quickly:

import java.util.HashMap;
import java.util.Map;
public class LRUCache<K, V>
{
    int maxSize;
    int currentSize = 0;

    Map<K, ValueHolder<K, V>> map;
    LinkedList<K> queue;

    public LRUCache(int maxSize)
    {
        this.maxSize = maxSize;
        map = new HashMap<K, ValueHolder<K, V>>();
        queue = new LinkedList<K>();
    }

    private void freeSpace()
    {
        K k = queue.remove();
        map.remove(k);
        currentSize--;
    }

    public void put(K key, V val)
    {
        while(currentSize >= maxSize)
        {
            freeSpace();
        }
        if(map.containsKey(key))
        {//key already cached: store the new value, then just heat up that item
            map.get(key).value = val;
            get(key);
            return;
        }
        ListNode<K> ln = queue.add(key);
        ValueHolder<K, V> rv = new ValueHolder<K, V>(val, ln);
        map.put(key, rv);
        currentSize++;
    }

    public V get(K key)
    {
        ValueHolder<K, V> rv = map.get(key);
        if(rv == null) return null;
        queue.remove(rv.queueLocation);
        rv.queueLocation = queue.add(key);//this ensures that each item has only one copy of the key in the queue
        return rv.value;
    }
}

class ListNode<K>
{
    ListNode<K> prev;
    ListNode<K> next;
    K value;
    public ListNode(K v)
    {
        value = v;
        prev = null;
        next = null;
    }
}

class ValueHolder<K,V>
{
    V value;
    ListNode<K> queueLocation;
    public ValueHolder(V value, ListNode<K> ql)
    {
        this.value = value;
        this.queueLocation = ql;
    }
}

class LinkedList<K>
{
    ListNode<K> head = null;
    ListNode<K> tail = null;

    public ListNode<K> add(K v)
    {
        if(head == null)
        {
            assert(tail == null);
            head = tail = new ListNode<K>(v);
        }
        else
        {
            tail.next = new ListNode<K>(v);
            tail.next.prev = tail;
            tail = tail.next;
            if(tail.prev == null)
            {
                tail.prev = head;
                head.next = tail;
            }
        }
        return tail;
    }

    public K remove()
    {
        if(head == null)
            return null;
        K val = head.value;
        if(head.next == null)
        {
            head = null;
            tail = null;
        }
        else
        {
            head = head.next;
            head.prev = null;
        }
        return val;
    }

    public void remove(ListNode<K> ln)
    {
        ListNode<K> prev = ln.prev;
        ListNode<K> next = ln.next;
        if(prev == null)
        {
            head = next;
        }
        else
        {
            prev.next = next;
        }
        if(next == null)
        {
            tail = prev;
        }
        else
        {
            next.prev = prev;
        }       
    }
}

Here's how it works. Keys are stored in a linked list with the oldest keys at the front of the list (new keys go to the back), so when you need to 'eject' something you just pop it off the front of the queue and then use the key to remove the value from the map. When an item gets referenced, you grab the ValueHolder from the map and then use the queueLocation variable to remove the key from its current location in the queue and put it at the back of the queue (it's now the most recently used). Adding things is pretty much the same.

I'm sure there are a ton of errors here, and I haven't implemented any synchronization, but this class will provide O(1) adding to the cache, O(1) removal of old items, and O(1) retrieval of cache items. Even trivial synchronization (just synchronizing every public method) would still have little lock contention due to the short run time. If anyone has any clever synchronization tricks, I would be very interested. Also, I'm sure there are some additional optimizations you could implement using the maxSize variable with respect to the map.

Kayak answered 22/10, 2008 at 2:22 Comment(3)
Thanks for the level of detail, but where does this provide a win over the LinkedHashMap + Collections.synchronizedMap() implementation?Cantharides
Performance, I don't know for sure, but I don't think LinkedHashMap has O(1) insertion (its probably O(log(n))), actually you could add a few methods to complete the map interface in my implementation and then use Collections.synchronizedMap to add concurrency.Kayak
In LinkedList class above in the add method there is a code in else block i.e. if(tail.prev == null) { tail.prev = head; head.next = tail; } When this code will be executed? I ran few dry runs and I think this will never be executed and should be removed.Incursive
K
2

Here is my tested, best-performing concurrent LRU cache implementation, without any synchronized block:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ConcurrentLRUCache<Key, Value> {

private final int maxSize;

private ConcurrentHashMap<Key, Value> map;
private ConcurrentLinkedQueue<Key> queue;

public ConcurrentLRUCache(final int maxSize) {
    this.maxSize = maxSize;
    map = new ConcurrentHashMap<Key, Value>(maxSize);
    queue = new ConcurrentLinkedQueue<Key>();
}

/**
 * @param key - may not be null!
 * @param value - may not be null!
 */
public void put(final Key key, final Value value) {
    if (map.containsKey(key)) {
        queue.remove(key); // remove the key from the FIFO queue
    }

    while (queue.size() >= maxSize) {
        Key oldestKey = queue.poll();
        if (null != oldestKey) {
            map.remove(oldestKey);
        }
    }
    queue.add(key);
    map.put(key, value);
}

/**
 * @param key - may not be null!
 * @return the value associated to the given key or null
 */
public Value get(final Key key) {
    return map.get(key);
}

}

Khorma answered 26/5, 2011 at 11:46 Comment(4)
@zoltan boda.... you have not handled one situation .. what if the same object gets used multiple times? in this case we should not add multiple entries for the same object... instead its key should beToitoiboid
Warning: This is not a LRU cache. In an LRU cache, you throw away the items least recently accessed. This one throws away the items least recently written. It's also a linear scan to perform the queue.remove(key) operation.Eriha
Also ConcurrentLinkedQueue#size() is not a constant time operation.Giamo
Your put method does not look safe - it has a few check-then-act statements that will break with multiple threads.Underfoot
Z
2

This is the LRU cache I use, which encapsulates a LinkedHashMap and handles concurrency with a simple synchronized lock guarding the juicy spots. It "touches" elements as they are used so that they become the "freshest" element again, which makes it actually LRU. I also had the requirement that my elements have a minimum lifespan, which you can also think of as the "maximum idle time" permitted before you're up for eviction.

However, I agree with Hank's conclusion and accepted answer -- if I were starting this again today, I'd check out Guava's CacheBuilder.

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;


public class MaxIdleLRUCache<KK, VV> {

    final static private int IDEAL_MAX_CACHE_ENTRIES = 128;

    public interface DeadElementCallback<KK, VV> {
        public void notify(KK key, VV element);
    }

    private Object lock = new Object();
    private long minAge;
    private HashMap<KK, Item<VV>> cache;


    public MaxIdleLRUCache(long minAgeMilliseconds) {
        this(minAgeMilliseconds, IDEAL_MAX_CACHE_ENTRIES);
    }

    public MaxIdleLRUCache(long minAgeMilliseconds, int idealMaxCacheEntries) {
        this(minAgeMilliseconds, idealMaxCacheEntries, null);
    }

    public MaxIdleLRUCache(long minAgeMilliseconds, int idealMaxCacheEntries, final DeadElementCallback<KK, VV> callback) {
        this.minAge = minAgeMilliseconds;
        this.cache = new LinkedHashMap<KK, Item<VV>>(IDEAL_MAX_CACHE_ENTRIES + 1, .75F, true) {
            private static final long serialVersionUID = 1L;

            // This method is called just after a new entry has been added
            public boolean removeEldestEntry(Map.Entry<KK, Item<VV>> eldest) {
                // let's see if the oldest entry is old enough to be deleted. We don't actually care about the cache size.
                long age = System.currentTimeMillis() - eldest.getValue().birth;
                if (age > MaxIdleLRUCache.this.minAge) {
                    if ( callback != null ) {
                        callback.notify(eldest.getKey(), eldest.getValue().payload);
                    }
                    return true; // remove it
                }
                return false; // don't remove this element
            }
        };

    }

    public void put(KK key, VV value) {
        synchronized ( lock ) {
//          System.out.println("put->"+key+","+value);
            cache.put(key, new Item<VV>(value));
        }
    }

    public VV get(KK key) {
        synchronized ( lock ) {
//          System.out.println("get->"+key);
            Item<VV> item = getItem(key);
            return item == null ? null : item.payload;
        }
    }

    public VV remove(KK key) {
        synchronized ( lock ) {
//          System.out.println("remove->"+key);
            Item<VV> item =  cache.remove(key);
            if ( item != null ) {
                return item.payload;
            } else {
                return null;
            }
        }
    }

    public int size() {
        synchronized ( lock ) {
            return cache.size();
        }
    }

    private Item<VV> getItem(KK key) {
        Item<VV> item = cache.get(key);
        if (item == null) {
            return null;
        }
        item.touch(); // idle the item to reset the timeout threshold
        return item;
    }

    private static class Item<T> {
        long birth;
        T payload;

        Item(T payload) {
            this.birth = System.currentTimeMillis();
            this.payload = payload;
        }

        public void touch() {
            this.birth = System.currentTimeMillis();
        }
    }

}
Zygoma answered 21/2, 2013 at 17:2 Comment(0)
T
2

The best way to achieve this is to use a LinkedHashMap, which maintains the insertion order of its elements. Here is some sample code:

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class Solution {

    Map<Integer, Integer> cache;
    int capacity;

    public Solution(int capacity) {
        this.cache = new LinkedHashMap<Integer, Integer>(capacity);
        this.capacity = capacity;
    }

    // Returns -1 if the key is not present in the cache.
    // Otherwise it moves the key to the most recent position
    // by removing it and re-adding it, and returns its value.
    public int get(int key) {
        if (!cache.containsKey(key))
            return -1;
        int value = cache.get(key);
        cache.remove(key);
        cache.put(key, value);
        return cache.get(key);
    }

    public void set(int key, int value) {
        // If already present, remove it first; we are going to re-add it below.
        if (cache.containsKey(key)) {
            cache.remove(key);
        }
        // If the cache is full, remove the least recently used entry
        // (the first key in iteration order).
        else if (cache.size() == capacity) {
            Iterator<Integer> iterator = cache.keySet().iterator();
            cache.remove(iterator.next());
        }
        cache.put(key, value);
    }
}

Teetotaler answered 3/5, 2020 at 10:55 Comment(0)
P
1

Have a look at ConcurrentSkipListMap. It should give you log(n) time for testing and removing an element if it is already contained in the cache, and constant time for re-adding it.

You'd just need some counter, etc., and a wrapper element to force the LRU ordering and ensure the least recently used stuff is discarded when the cache is full. (A rough sketch follows below.)
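
A rough sketch of that idea (my own illustration, not the answer's code): each key gets a monotonically increasing recency stamp, the skip list maps stamp to key, and eviction polls the smallest stamp. For simplicity this sketch serializes all operations with synchronized, so it only shows the ordering mechanics, not a lock-free design.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Sketch only: a monotonically increasing stamp orders keys by recency inside the skip list.
class SkipListLruSketch<K, V> {
    private final int capacity;
    private long clock = 0;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Long> stampOf = new HashMap<>();
    private final ConcurrentSkipListMap<Long, K> byRecency = new ConcurrentSkipListMap<>();

    SkipListLruSketch(int capacity) { this.capacity = capacity; }

    public synchronized V get(K key) {
        V value = values.get(key);
        if (value != null) touch(key); // re-stamping is O(log n) in the skip list
        return value;
    }

    public synchronized void put(K key, V value) {
        values.put(key, value);
        touch(key);
        while (values.size() > capacity) {
            K eldest = byRecency.pollFirstEntry().getValue(); // smallest stamp = least recently used
            values.remove(eldest);
            stampOf.remove(eldest);
        }
    }

    private void touch(K key) {
        long stamp = ++clock;
        Long old = stampOf.put(key, stamp);
        if (old != null) byRecency.remove(old); // drop the stale position
        byRecency.put(stamp, key);
    }
}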

Photobathic answered 21/10, 2008 at 12:17 Comment(3)
Would ConcurrentSkipListMap provide some ease-of-implementation benefit over ConcurrentHashMap, or is it simply a case of avoiding pathological cases?Cantharides
It would make things simpler, as ConcurrentSkipListMap orders elements, which would allow you to manage what order things were used in. ConcurrentHashMap doesn't do this, so you'd basically have to iterate over the entire cache contents to update an elements's 'last used counter' or whateverPhotobathic
So with the ConcurrentSkipListMap implementation, I would create a new implementation of the Map interface that delegates to ConcurrentSkipListMap and performs some sort of wrapping so that the arbitrary key types are wrapped in a type that is easily sorted based on the last access?Cantharides
K
1

Here is my short implementation, please criticize or improve it!

package util.collection;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Limited size concurrent cache map implementation.<br/>
 * LRU: Least Recently Used.<br/>
 * If you add a new key-value pair to this cache after the maximum size has been exceeded,
 * the oldest key-value pair will be removed before adding.
 */

public class ConcurrentLRUCache<Key, Value> {

private final int maxSize;
private int currentSize = 0;

private ConcurrentHashMap<Key, Value> map;
private ConcurrentLinkedQueue<Key> queue;

public ConcurrentLRUCache(final int maxSize) {
    this.maxSize = maxSize;
    map = new ConcurrentHashMap<Key, Value>(maxSize);
    queue = new ConcurrentLinkedQueue<Key>();
}

private synchronized void freeSpace() {
    Key key = queue.poll();
    if (null != key) {
        map.remove(key);
        currentSize = map.size();
    }
}

public void put(Key key, Value val) {
    if (map.containsKey(key)) { // key already cached: just replace the value (a recursive call here would never terminate)
        map.put(key, val);
        return;
    }
    while (currentSize >= maxSize) {
        freeSpace();
    }
    synchronized(this) {
        queue.add(key);
        map.put(key, val);
        currentSize++;
    }
}

public Value get(Key key) {
    return map.get(key);
}
}
Khorma answered 25/5, 2011 at 13:56 Comment(1)
This is not LRU cache, just is FIFO cache.Bankhead
C
1

Here's my own solution to this problem.

simplelrucache provides threadsafe, very simple, non-distributed LRU caching with TTL support. It provides two implementations:

  • Concurrent based on ConcurrentLinkedHashMap
  • Synchronized based on LinkedHashMap

You can find it here: http://code.google.com/p/simplelrucache/

Coloration answered 21/12, 2011 at 12:18 Comment(0)
M
0

I'm looking for a better LRU cache using Java code. Is it possible for you to share your Java LRU cache code using LinkedHashMap and Collections#synchronizedMap? Currently I'm using LRUMap implements Map, and the code works fine, but I'm getting an ArrayIndexOutOfBoundsException when load testing with 500 users on the method below. The method moves the most recently used object to the front of the queue.

private void moveToFront(int index) {
        if (listHead != index) {
            int thisNext = nextElement[index];
            int thisPrev = prevElement[index];
            nextElement[thisPrev] = thisNext;
            if (thisNext >= 0) {
                prevElement[thisNext] = thisPrev;
            } else {
                listTail = thisPrev;
            }
            // Make this element the new head: it has no previous element (-1),
            // its next element is the old head, and the old head's previous element is this one.
            prevElement[index] = -1;
            nextElement[index] = listHead;
            prevElement[listHead] = index;
            listHead = index;
        }
    }

get(Object key) and put(Object key, Object value) method calls the above moveToFront method.

Mestas answered 9/8, 2010 at 21:7 Comment(0)
F
0

I wanted to add a comment to the answer given by Hank, but somehow I am not able to - please treat this as a comment.

LinkedHashMap maintains access order as well, based on the parameter passed in its constructor. It keeps a doubly linked list to maintain the order (see LinkedHashMap.Entry).

@Pacerier it is correct that LinkedHashMap keeps the same iteration order if an element is added again, but that is only the case in insertion-order mode.

This is what I found in the Java docs of the LinkedHashMap.Entry object:

    /**
     * This method is invoked by the superclass whenever the value
     * of a pre-existing entry is read by Map.get or modified by Map.set.
     * If the enclosing Map is access-ordered, it moves the entry
     * to the end of the list; otherwise, it does nothing.
     */
    void recordAccess(HashMap<K,V> m) {
        LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>)m;
        if (lm.accessOrder) {
            lm.modCount++;
            remove();
            addBefore(lm.header);
        }
    }

This method takes care of moving a recently accessed element to the end of the list. So, all in all, LinkedHashMap is the best data structure for implementing an LRU cache.

Foretell answered 8/8, 2012 at 6:0 Comment(0)
H
0

Another thought, and even a simple implementation, using Java's LinkedHashMap collection.

LinkedHashMap provides the method removeEldestEntry, which can be overridden in the way shown in the example. By default this method returns false. If it returns true and the size of the structure goes beyond the initial capacity, the eldest (oldest) element will be removed.

We can have a page number and page content; in my case the page number is an integer and the page content is the page number's value kept as a string.

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * @author Deepak Singhvi
 *
 */
public class LRUCacheUsingLinkedHashMap {


     private static int CACHE_SIZE = 3;
     public static void main(String[] args) {
        System.out.println(" Pages for consideration : 2, 1, 0, 2, 8, 2, 4,99");
        System.out.println("----------------------------------------------\n");


        // accessOrder is true, so whenever any page gets changed or accessed,
        // its order will change in the map
        LinkedHashMap<Integer, String> lruCache =
                new LinkedHashMap<Integer, String>(CACHE_SIZE, .75F, true) {

            private static final long serialVersionUID = 1L;

            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                return size() > CACHE_SIZE;
            }
        };

  lruCache.put(2, "2");
  lruCache.put(1, "1");
  lruCache.put(0, "0");
  System.out.println(lruCache + "  , After first 3 pages in cache");
  lruCache.put(2, "2");
  System.out.println(lruCache + "  , Page 2 became the latest page in the cache");
  lruCache.put(8, "8");
  System.out.println(lruCache + "  , Adding page 8, which removes eldest element 2 ");
  lruCache.put(2, "2");
  System.out.println(lruCache+ "  , Page 2 became the latest page in the cache");
  lruCache.put(4, "4");
  System.out.println(lruCache+ "  , Adding page 4, which removes eldest element 1 ");
  lruCache.put(99, "99");
  System.out.println(lruCache + " , Adding page 99, which removes eldest element 8 ");

     }

}

The result of executing the above code is as follows:

 Pages for consideration : 2, 1, 0, 2, 8, 2, 4,99
--------------------------------------------------
    {2=2, 1=1, 0=0}  , After first 3 pages in cache
    {2=2, 1=1, 0=0}  , Page 2 became the latest page in the cache
    {1=1, 0=0, 8=8}  , Adding page 8, which removes eldest element 2 
    {0=0, 8=8, 2=2}  , Page 2 became the latest page in the cache
    {8=8, 2=2, 4=4}  , Adding page 4, which removes eldest element 1 
    {2=2, 4=4, 99=99} , Adding page 99, which removes eldest element 8 
Hoodoo answered 25/4, 2013 at 13:2 Comment(2)
That is a FIFO. He asked for an LRU.Mane
It fails this test... cache.get ( 2 ); cache.get ( 3 ); cache.put ( 6, 6 ); cache.put ( 7, 7 ); ok |= cache.size () == 4 || die ( "size" + cache.size () ); ok |= cache.getSilent ( 2 ) == 2 || die (); ok |= cache.getSilent ( 3 ) == 3 || die (); ok |= cache.getSilent ( 4 ) == null || die (); ok |= cache.getSilent ( 5 ) == null || die ();Mane
G
0

Following the @sanjanab concept (but after fixes), I made my version of the LRUCache, also providing a Consumer that allows you to do something with the removed items if needed.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

public class LRUCache<K, V> {

    private ConcurrentHashMap<K, V> map;
    private final Consumer<V> onRemove;
    private ConcurrentLinkedQueue<K> queue;
    private final int size;

    public LRUCache(int size, Consumer<V> onRemove) {
        this.size = size;
        this.onRemove = onRemove;
        this.map = new ConcurrentHashMap<>(size);
        this.queue = new ConcurrentLinkedQueue<>();
    }

    public V get(K key) {
        //Recently accessed, hence move it to the tail
        if (queue.remove(key)) {
            queue.add(key);
            return map.get(key);
        }
        return null;
    }

    public void put(K key, V value) {
        //ConcurrentHashMap doesn't allow null key or values
        if (key == null || value == null) throw new IllegalArgumentException("key and value cannot be null!");

        V existing = map.get(key);
        if (existing != null) {
            queue.remove(key);
            onRemove.accept(existing);
        }

        if (map.size() >= size) {
            K lruKey = queue.poll();
            if (lruKey != null) {
                V removed = map.remove(lruKey);
                onRemove.accept(removed);
            }
        }
        queue.add(key);
        map.put(key, value);
    }
}
Goebbels answered 30/1, 2019 at 17:50 Comment(0)
L
-1

Android offers an implementation of an LRU Cache. The code is clean and straightforward.
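
For reference, a small usage sketch of that class (android.util.LruCache; the byte-array values and the 4 MB budget are just illustrative assumptions):

import android.util.LruCache;

class ThumbnailCache {
    // Size the cache in bytes rather than in number of entries.
    private final LruCache<String, byte[]> cache = new LruCache<String, byte[]>(4 * 1024 * 1024) {
        @Override
        protected int sizeOf(String key, byte[] value) {
            return value.length;
        }
    };

    byte[] get(String key) { return cache.get(key); }

    void put(String key, byte[] value) { cache.put(key, value); }
}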

Luigiluigino answered 18/9, 2014 at 9:7 Comment(0)
