Cache evict on one of multiple keys

In my application I have multiple cacheable methods with multiple keys:

@Cacheable(cacheNames = "valueCodes", key = "{#value, #fieldId, #projectId}")
@Cacheable(cacheNames = "fieldNames", key = "{#field, #value, #projectId}")
@Cacheable(cacheNames = "qi", key = "{#langCode, #question, #projectId}")
@Cacheable(cacheNames = "fieldCodes", key = "{#name, #projectId}")

Now I want a @CacheEvict method which clears all the caches where only the #projectId key, which is a UUID, matches:

@CacheEvict(value = {"valueCodes", "fieldNames", "qi", "fieldCodes"}, key = "#projectId")

I've read in this article that this is not possible and that

Only the evict annotation's key regex matching more than one element in each of the cacheNames

I'm not really sure what they mean by that, but I guess it has something to do with using regex in SpEL.

So I started thinking about concatenating my keys into one key:

@Cacheable(cacheNames = "cacheName", key = "#projectId.toString().concat(#otherKey)")

and using regex to match all keys with the projectId followed by a wildcard. But I couldn't really find a way to do this.

Is what I'm trying to accomplish possible? If so, how do I do this?

Dufour answered 6/12, 2016 at 10:4 Comment(0)

Instead of using the annotations to find a key by a partial key, I've created a bean that manages the keys for me:

  1. I removed all the @CacheEvict annotations.
  2. Created a new service that'll manage the eviction for all of our caches

    import java.util.List;

    public interface CacheEvictionService {
        /**
         * Adds the provided key to a global list of keys that we'll need later for eviction
         *
         * @param key the cached key for any entry
         */
        void addKeyToList(String key);
    
        /**
         * Find keys that contain the partial key
         *
         * @param partialKey the cached partial key for an entry
         * @return List of matching keys
         */
        List<String> findKeyByPartialKey(String partialKey);
    
        /**
         * Evicts the cache and key for an entry matching the provided key
         *
         * @param key the key of the entry you want to evict
         */
        void evict(String key);
    
    }
    
    import java.util.ArrayList;
    import java.util.LinkedHashSet;
    import java.util.List;

    import org.springframework.cache.annotation.CacheEvict;
    import org.springframework.stereotype.Service;

    @Service
    public class CacheEvictionServiceImpl implements CacheEvictionService {
        private final LinkedHashSet<String> cachedKeys = new LinkedHashSet<>();
    
        @Override
        public void addKeyToList(String key) {
            this.cachedKeys.add(key);
        }
    
        @Override
        public List<String> findKeyByPartialKey(String partialKey) {
            List<String> foundKeys = new ArrayList<>();
            for (String cachedKey : this.cachedKeys) {
                if (cachedKey.contains(partialKey)) {
                    foundKeys.add(cachedKey);
                }
            }
            return foundKeys;
        }
    
        @Override
        @CacheEvict(value = {"valueCodes", "fieldCodes", "qi", "fieldNames", "fieldsByType"}, key = "#key")
        public void evict(String key) {
            this.cachedKeys.remove(key);
        }
    }
    
  3. Instead of using multiple keys, concatenate the different keys into a single string

    @Cacheable(cacheNames = "valueCodes", key = "#value.concat(#fieldId).concat(#projectId)")
    
  4. Send the key to the service every time something is cached

    cacheEvictionService.addKeyToList(StringUtils.join(value, fieldId, projectId));
    
  5. Loop over every existing key that contains the project id (or any other partial key) and evict it; a combined sketch of steps 3 to 5 follows after this list

    for (String cachedKey : cacheEvictionService.findKeyByPartialKey(projectId)) {
        cacheEvictionService.evict(cachedKey);
    }
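
Putting steps 3 to 5 together, here is a minimal sketch of how a cached lookup could register its key and how a whole project could then be evicted. The ValueCodeService class, its loader method and the ':' delimiter are illustrative, not part of the original answer; a delimiter also reduces the chance of two different key combinations concatenating to the same string.

    import java.util.UUID;

    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Service;

    @Service
    public class ValueCodeService {

        private final CacheEvictionService cacheEvictionService;

        public ValueCodeService(CacheEvictionService cacheEvictionService) {
            this.cacheEvictionService = cacheEvictionService;
        }

        // Steps 3 and 4: cache on a single concatenated key and record that key,
        // so it can later be found again by its project id part.
        @Cacheable(cacheNames = "valueCodes",
                   key = "#value + ':' + #fieldId + ':' + #projectId")
        public String findValueCode(String value, String fieldId, UUID projectId) {
            cacheEvictionService.addKeyToList(value + ":" + fieldId + ":" + projectId);
            return loadValueCode(value, fieldId, projectId);
        }

        // Step 5: evict every cached entry whose key contains the project id.
        public void evictProject(UUID projectId) {
            for (String cachedKey : cacheEvictionService.findKeyByPartialKey(projectId.toString())) {
                cacheEvictionService.evict(cachedKey);
            }
        }

        // Hypothetical loader standing in for the real lookup.
        private String loadValueCode(String value, String fieldId, UUID projectId) {
            return "...";
        }
    }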
    
Dufour answered 9/8, 2017 at 9:54 Comment(3)
This is super dangerous. For one, LinkedHashSet is not a concurrent collection, so you will need to synchronize on it. Second, even if you do, there are probably weird edge cases where the concatenation accidentally produces duplicate keys.Monohydroxy
This will not evict entries that are cached on other application nodes.Fremd
Think about what happens when your cache grows to 10k or 100k entries: every partial-key lookup has to scan the whole key set, and this solution can end up slower than not caching at all.Keto

Is what I'm trying to accomplish possible? If so, how do I do this?

What you would like to do is not possible.

In general a cache acts like a hash table: you can only operate on a unique key. Selecting everything that belongs to a project id would require an index and a query mechanism in the cache. Some caches have that, but not all, and there is no common standard for how this is done.

Double-check whether it really makes sense to cache all the bits and pieces that belong to a project separately. If everything needs to be evicted together, maybe it is used together all the time. Alternatively, for example, keep a ConcurrentHashMap as the value in the cache, holding the various components that belong to a project.
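
As a rough illustration of that last idea (a minimal sketch, not taken from the answer; the ProjectCacheService name and its methods are made up): cache one map per project, keyed by the project id, so that evicting the project id drops everything belonging to that project at once.

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    import org.springframework.cache.annotation.CacheEvict;
    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Service;

    @Service
    public class ProjectCacheService {

        // One cache entry per project; the map inside holds the individual pieces
        // (value codes, field names, ...) under whatever sub-key you need.
        @Cacheable(cacheNames = "projects", key = "#projectId", sync = true)
        public Map<String, Object> projectData(UUID projectId) {
            return new ConcurrentHashMap<>();
        }

        // Evicting by project id now removes all of the project's pieces in one go.
        @CacheEvict(cacheNames = "projects", key = "#projectId")
        public void evictProject(UUID projectId) {
        }
    }

Callers would then use projectData(projectId).computeIfAbsent(...) for the individual pieces instead of giving each piece its own cache entry.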

For more on this, see the question: What is the better option for a multi-level, in-process cache?

It probably makes sense to drop the annotations and use a cache directly; the options with annotations are limited.
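
For example, a cache that exposes its key set, such as Caffeine, lets you evict by project id with a simple scan over the keys. This is a minimal sketch assuming a hand-rolled composite key and a hypothetical loader, not code from the answer:

    import java.util.UUID;

    import com.github.benmanes.caffeine.cache.Cache;
    import com.github.benmanes.caffeine.cache.Caffeine;

    public class ValueCodeCache {

        // Composite key instead of a concatenated string.
        record ValueCodeKey(String value, String fieldId, UUID projectId) {}

        private final Cache<ValueCodeKey, String> cache = Caffeine.newBuilder()
                .maximumSize(10_000)
                .build();

        public String get(String value, String fieldId, UUID projectId) {
            return cache.get(new ValueCodeKey(value, fieldId, projectId), this::load);
        }

        // Remove every entry whose key belongs to the given project.
        public void evictProject(UUID projectId) {
            cache.asMap().keySet().removeIf(key -> key.projectId().equals(projectId));
        }

        // Hypothetical loader standing in for the real lookup.
        private String load(ValueCodeKey key) {
            return "...";
        }
    }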

Peugia answered 6/12, 2016 at 16:42 Comment(1)
I've found a simple way to do this, but it requires a little more code on my end. In case you are interested, take a look at my solution.Dufour