I'd like to use 2 caches -- the default in-memory one and a memcache one, though abstractly it shouldn't matter (I think) which two.
The default in-memory cache is where I want to keep small, rarely changing data, and it's the one I've been using to date. I keep a bunch of 'domain data' type stuff from the database in there, along with some small data from external sources that I refresh every 15 minutes to an hour.
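To make that first set concrete, here's roughly the pattern -- just a sketch, since in reality I'm using the framework's default in-memory cache rather than a hand-rolled class, and the key and `load_from_db` callable are made up:

```python
import time

class TTLCache:
    """Minimal stand-in for the default in-memory cache, with per-entry expiry."""

    def __init__(self):
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired; force a reload
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (time.monotonic() + ttl_seconds, value)


domain_cache = TTLCache()

def get_domain_data(load_from_db):
    # Small, rarely changing data: reloading every 15-60 minutes is plenty.
    data = domain_cache.get("domain-data")
    if data is None:
        data = load_from_db()  # hits the database
        domain_cache.set("domain-data", data, ttl_seconds=30 * 60)
    return data
```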
I recently added memcache because I'm now serving up some larger assets. It's sort of complex how I got into this, but these assets are larger (on the order of kilobytes), relatively small in quantity (hundreds), and highly cacheable -- they do change, but even a refresh once per hour is probably more often than they need. This set might grow, but it's shared across all hosts, and refreshes are expensive.
The first set of data has been using the default memory cache for a while now, and has been well-behaved. Memcache is perfect for the second set of data.
I've tuned memcache, and it's working great for the second set of data. The problem is that my existing code was written 'thinking' the cache was in local memory, so I'm now making several round trips to memcache per request, which is increasing my latency.
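This is the shape of the problem -- the handler, the key format, and `rebuild_asset` are made up, and `client` stands in for a memcache client (e.g. pymemcache's `Client`) with the usual get/set interface:

```python
def handle_request(client, asset_ids, rebuild_asset):
    # Written back when the cache was an in-process dictionary, so each
    # lookup was effectively free. Against memcache, every get() below is a
    # separate network round trip, so request latency now scales with the
    # number of lookups.
    assets = []
    for asset_id in asset_ids:
        key = f"asset:{asset_id}"
        asset = client.get(key)                     # one round trip per key
        if asset is None:
            asset = rebuild_asset(asset_id)         # the expensive refresh
            client.set(key, asset, expire=60 * 60)  # keep it for an hour
        assets.append(asset)
    return assets
```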
So, I want to use 2 caches. Thoughts?
(Note: memcache is running on different machine(s) than my server. Even if I ran it locally, I have a fleet of hosts, so it wouldn't be local to all of them. Also, I want to avoid needing to just get bigger machines. I could probably solve this by adding memory and using only the in-memory cache (the data really isn't that big), but that doesn't solve the problem as I scale, so it would just be kicking the can down the road.)
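To be explicit about the split I'm picturing, here's a sketch of which data I'd route where -- the names and TTLs are placeholders, the point is just the routing:

```python
from dataclasses import dataclass

@dataclass
class CachePlan:
    """Where a kind of data should live, and how long it can sit there."""
    where: str          # "local" = per-host in-memory, "memcache" = shared cluster
    ttl_seconds: int

CACHE_PLAN = {
    # Small, rarely changing, cheap enough to reload on every host: in-process.
    "domain_data":    CachePlan(where="local", ttl_seconds=60 * 60),
    "external_feeds": CachePlan(where="local", ttl_seconds=15 * 60),
    # Larger (~KB each), hundreds of entries, expensive to refresh, shared by
    # the whole fleet: memcache, so every host sees the same copy.
    "large_assets":   CachePlan(where="memcache", ttl_seconds=60 * 60),
}
```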