This thread is a bit old but I hit this recently and I thought I'd leave this answer hoping it helps.
With async, there are a few things to keep in mind:
- An "uber lock" (a single lock shared across all keys) is slow, as it blocks factory operations on other keys while a factory for one key is running.
- A "lock per key" (a SemaphoreSlim per key) leaves you with two options: a. SemaphoreSlim is disposable, so disposing it after use can race with other callers still holding it. b. Live with never disposing them. (A rough sketch of this approach follows below.)
I chose to solve it using a pool of locks. You don't need a lock per key, just enough locks for the maximum number of threads that can be active at once, and I assign the same lock to a key through hashing. The pool size is a function of Environment.ProcessorCount. The valueFactory is executed only once per key. Since multiple keys map to one lock (and a key always maps to the same lock), operations on keys with the same hash get synchronized with each other. So this loses some parallelism, and that compromise may not work for all cases; I'm OK with it. This is the approach that LazyCache and FusionCache (as one of its many techniques) use, among other things. So I'd use one of them, but it's good to know the trick as it's pretty nifty.
private readonly SemaphoreSlimPool _lockPool = new SemaphoreSlimPool(1, 1);

private async Task<TValue> GetAsync(object key, Func<ICacheEntry, Task<TValue>> valueFactory)
{
    if (_cache.TryGetValue(key, out TValue value))
    {
        return value;
    }

    // key-specific lock so as to not block operations on other keys
    var lockForKey = _lockPool[key];
    await lockForKey.WaitAsync().ConfigureAwait(false);
    try
    {
        // double-check after acquiring the lock
        if (_cache.TryGetValue(key, out value))
        {
            return value;
        }

        value = await _cache.GetOrCreateAsync(key, valueFactory).ConfigureAwait(false);
        return value;
    }
    finally
    {
        lockForKey.Release();
    }
}
// Dispose SemaphoreSlimPool
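To make that disposal note concrete, here's a minimal sketch of how the fields and disposal could be wired up. The class name LockPoolCache and the way the cache is constructed are assumptions for illustration, not part of the original:
public class LockPoolCache<TValue> : IDisposable
{
    private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());
    private readonly SemaphoreSlimPool _lockPool = new SemaphoreSlimPool(1, 1);

    // ... GetAsync from above goes here ...

    public void Dispose()
    {
        // Dispose the pool (and the cache it guards) once no caller
        // can still be waiting on the pooled semaphores.
        _lockPool.Dispose();
        _cache.Dispose();
    }
}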
And here's the SemaphoreSlimPool impl (source, nuget).
/// <summary>
/// Provides a pool of SemaphoreSlim objects for keyed usage.
/// </summary>
public class SemaphoreSlimPool : IDisposable
{
    /// <summary>
    /// Pool of SemaphoreSlim objects.
    /// </summary>
    private readonly SemaphoreSlim[] pool;

    /// <summary>
    /// Size of the pool.
    /// <para></para>
    /// Environment.ProcessorCount is not always correct so use more slots as buffer,
    /// with a minimum of 32 slots.
    /// </summary>
    private readonly int poolSize = Math.Max(Environment.ProcessorCount << 3, 32);

    private const int NoMaximum = int.MaxValue;

    /// <summary>
    /// Ctor.
    /// </summary>
    public SemaphoreSlimPool(int initialCount)
        : this(initialCount, NoMaximum)
    { }

    /// <summary>
    /// Ctor.
    /// </summary>
    public SemaphoreSlimPool(int initialCount, int maxCount)
    {
        pool = new SemaphoreSlim[poolSize];
        for (int i = 0; i < poolSize; i++)
        {
            pool[i] = new SemaphoreSlim(initialCount, maxCount);
        }
    }

    /// <inheritdoc cref="Get(object)" />
    public SemaphoreSlim this[object key] => Get(key);

    /// <summary>
    /// Returns a <see cref="SemaphoreSlim"/> from the pool that the <paramref name="key"/> maps to.
    /// </summary>
    /// <exception cref="ArgumentNullException"></exception>
    public SemaphoreSlim Get(object key)
    {
        _ = key ?? throw new ArgumentNullException(nameof(key));

        return pool[GetIndex(key)];
    }

    private uint GetIndex(object key)
    {
        return unchecked((uint)key.GetHashCode()) % (uint)poolSize;
    }

    private bool disposed = false;

    public void Dispose()
    {
        Dispose(true);
    }

    public void Dispose(bool disposing)
    {
        if (!disposed)
        {
            if (disposing)
            {
                if (pool != null)
                {
                    for (int i = 0; i < poolSize; i++)
                    {
                        pool[i].Dispose();
                    }
                }
            }

            disposed = true;
        }
    }
}
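As a quick standalone usage example (the key string and the work inside the lock are made up for illustration):
var pool = new SemaphoreSlimPool(1, 1);

// The same key always maps to the same semaphore in the pool.
var gate = pool["user:42"];
await gate.WaitAsync().ConfigureAwait(false);
try
{
    // expensive, single-flight work for this key goes here
}
finally
{
    gate.Release();
}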
I've thrown quite a few threads at this with a lot of churn due to low TTLs and it's not bombing out. So far it looks good to me, but I'd like to see if anyone can find bugs.
SemaphoreSlim with count 1 has an asynchronous wait: msdn.microsoft.com/en-us/library/hh462805(v=vs.110).aspx – Lamellibranch
MemoryCache is thread safe. Or did I miss something? – Reboant