I've recently asked a number of questions regarding `TVar`, and I still have concerns about livelock. So I thought of this structure:
- Each transaction gets a unique priority (perhaps allocated in order of creation).
- Transactions attempt to take read/write locks on the data they access. Simultaneous reads are fine, but a write lock excludes all other transactions (both reads and writes).
- Say transaction A has higher priority than transaction B. If A holds the lock, B waits; but if B holds the lock and A wants it, B is booted from the lock, A obtains it, and B restarts (as with `TVar`). B keeps its current priority for the retry.
- When a lock is freed and there are transactions waiting, the lock goes to the highest-priority waiter, and the rest continue to wait.
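The booting rule above is essentially wound-wait deadlock avoidance. A minimal sketch of the lock (the names `PLock`, `acquire`, `release` and the smaller-number-is-higher-priority convention are mine, not any library's API) might look like this, using STM only for the lock's own bookkeeping:

```haskell
import Control.Concurrent.STM

-- The lock records the current holder's priority and a "booted" flag.
-- A higher-priority transaction sets the holder's flag and takes over;
-- a lower-priority one blocks (STM retry) until the lock is released.
data PLock = PLock (TVar (Maybe (Int, TVar Bool)))

newPLock :: IO PLock
newPLock = PLock <$> newTVarIO Nothing

-- Smaller number = higher priority (an assumed convention). Returns
-- the flag the new holder should poll; a booted holder sees its own
-- flag set, aborts, and retries with its original priority.
acquire :: PLock -> Int -> STM (TVar Bool)
acquire (PLock h) p = do
  st <- readTVar h
  case st of
    Nothing -> grab
    Just (hp, booted)
      | p < hp    -> writeTVar booted True >> grab  -- boot lower-priority holder
      | otherwise -> retry                          -- wait for release
  where
    grab = do
      booted <- newTVar False
      writeTVar h (Just (p, booted))
      return booted

release :: PLock -> STM ()
release (PLock h) = writeTVar h Nothing
```

A real implementation would also need the booted transaction to actually roll back its writes; this sketch only captures the lock-handover rule.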
I believe this system prevents deadlock, and also prevents starvation (unlike `TVar`). I was wondering if anyone has implemented such a system, as it seems fairly obvious and I don't want to reinvent the wheel.
Of course, such an approach could easily be extended to allow the user to specify priorities: priority could be the pair `(user_supplied_prio, auto_increment)`, with `user_supplied_prio` taking precedence and equal-priority tasks resolving in FIFO order.
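Conveniently, the derived `Ord` instance for Haskell pairs is already lexicographic, so the pair gives exactly this ordering for free. A tiny sketch (the type alias and the smaller-is-higher convention are my own illustration):

```haskell
-- (user_supplied_prio, auto_increment): user priority compares first;
-- ties fall through to the creation counter, i.e. FIFO order.
type Priority = (Int, Int)

-- Assumed convention: the smaller pair wins.
higherPriority :: Priority -> Priority -> Bool
higherPriority a b = a < b
```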
Comment/Solution:
Actually, when I think about it, what I describe already exists in Haskell: simply wrap all the data in one `IORef` and only use `atomicModifyIORef`. The `atomicModifyIORef` ensures transactions are applied in sequence. One might think this makes the data structure sequential (i.e. effectively limited to one thread), but it is actually parallel thanks to laziness.
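A minimal sketch of this scheme (the `runTxn` name is mine): every "transaction" is a pure function on the whole state, applied through one shared `IORef`, so updates are serialized in a single global order.

```haskell
import Data.IORef

-- All shared data lives behind one IORef; a transaction is a pure
-- state-to-state function applied atomically.
runTxn :: IORef s -> (s -> s) -> IO ()
runTxn ref txn = atomicModifyIORef ref (\s -> (txn s, ()))

demo :: IO Int
demo = do
  st <- newIORef 0
  runTxn st (+ 1)   -- transactions are applied in sequence...
  runTxn st (* 10)  -- ...so the result is (0 + 1) * 10
  readIORef st
```

Note that `atomicModifyIORef` only installs a thunk for the new state; the real work happens later, when someone forces it, which is where the parallelism described below comes from.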
To explain this, consider an expensive function `f`. We are going to apply it to the value stored under the key "foo" in a `Data.Map`. This replaces `(foo -> x)` with `(foo -> future (f x))`. This thread carries on working out what `(f x)` actually is, but in the meantime we can apply `g` to "bar". Since applying `g` to "bar" does not need the result for "foo", we can work out both at the same time.
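The laziness argument above can be sketched concretely (`expensiveF` and `g` are my stand-in functions): adjusting "foo" stores the unevaluated thunk `(f x)` in the map, and reading "bar" never forces it, so another thread could be evaluating `expensiveF` in parallel.

```haskell
import Data.IORef
import qualified Data.Map as Map

expensiveF, g :: Int -> Int
expensiveF x = x + 1  -- stand-in; imagine this is costly
g x = x * 2

demo :: IO (Maybe Int)
demo = do
  ref <- newIORef (Map.fromList [("foo", 0), ("bar", 0)])
  -- Replaces (foo -> 0) with (foo -> thunk of expensiveF 0).
  atomicModifyIORef ref (\m -> (Map.adjust expensiveF "foo" m, ()))
  atomicModifyIORef ref (\m -> (Map.adjust g "bar" m, ()))
  -- Forcing "bar" evaluates (g 0) only; (expensiveF 0) stays a thunk.
  Map.lookup "bar" <$> readIORef ref
```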
No deadlocks, no starvation: eventually all transactions will be processed, roughly in the order they are received.
What about `MVar`s? Threads blocked on an `MVar` are guaranteed to be queued fairly, in FIFO order, so if you only want to prioritize based on creation time, that seems the logical choice. Can you give a simple use case that would illustrate the problem you're looking to solve with your structure? – Aldridge