The LMAX Disruptor architecture is generally implemented using the following approach:
As in this example, the Replicator is responsible for replicating the input events/commands to the slave nodes. Replicating across a set of nodes requires a consensus algorithm if we want the system to remain available in the presence of network failures, master failures, and slave failures.
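For reference, the standard wiring looks roughly like the sketch below, using the Disruptor DSL: the journaller and replicator consume events in parallel, and the business-logic handler only processes an event after both have seen it. The handler and event class names here are placeholders for illustration, not part of the Disruptor API.

```java
import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.dsl.Disruptor;

import java.util.concurrent.Executors;

public class LmaxPipelineSketch {

    // The input event/command carried through the ring buffer.
    static class CommandEvent {
        byte[] payload;
    }

    // Runs in parallel with the replicator; writes the event to durable storage.
    static class Journaller implements EventHandler<CommandEvent> {
        @Override
        public void onEvent(CommandEvent event, long sequence, boolean endOfBatch) {
            // append event.payload to the local journal file
        }
    }

    // Sends the event to the slave nodes.
    static class Replicator implements EventHandler<CommandEvent> {
        @Override
        public void onEvent(CommandEvent event, long sequence, boolean endOfBatch) {
            // transmit event.payload to the slaves
        }
    }

    // Applies the command to the in-memory domain model.
    static class BusinessLogic implements EventHandler<CommandEvent> {
        @Override
        public void onEvent(CommandEvent event, long sequence, boolean endOfBatch) {
            // execute the command against in-memory state
        }
    }

    public static void main(String[] args) {
        Disruptor<CommandEvent> disruptor = new Disruptor<>(
                CommandEvent::new, 1024, Executors.defaultThreadFactory());

        // Journalling and replication happen before the business logic sees the event.
        disruptor.handleEventsWith(new Journaller(), new Replicator())
                 .then(new BusinessLogic());

        disruptor.start();
    }
}
```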
I was thinking of applying the RAFT consensus algorithm to this problem. One observation is that "RAFT requires that the input events/commands are stored to disk (durable storage) during replication" (reference: this link).
This observation essentially means that we cannot perform purely in-memory replication. Hence it appears that we might have to combine the functionality of the replicator and the journaller in order to successfully apply RAFT to LMAX.
There are two options to do this:
Option 1: Using the replicated log as the input event queue
- The receiver would read from the network and push the event onto the replicated log instead of the ring buffer
- A separate "reader" can read from the log and publish the events onto the ring buffer.
- The log can be replicated across nodes using RAFT. We would not need the replicator or the journaller, as their functionality is already provided by RAFT's replicated log.
I think a disadvantage of this option is the additional data copy step (receiver to the replicated log instead of directly to the ring buffer).
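A rough sketch of how the "reader" in Option 1 might look, assuming a hypothetical CommittedRaftLog interface exposed by the Raft implementation (not a real library API) that only hands out entries once they are committed, i.e. durably replicated to a majority:

```java
import com.lmax.disruptor.RingBuffer;

// Same event type as in the earlier sketch.
class CommandEvent {
    byte[] payload;
}

// Hypothetical view onto the Raft implementation: an entry only becomes
// visible here after Raft has committed it (replicated to a majority and
// persisted to disk).
interface CommittedRaftLog {
    byte[] awaitNextCommittedEntry() throws InterruptedException;
}

public class RaftLogReader implements Runnable {

    private final CommittedRaftLog raftLog;
    private final RingBuffer<CommandEvent> ringBuffer;

    public RaftLogReader(CommittedRaftLog raftLog, RingBuffer<CommandEvent> ringBuffer) {
        this.raftLog = raftLog;
        this.ringBuffer = ringBuffer;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // Block until the next entry is committed, then publish it onto
                // the ring buffer; the business-logic handler consumes it from there.
                byte[] committed = raftLog.awaitNextCommittedEntry();
                ringBuffer.publishEvent(
                        (event, sequence, payload) -> event.payload = payload,
                        committed);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```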
Option 2: Use the Replicator to push input events/commands to each slave's input log file
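A rough sketch of what that could look like: the Replicator stays an ordinary event handler on the master's ring buffer and forwards each event to the slaves, which append it to their input log before acknowledging. The SlaveConnection interface below is an assumption for illustration; it is not part of the Disruptor or of any Raft library.

```java
import com.lmax.disruptor.EventHandler;

import java.util.List;

// Same event type as in the earlier sketches.
class CommandEvent {
    byte[] payload;
}

// Hypothetical transport to one slave: appendToInputLog() is assumed to return
// only after the slave has durably written the entry to its input log file.
interface SlaveConnection {
    void appendToInputLog(byte[] payload);
}

public class ReplicatorHandler implements EventHandler<CommandEvent> {

    private final List<SlaveConnection> slaves;

    public ReplicatorHandler(List<SlaveConnection> slaves) {
        this.slaves = slaves;
    }

    @Override
    public void onEvent(CommandEvent event, long sequence, boolean endOfBatch) {
        // Forward the input event/command to every slave's input log. To meet
        // RAFT's durability requirement, the master would treat the event as
        // replicated only once a majority of slaves have acknowledged the write.
        for (SlaveConnection slave : slaves) {
            slave.appendToInputLog(event.payload);
        }
    }
}
```

The trade-off, as far as I can see, is that this keeps the original LMAX pipeline intact but means re-implementing the commit rule (majority acknowledgement, leader changes) that a Raft library would otherwise provide in Option 1.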
I was wondering if there is any other solution to the design of the Replicator. What are the different design options that people have employed for replicators? In particular, is there any design that can support in-memory replication?