Microservices - Is event store technology (in event sourcing solutions) shared between all microservices?
As far as my limited experience allows me to understand, one of the core concepts of a "microservice" is that it has its own database, independent from other microservices.

Diving into how to handle distributed transactions in a microservices system, the best strategy seems to be the Event Sourcing pattern, whose core is the Event Store.

Is the event store shared between different microservices? Or are there multiple independent event store databases, one per microservice, and a single common event broker?

If the first option is the solution, using CQRS I can now assume that every microservice's database is intended as the query side, while the shared event store is on the command side. Is that a wrong assumption?

And since we are on the topic: how many retries should I do in case of a concurrent write to a stream using optimistic locking?

A very big thanks in advance for any advice you can give me!

Lesser answered 27/9, 2018 at 20:8 Comment(0)
Is the event store shared between different microservices? Or are there multiple independent event store databases, one per microservice, and a single common event broker?

Every microservice should write to its own Event store. This could mean a separate instance per service, or separate partitions inside a shared instance. This allows the microservices to be scaled independently.

If the first option is the solution, using CQRS I can now assume that every microservice's database is intended as the query side, while the shared event store is on the command side. Is that a wrong assumption?

Kinda. As I wrote above, each microservice should have its own Event store (or a partition inside a shared instance). A microservice should not append events to another microservice's Event store.

Regarding reading events, I think that reading should in general be permitted. Polling the Event store is the simplest (and, in my opinion, the best) solution to propagate changes to other microservices. It has the advantage that the remote microservice polls at the rate it can handle and only for the events it wants. This scales very nicely: you can create as many Event store replicas as needed.
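To illustrate the polling idea, here is a minimal sketch in Python. The `EventStore` and `read_events` API are illustrative, not a real library: the point is that the consumer keeps its own checkpoint, polls at its own pace, and filters for the event types it cares about.

```python
class EventStore:
    """A toy, globally ordered event log (illustrative only)."""
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

    def read_events(self, after_position, limit=100):
        """Return up to `limit` events after the given position."""
        return self.events[after_position:after_position + limit]


class PollingConsumer:
    """Polls at its own pace and remembers a checkpoint, so it
    processes each event once and keeps only the events it wants."""
    def __init__(self, store, interested_in):
        self.store = store
        self.interested_in = interested_in
        self.checkpoint = 0   # position of the last event processed
        self.seen = []

    def poll_once(self):
        batch = self.store.read_events(self.checkpoint)
        for event in batch:
            if event["type"] in self.interested_in:
                self.seen.append(event)
        self.checkpoint += len(batch)


store = EventStore()
store.append({"type": "OrderPlaced", "order_id": 1})
store.append({"type": "InternalRecalculated", "order_id": 1})  # internal, ignored below
store.append({"type": "OrderShipped", "order_id": 1})

consumer = PollingConsumer(store, interested_in={"OrderPlaced", "OrderShipped"})
consumer.poll_once()
print([e["type"] for e in consumer.seen])  # ['OrderPlaced', 'OrderShipped']
```

In a real system `poll_once` would run on a timer and the checkpoint would be persisted, but the shape of the loop stays the same.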

There are some cases in which you would not want to publish every domain event from the Event store. Some say there can exist internal domain events on which other microservices should not depend. In that case you could mark each event as free (or not) for external consumption.

The cleanest solution to propagate changes from a microservice is to have live queries to which other microservices can subscribe. This has the advantage that the projection logic does not leak into other microservices, but also the disadvantage that the emitting microservice must define and implement those queries; you can do this when you notice that other microservices duplicate the projection logic. An example of such a query in an e-commerce application is the total order price: a query like WhatIsTheTotalPriceOfTheOrder that is published every time an item is added to, removed from, or updated in an Order.
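A rough sketch of that live-query idea, with an in-memory bus standing in for whatever transport you actually use (the `Bus`, `Order`, and topic names are all illustrative): the Order service owns the projection logic and republishes the total on every change, so subscribers never compute it themselves.

```python
class Bus:
    """Toy publish/subscribe bus (stand-in for a real transport)."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers.get(topic, []):
            handler(payload)


class Order:
    """The emitting service: it owns the projection and publishes it."""
    def __init__(self, order_id, bus):
        self.order_id = order_id
        self.items = {}
        self.bus = bus

    def _publish_total(self):
        total = sum(self.items.values())
        self.bus.publish("WhatIsTheTotalPriceOfTheOrder",
                         {"order_id": self.order_id, "total": total})

    def add_item(self, sku, price):
        self.items[sku] = price
        self._publish_total()

    def remove_item(self, sku):
        del self.items[sku]
        self._publish_total()


bus = Bus()
totals = {}  # a remote microservice's local view, built from the query
bus.subscribe("WhatIsTheTotalPriceOfTheOrder",
              lambda p: totals.update({p["order_id"]: p["total"]}))

order = Order(42, bus)
order.add_item("book", 20)
order.add_item("pen", 5)
order.remove_item("pen")
print(totals[42])  # 20
```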

And since we are in the topic: how many retries I have to do in case of a concurrent write in a Stream using optimistic locking?

As many as you need, i.e. until the write succeeds. You could set a limit of 99999, just to detect when something is horribly wrong with the retry mechanism. In any case, a concurrent write should be retried only when another write happens at the same time on the same stream (i.e. on one Aggregate instance), not anywhere in the entire Event store.
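The retry loop can be sketched like this (all class and function names are illustrative): the stream carries a version, an append fails only when the expected version no longer matches, and on failure the command reloads the current version and tries again.

```python
class ConcurrencyError(Exception):
    pass


class Stream:
    """One aggregate's event stream with optimistic locking."""
    def __init__(self):
        self.events = []
        self.version = 0

    def append(self, event, expected_version):
        if expected_version != self.version:
            raise ConcurrencyError("stream moved on; reload and retry")
        self.events.append(event)
        self.version += 1


MAX_RETRIES = 99999  # absurdly high on purpose: only trips if the retry loop is broken


def append_with_retry(stream, make_event):
    for attempt in range(MAX_RETRIES):
        expected = stream.version      # reload the current version
        event = make_event()           # re-run the command's logic
        try:
            stream.append(event, expected)
            return attempt + 1         # number of attempts used
        except ConcurrencyError:
            continue                   # someone else wrote first; try again
    raise RuntimeError("retry mechanism is horribly wrong")


stream = Stream()
attempts = append_with_retry(stream, lambda: {"type": "ItemAdded"})
print(attempts, stream.version)  # 1 1
```

Note that the conflict check is scoped to one stream, so writes to other aggregates never force a retry.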

Hyperkinesia answered 28/9, 2018 at 8:20 Comment(5)
Thank you! You really cleared a lot of fuzzy ideas in my head. Lesser
Another thing: as @VoiceOfUnreason explained, the various microservices mustn't rely on the fact that the events are stored on the same Event Store technology. This means that an ideal architecture should have a persistent broker where you publish your message; that message is then picked up by another "agent" and saved in the Event store. That is the only way to achieve atomic publish-and-store, but if another message is picked up and stored in between, this can give an inconsistent state. Ideas? Lesser
@ChristianPaesante This is a general rule: microservices should not assume anything about each other's technology. However, "message-driven" is not a technology. It's not relevant to microservice A how microservice B has generated its events, but it is relevant that it has generated events. Hyperkinesia
My target is to get a good abstraction of the "persistence layer" of my microservice. But the challenge I found is whether to publish to a message broker first and then listen to the broker events to persist them in the Event Store, or instead to write the events directly to the ES and exploit some notification functionality offered by the ES to get notified of changes. With the first option I can't get consistency, while with the second I can't allow for different ES technologies while keeping the microservices unaware of each other... Lesser
@ChristianPaesante Definitely write first to the event store. You can abstract it using interfaces. Hyperkinesia
As a rule: in service architectures, which include microservices, each service tracks its state in a private database.

"Private" here primarily means that no other service is permitted to write or read from it. This could mean that each service has a dedicated database server of its own, or services might share a single appliance but only have access permissions for their own piece.

Expressed another way: services communicate with each other by sharing information via their public APIs, not by writing messages into each other's databases.

For services using event sourcing, each service would have read and write access only to its own streams. If those streams happen to be stored on the same appliance - fine; but the correctness of the system should not depend on different services storing their events on the same appliance.

Mandell answered 27/9, 2018 at 21:16 Comment(0)
TLDR: All of these patterns apply within a single bounded context (service, if you like). Don't distribute domain events outside your bounded context; publish integration events onto an ESB (enterprise service bus) or something similar as the public interface.

Ok so we have three patterns here to briefly cover individually and then together.

  1. Microservices
  2. CQRS
  3. Event Sourcing

Microservices

https://learn.microsoft.com/en-us/azure/architecture/microservices/ Core objective: Isolate and decouple changes in a system to individual services, enabling independent deployment and testing without collateral impact. This is achieved by encapsulating change behind a public API and limiting runtime dependencies between services.

CQRS

https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs Core objective: Isolate and decouple write concerns from read concerns in a single service. This can be achieved in a few ways, but the core idea is that the read model is a projection of the write model optimised for querying.

Event Sourcing

https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing Core objective: Use the business domain rules as your data model. This is achieved by modelling state as an append-only stream of immutable domain events and rebuilding the current aggregate state by replaying the stream from the start.
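The "rebuild current state by replaying the stream" idea fits in a few lines. This is a minimal sketch with made-up event names, folding an append-only list of immutable events into the current aggregate state:

```python
from functools import reduce

# An append-only stream of immutable domain events (names illustrative).
events = [
    {"type": "AccountOpened", "balance": 0},
    {"type": "MoneyDeposited", "amount": 100},
    {"type": "MoneyWithdrawn", "amount": 30},
]


def apply(state, event):
    """Pure transition function: current state + one event -> next state."""
    if event["type"] == "AccountOpened":
        return {"balance": event["balance"]}
    if event["type"] == "MoneyDeposited":
        return {"balance": state["balance"] + event["amount"]}
    if event["type"] == "MoneyWithdrawn":
        return {"balance": state["balance"] - event["amount"]}
    return state  # unknown events are ignored


# Rebuild the aggregate by replaying the stream from the start.
state = reduce(apply, events, None)
print(state["balance"])  # 70
```

Because `apply` is pure and the events are immutable, replaying the same stream always yields the same state.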

All Together

There is a lot of great content here: https://learn.microsoft.com/en-us/previous-versions/msp-n-p/jj554200(v=pandp.10) Each of these patterns has its own complexity, trade-offs and challenges, and while a fun exercise, you should consider whether the costs outweigh the benefits. All of them apply within a single service or bounded context. As soon as you start sharing a data store between services, you open yourself up to issues, because the shared data store can no longer be changed in isolation: it is now a public interface.

Rather, try to publish integration events to a shared bus as the public interface for other services and bounded contexts to consume and use to build projections of another domain context's data.

It's a good idea to publish integration events as idempotent snapshots of the current aggregate state (upsert X, delete X), especially if your bus is not persistent. This allows you to republish integration events from a domain if needed without producing an inconsistent state between consumers.
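A small sketch of why snapshot-style integration events are idempotent (event shapes and names are illustrative): each event carries the full current state ("upsert X") or a deletion ("delete X"), so a consumer's projection ends up identical no matter how many times the log is republished.

```python
def handle(view, event):
    """A consumer's projection: applying the same snapshot twice is a no-op."""
    if event["kind"] == "upsert":
        view[event["id"]] = event["state"]   # full state, not a delta
    elif event["kind"] == "delete":
        view.pop(event["id"], None)          # deleting twice is harmless
    return view


events = [
    {"kind": "upsert", "id": "order-1", "state": {"total": 20}},
    {"kind": "upsert", "id": "order-1", "state": {"total": 25}},  # supersedes the first
    {"kind": "delete", "id": "order-2"},
]

view = {}
for e in events:
    handle(view, e)

# Republishing the whole log produces the identical view.
view2 = {}
for e in events + events:
    handle(view2, e)

print(view == view2, view["order-1"]["total"])  # True 25
```

With delta events ("add 5 to the total") the doubled log would drift; with snapshots, republishing after an outage cannot produce an inconsistent state between consumers.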

Vitrine answered 17/3, 2021 at 16:27 Comment(0)

© 2022 - 2025 — McMap. All rights reserved.