Microservices, Event sourcing & CQRS — Part 3

Vinodhini Chockalingam
5 min readJun 7, 2019


To continue where we left off, let us take the same e-commerce architecture, where we have an Order Service, Payment Service, Stock Service, and Delivery Service. We need to maintain data consistency across all of these services.

One way to achieve consistency is to let each of these services maintain its own local transaction and notify the services that depend on it. In the event of a failure, the failing service notifies its dependents, which then execute compensating transactions to undo the changes, much like Command & Undo in the Command pattern. This is the idea behind the Saga pattern.
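The Command & Undo idea can be shown in miniature: each command knows both how to apply itself and how to reverse itself. This is a hypothetical sketch (the `ChargeCard` command and its fields are invented for illustration), not tied to any framework.

```python
# A command pairs a local transaction (execute) with a
# compensating transaction (undo) that reverses it.
class ChargeCard:
    def __init__(self, account):
        self.account = account

    def execute(self):
        self.account["balance"] -= 100   # the local transaction

    def undo(self):
        self.account["balance"] += 100   # the compensating transaction

acct = {"balance": 500}
cmd = ChargeCard(acct)
cmd.execute()
charged_balance = acct["balance"]   # 400 after the charge
cmd.undo()                          # balance restored to 500
```

A saga chains many such command/undo pairs across services, running the undos when a later step fails.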

A saga is a sequence of local transactions. Each local transaction updates the database and publishes a message or event to trigger the next local transaction in the saga. If a local transaction fails because it violates a business rule then the saga executes a series of compensating transactions that undo the changes that were made by the preceding local transactions.

From microservices.io, on the Saga pattern

There are two ways of achieving this:

  • Event-Based/Choreography: each local transaction publishes domain events that trigger local transactions in other services. Let us walk through an example.
The first service executes a transaction and then publishes an event. This event is listened to by one or more services, which execute local transactions and publish new events of their own. The distributed transaction ends when the last service executes its local transaction and publishes no event, or when the event published is not heard by any of the saga's participants.

If the Stock Service fails, the transaction is rolled back by compensating transactions

In this case, when the product is out of stock, the Order Service should know that it must cancel the order and send a REFUND_EVENT to the Payment Service.

This approach works fine if your transaction involves 2 to 4 steps. However, it can rapidly become confusing as you add extra steps, because it is difficult to track which services listen to which events. It can also introduce cyclic dependencies between services, since they have to subscribe to one another's events.
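The choreography above can be sketched with an in-memory publish/subscribe bus standing in for a real message broker. The event names and service handlers are illustrative; a `log` list records what each "service" did so we can see the whole flow, including the compensating path when stock runs out.

```python
handlers = {}   # event type -> list of subscribed handlers
log = []        # records what each service did, in order

def subscribe(event_type, handler):
    handlers.setdefault(event_type, []).append(handler)

def publish(event_type, order):
    for handler in handlers.get(event_type, []):
        handler(order)

# Order Service: starts the saga, and cancels + asks for a refund on failure
def on_order_created(order):
    log.append("ORDER_CREATED")
    publish("PAYMENT_REQUESTED", order)

def on_stock_rejected(order):
    log.append("ORDER_CANCELLED")          # compensating transaction
    publish("REFUND_EVENT", order)

# Payment Service
def on_payment_requested(order):
    log.append("PAYMENT_COMPLETED")
    publish("STOCK_REQUESTED", order)

def on_refund_event(order):
    log.append("PAYMENT_REFUNDED")         # compensating transaction

# Stock Service
def on_stock_requested(order):
    if order["qty"] > 10:                  # simulate "out of stock"
        publish("STOCK_REJECTED", order)
    else:
        log.append("STOCK_RESERVED")       # saga ends: no further event

subscribe("ORDER_CREATED", on_order_created)
subscribe("STOCK_REJECTED", on_stock_rejected)
subscribe("PAYMENT_REQUESTED", on_payment_requested)
subscribe("REFUND_EVENT", on_refund_event)
subscribe("STOCK_REQUESTED", on_stock_requested)

publish("ORDER_CREATED", {"qty": 20})      # triggers the out-of-stock path
```

Notice that no service calls another directly; each only reacts to events, which is exactly what makes the flow hard to trace as steps are added.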

  • Orchestrator approach: we define a new service that is solely responsible for telling each participant what to do and when.
In the case above, the Order Saga Orchestrator knows the flow needed to execute a "create order" transaction.
If anything fails, it is also responsible for coordinating the rollback by sending commands to each participant to undo its previous operation.
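The orchestrator's job reduces to a coordinator that sends each participant a command in turn and, on failure, sends rollback commands in reverse order. The `Participant` class and service names below are hypothetical stand-ins for real services that would be reached over a message broker.

```python
class Participant:
    """A stand-in for a remote service taking commands from the orchestrator."""
    def __init__(self, name, fail=False):
        self.name = name
        self.fail = fail       # simulate a failing local transaction
        self.done = False
        self.undone = False

    def execute(self):
        if self.fail:
            raise RuntimeError(f"{self.name} failed")
        self.done = True       # local transaction committed

    def compensate(self):
        self.undone = True     # compensating transaction committed

def orchestrate(participants):
    completed = []
    for p in participants:
        try:
            p.execute()            # orchestrator sends the "do" command
            completed.append(p)
        except RuntimeError:
            for q in reversed(completed):
                q.compensate()     # rollback commands, in reverse order
            return False
    return True

order_svc = Participant("OrderService")
payment_svc = Participant("PaymentService")
stock_svc = Participant("StockService", fail=True)   # product out of stock

ok = orchestrate([order_svc, payment_svc, stock_svc])
```

Unlike choreography, the whole flow lives in one place, so adding a step means changing only the orchestrator.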

For the Saga pattern to work:

  • Create a unique ID per transaction: this improves traceability and lets participants correlate messages.
  • Process each event exactly once, i.e. be idempotent: you would not want duplicate orders to be executed.
  • Avoid synchronous communication: enrich each message with everything needed for its operation to be executed. The whole goal is to avoid synchronous calls between the services just to request more data. This lets your services execute their local transactions even when other services are offline.
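The three rules above fit in a few lines: each event carries a unique transaction ID and is enriched with all the data the consumer needs, and the consumer de-duplicates on that ID. Field names here are illustrative, not a prescribed schema.

```python
import uuid

def make_event(order_id, amount, items):
    """Build a self-contained event; the consumer never has to call back."""
    return {
        "transaction_id": str(uuid.uuid4()),  # unique ID per transaction
        "order_id": order_id,
        "amount": amount,                     # enriched: payment data included
        "items": items,                       # enriched: stock data included
    }

processed = set()        # IDs already handled by this consumer
orders_executed = []

def handle(event):
    if event["transaction_id"] in processed:
        return                                # idempotent: drop the duplicate
    processed.add(event["transaction_id"])
    orders_executed.append(event["order_id"])

evt = make_event("o-1", 42.0, ["sku-7"])
handle(evt)
handle(evt)   # duplicate delivery from the broker: ignored
```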

There is another problem here. In order to be reliable, a service must atomically update its database and publish a message/event.

There are two solutions to this:

  • Transaction Log Tailing: treat the transaction log as the source of truth. Have a process examine the database transaction log for changes. For example, this process would see that an order was inserted into the order table and then publish an order-created event.

Drawbacks: it is very database specific. The approach you would use for Oracle is wildly different from the one for MongoDB or DynamoDB. Also, the transaction log records low-level changes to database tables, which sit at a much lower level of abstraction than the high-level business events we actually need. You somehow have to figure out, from the low-level changes, what really happened at the business level, and that can be tricky.
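That "tricky" translation step looks roughly like this: a tailer reads low-level row changes and must infer business events from them. The change-record format below is invented for illustration; real formats (Oracle redo logs, the MongoDB oplog, DynamoDB Streams) all differ, which is exactly the database-specificity drawback described above.

```python
def to_business_event(change):
    """Map a low-level row change to a high-level business event, if any."""
    if change["table"] == "orders" and change["op"] == "INSERT":
        return {"type": "ORDER_CREATED", "order_id": change["row"]["id"]}
    if (change["table"] == "orders" and change["op"] == "UPDATE"
            and change["row"].get("status") == "CANCELLED"):
        return {"type": "ORDER_CANCELLED", "order_id": change["row"]["id"]}
    return None   # low-level noise with no business meaning

# Two raw entries from a (hypothetical) transaction log
log_entries = [
    {"table": "orders", "op": "INSERT", "row": {"id": 1, "status": "NEW"}},
    {"table": "audit",  "op": "INSERT", "row": {"id": 9}},   # ignored
]
events = [e for c in log_entries if (e := to_business_event(c)) is not None]
```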

  • The second approach is to make the application responsible for publishing events, but not directly via the message broker. Instead, as part of a local transaction (which gives you ACID semantics), the order service updates the order table and also inserts an event into an events table. That write is guaranteed to be atomic. This is commonly known as the transactional outbox pattern.

Then an event publisher queries the events table, finds new events, publishes them to a message broker, and marks them as published. Because the application itself writes the events, they are high-level business events, unlike in the previous approach.
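A sketch of the events-table approach, using SQLite for brevity: the order row and the event row are written in one ACID transaction, and a separate publisher polls for unpublished events. Table and column names are illustrative.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
db.execute("CREATE TABLE events "
           "(id INTEGER PRIMARY KEY, type TEXT, published INTEGER)")

# One local transaction: update the order table AND insert the event.
# The `with` block commits both inserts atomically, or neither.
with db:
    db.execute("INSERT INTO orders (status) VALUES ('CREATED')")
    db.execute("INSERT INTO events (type, published) "
               "VALUES ('ORDER_CREATED', 0)")

broker = []   # stands in for the real message broker

def publish_pending():
    """The event publisher: find new events, publish, mark as published."""
    rows = db.execute(
        "SELECT id, type FROM events WHERE published = 0").fetchall()
    for event_id, event_type in rows:
        broker.append(event_type)   # publish to the broker
        db.execute("UPDATE events SET published = 1 WHERE id = ?",
                   (event_id,))
    db.commit()

publish_pending()
publish_pending()   # nothing new: the event is already marked as published
```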

The downside is that the application has to publish events, and has to remember to do so, which is potentially error prone: if you forget to publish an event, you get inconsistencies. Also, if the event publisher publishes an event to the broker but crashes before updating the database, it will republish the same message when it comes back up, resulting in duplicate events. The consumer of the message broker should therefore be made idempotent, perhaps by de-duplicating based on the event ID.

We have seen several ways of achieving eventual consistency. There is an acronym for this too: BASE (Basically Available, Soft state, Eventually consistent), the microservices equivalent of ACID.

In the next post, we will dive into Event Sourcing.
