Microservices, Event sourcing & CQRS — Part 1
We are going to touch on a few concepts related to microservices, event sourcing, and CQRS. But before we get to how they all relate to each other, let's look at the forces that push teams from a monolith to a microservice architecture.
If your business needs to innovate faster, you need to develop more complex, high-quality software faster. The need for microservices arises when:
- You want to change one component without redeploying the entire monolith. Redeploying a monolith means long-running QA cycles and a lot of coordination within the team when multiple components change.
- You need to deploy multiple instances of an application for scalability and availability. Components can be memory intensive, CPU intensive, or IO intensive, and a monolith contains components in each of these categories. Splitting them into microservices allows you to scale each one according to its own needs.
- Ever developed a monolith? Overloaded IDE, ever-increasing application server startup time, a not-so-productive developer.
- In the long run you end up committed to one tech stack in a monolith, since you are afraid to move away (changes are just not that easy).
- Splitting into microservices also improves fault isolation. The entire monolith no longer fails because of a memory leak in one component. Once that memory-intensive component is extracted into its own service, it can be scaled according to its memory needs.
Transition from monolith
That said, it is not easy to transition from a monolith to a microservice architecture. There are various partitioning strategies.
The gist is that components that change for the same reason should be packaged together: the Common Closure Principle.
You cannot do a big-bang transition.
“The only thing a Big Bang rewrite guarantees is a Big Bang!”
So start small. One service at a time. This aligns with a metaphor called the Strangler Application:
They [strangler figs] seed in the upper branches of a tree and gradually work their way down the tree until they root in the soil. Over many years they grow into fantastic and beautiful shapes, meanwhile strangling and killing the tree that was their host.
For this to work, you need glue code that integrates the monolith with the new service you have extracted. This glue code is called an Anti-corruption Layer: it prevents your legacy system from polluting your pristine new code.
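As a rough illustration, here is a minimal sketch in Java of what such a layer can look like. The names (LegacyCustomerRecord, Customer, CustomerAntiCorruptionLayer) are hypothetical and only meant to show the translation between the two models, not a prescribed design.

```java
// Hypothetical legacy representation exposed by the monolith.
class LegacyCustomerRecord {
    String cust_nm;   // "SMITH,JOHN" style legacy formatting
    int status_cd;    // magic numbers: 1 = active, 2 = suspended, ...
}

// Clean domain model used by the newly extracted service.
enum CustomerStatus { ACTIVE, SUSPENDED, UNKNOWN }
record Customer(String firstName, String lastName, CustomerStatus status) {}

// The anti-corruption layer: translates the legacy representation into the
// new domain model so that legacy quirks never leak into the new service.
class CustomerAntiCorruptionLayer {

    Customer toDomain(LegacyCustomerRecord legacy) {
        String[] parts = legacy.cust_nm.split(",", 2);
        String lastName = parts[0].trim();
        String firstName = parts.length > 1 ? parts[1].trim() : "";
        return new Customer(firstName, lastName, mapStatus(legacy.status_cd));
    }

    private CustomerStatus mapStatus(int statusCode) {
        return switch (statusCode) {
            case 1 -> CustomerStatus.ACTIVE;
            case 2 -> CustomerStatus.SUSPENDED;
            default -> CustomerStatus.UNKNOWN;
        };
    }
}
```

The translation logic lives in exactly one place; the rest of the new service only ever sees Customer, never the legacy record.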
A greenfield development is not easy either. There is no code. No design. It requires a lot of upfront design and prototyping to understand what you are partitioning. Also, you will not see the benefit of microservices in the beginning. Allocating a JVM for each of 10 services with just 100 lines of code apiece looks like overkill when you are starting a business. But when your business grows, refactoring a monolith is painful. There is a way out of this problem: designing your system using event sourcing leads to a modular domain model that works well for a monolith and for microservices alike, so you can migrate to microservices when you find the need to. More on that later.
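To give a flavour of that modular domain model, here is a minimal event-sourcing sketch in Java. The Account aggregate, its events, and the method names are illustrative assumptions, not the design this series will settle on; the point is that state is rebuilt by replaying events, and that boundary stays the same whether the aggregate lives inside a monolith or behind its own service.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative domain events (assumed names).
sealed interface AccountEvent permits MoneyDeposited, MoneyWithdrawn {}
record MoneyDeposited(long amount) implements AccountEvent {}
record MoneyWithdrawn(long amount) implements AccountEvent {}

// An event-sourced aggregate: its state is derived purely from its event history.
class Account {
    private long balance = 0;
    private final List<AccountEvent> uncommittedEvents = new ArrayList<>();

    // Rebuild current state by replaying previously stored events.
    static Account replay(List<AccountEvent> history) {
        Account account = new Account();
        history.forEach(account::apply);
        return account;
    }

    // Commands validate business rules, then record an event.
    void deposit(long amount) {
        recordEvent(new MoneyDeposited(amount));
    }

    void withdraw(long amount) {
        if (amount > balance) {
            throw new IllegalStateException("Insufficient funds");
        }
        recordEvent(new MoneyWithdrawn(amount));
    }

    private void recordEvent(AccountEvent event) {
        apply(event);
        uncommittedEvents.add(event); // to be persisted in an event store
    }

    // The only place state changes: by applying events.
    private void apply(AccountEvent event) {
        if (event instanceof MoneyDeposited deposited) {
            balance += deposited.amount();
        } else if (event instanceof MoneyWithdrawn withdrawn) {
            balance -= withdrawn.amount();
        }
    }

    long balance() { return balance; }
    List<AccountEvent> uncommittedEvents() { return List.copyOf(uncommittedEvents); }
}
```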
Enough selling microservices. Everything comes at a price. When you move to microservices:
- Developing a feature that spans multiple services requires careful coordination.
- Each of these microservices can be written in a different language. You need communication patterns so that they can talk to each other, and deployment strategies so that applications written in different languages and framework versions are deployed and scaled independently. That in turn brings in service discovery and service registration.
- A transaction can span multiple services, each with its own database, and those databases can differ from service to service: every service can choose the store that suits its needs (Mongo, Postgres, a key-value store, a graph database, etc.). The sketch after this list shows why that makes transactions awkward.
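To make that last point concrete, here is a deliberately naive sketch in Java; OrderService, PaymentClient, and OrderRepository are hypothetical types, not part of any framework. Each service commits to its own database, so if the second step fails after the first has committed, there is no single transaction that can roll both back.

```java
// Hypothetical repository, backed by the database owned by the order service.
interface OrderRepository {
    void save(String orderId, long amount);   // commits a local transaction
}

// Hypothetical remote call into the payment service, which has its own database.
interface PaymentClient {
    void charge(String orderId, long amount); // may fail or time out
}

class OrderService {
    private final OrderRepository orders;
    private final PaymentClient payments;

    OrderService(OrderRepository orders, PaymentClient payments) {
        this.orders = orders;
        this.payments = payments;
    }

    void placeOrder(String orderId, long amount) {
        // Local transaction #1: commits in the order service's database.
        orders.save(orderId, amount);

        // Remote call that triggers local transaction #2 in another service.
        // If this throws, the order above is already committed, and there is
        // no shared transaction manager spanning both databases to undo it.
        payments.charge(orderId, amount);
    }
}
```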
It is this last point I want to dwell on: transaction management in a distributed world. We will see how that connects to event sourcing and CQRS in the next post.