Migrating Legacy Monoliths to Microservices 

A short discourse on moving legacy monolithic systems towards a modern microservice-based architecture, inspired by Vaughn Vernon's talk “Rethinking Legacy and Monolithic Systems” (source: InfoQ), which is shared below.


Legacy monolithic systems are a bit like the Imperial Star Destroyers in Star Wars: large and complex, slow to maneuver, unable to land and operate on planets, and highly expensive to replace if destroyed. Modern microservices are more akin to the X-wing fighters or the multi-troop transports: each is very good at doing a single job well, available in large numbers to scale as needed, and relatively cheap to replace or change.


Imperial star destroyer – a monolith of functionality and resources (source)


Multi-troop transport – Best at deploying B1 battle droids only (source)

The analogy above may not be perfect, but monoliths are mostly legacy systems that remain in operation because they still bring great value to the business – they may even be the reason the business exists and survives. These systems are mostly found in larger enterprises – the Fortune 500s and the multinational corporations – but have also existed in small and medium businesses over the years. They sit and churn out value, yet become so hard to change or replace that enterprises have been known to bring COBOL developers out of retirement and pay them huge sums to keep the systems updated with business changes.

Microservices are a relatively new approach to designing complex systems, and they also reflect the units of deployment of a system in production. Although the advantages of microservices are well discussed in many books, talks, and real-life case studies, what is not very apparent up front is why and when you should migrate a legacy monolith to a microservices architecture, and the best approach to do so given the context of the business domain.

In my past work experience, I have encountered two types of monoliths:

  1. The first was a huge system that had to be redeployed whenever a line of code changed, but it had well-defined interfaces through which components and sub-modules talked to each other. The software components within this monolith were structured along domain-driven design bounded contexts and communicated via a message bus, which made the system easy to maintain – but it was still a monolith.
  2. The other was essentially a big ball of mud. In fact, the opposite had happened: the system was historically a collection, or product suite, of different applications, all of which had been merged into one code base down the line. Though it had a semblance of a layered architecture, most of the code never followed it, and no business-domain verticals were defined or reflected in the code to aid maintainability.

Sam Newman (author of Building Microservices) describes a bounded context as a good fit for the size of functionality that can be wrapped into a microservice and treated as a unit of deployment. This is important for two main reasons: first, a bounded context scopes business functionality that can be grouped together under a common context; second, that common context is decided by the ubiquitous language. Because the ubiquitous language is essentially the language of the business domain, the design of bounded contexts – and ultimately microservices – is linguistically driven by the language of the business.
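
To make this concrete, here is a minimal sketch in TypeScript. The contexts, types, and fields are invented for illustration (they are not from Newman's book): two bounded contexts each model an “order” in their own ubiquitous language, and neither leaks its internal model, so each is a candidate to become a microservice and a unit of deployment:

```typescript
// Sales context: in its ubiquitous language, an "order" is something
// a customer places and pays for.
namespace Sales {
  export interface Order {
    orderId: string;
    customerId: string;
    lines: { sku: string; quantity: number; unitPrice: number }[];
  }

  export function orderTotal(order: Order): number {
    return order.lines.reduce((sum, l) => sum + l.quantity * l.unitPrice, 0);
  }
}

// Shipping context: the same business transaction surfaces here as a
// "consignment" to be delivered, with none of the pricing concepts above.
namespace Shipping {
  export interface Consignment {
    consignmentId: string;
    destinationAddress: string;
    parcels: { weightKg: number }[];
  }

  export function totalWeight(c: Consignment): number {
    return c.parcels.reduce((sum, p) => sum + p.weightKg, 0);
  }
}
```

Because each context owns its own model and language, either one could be lifted out as a separate service without dragging the other's concepts along.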

Of the two types of monoliths I have worked on above, the big ball of mud, which followed no proper architecture, had to undergo a lot of internal refactoring, as a monolith, before any consideration could be given to breaking it apart. This type of system is very hard to migrate, and such systems are generally overhauled down the line with complete rewrites. The other type, with well-defined bounded contexts and communication interfaces, was more interesting: it already begins to resemble a microservices architecture, though it is not quite there yet. In either case, one very economical approach to migrating the monolith to a microservices architecture is the strangler approach.

When using the strangler pattern, we essentially write any new functionality outside of the monolith, in a separate application. This could be a new feature, an existing feature from the monolith, or even a complete bounded context located within the monolith. The new application is connected to the monolith via an anti-corruption layer, so called because it protects the domain model of the new application or service from the stale or obsolete domain model of the monolith. The anti-corruption layer also keeps the new application updated with the monolith's state by streaming events, and serves as a reverse-migration channel back to the monolith, since data coming in via the new service may need to be ingested by the monolith to serve its other external consumers. The image below shows these principles in action; the ‘glue code’ bridging the two systems is essentially the anti-corruption layer in this case.


Strangler pattern in action (source)


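As an illustration of the anti-corruption layer described above, here is a minimal sketch in TypeScript. All the names here (the legacy record shape, the translator functions) are hypothetical, invented for this example; the point is that translation happens at the boundary, in both directions, so the monolith's model never leaks into the new service:

```typescript
// Shape of a customer event as the legacy monolith emits it
// (hypothetical field names, typical of legacy schemas).
interface LegacyCustomerRecord {
  CUST_NO: string;        // legacy identifier, often padded
  NAME_LINE: string;      // "SURNAME, FIRSTNAME" in a single field
  STATUS_CD: "A" | "I";   // cryptic status codes: active / inactive
}

// Clean domain model used inside the new service.
interface Customer {
  id: string;
  firstName: string;
  lastName: string;
  active: boolean;
}

// The anti-corruption layer: a pure translation at the boundary, applied
// to events streamed out of the monolith.
function toCustomer(record: LegacyCustomerRecord): Customer {
  const [lastName, firstName] = record.NAME_LINE.split(",").map(s => s.trim());
  return {
    id: record.CUST_NO.trim(),
    firstName,
    lastName,
    active: record.STATUS_CD === "A",
  };
}

// Reverse-migration channel: data created in the new service is translated
// back so the monolith can keep serving its other external consumers.
function toLegacyRecord(c: Customer): LegacyCustomerRecord {
  return {
    CUST_NO: c.id,
    NAME_LINE: `${c.lastName}, ${c.firstName}`,
    STATUS_CD: c.active ? "A" : "I",
  };
}
```
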
This is, in a nutshell, how a ‘strangler application’ works to remove (or strangle out) old functionality from the monolith into a new service. Extracting a piece of functionality from the monolith into new external services sounds trivial at first, but there are complexities in building the anti-corruption layer and keeping the new service(s) and the monolith in sync. Further strategies such as Event Interception, Asset Capture, and Content-Based Routing aid in setting up a strangler migration (a sketch of content-based routing follows the list below). The trade-off of this initial complexity is that you:

  • spread the development effort of migrating the monolith into microservices over time
  • do not disrupt existing business operations
  • can economically show incremental business value, compared to a full rewrite
  • gain, in general, a faster migration path than completely rewriting the application

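For instance, content-based routing can be as simple as a thin proxy in front of the monolith. The sketch below uses Node's built-in http module; the hostnames, ports, and strangled path prefixes are assumptions for illustration. Requests for functionality that has already been strangled out go to the new service, and everything else still hits the monolith:

```typescript
import * as http from "http";

// Path prefixes already strangled out of the monolith (hypothetical).
const STRANGLED_PREFIXES = ["/billing", "/invoices"];

const NEW_SERVICE = { host: "billing-service.internal", port: 8080 };
const MONOLITH = { host: "legacy-monolith.internal", port: 80 };

// A thin content-based router: migrated functionality is served by the
// new service; everything else is forwarded to the monolith unchanged.
const proxy = http.createServer((req, res) => {
  const target = STRANGLED_PREFIXES.some(p => req.url?.startsWith(p))
    ? NEW_SERVICE
    : MONOLITH;

  const upstream = http.request(
    {
      host: target.host,
      port: target.port,
      path: req.url,
      method: req.method,
      headers: req.headers,
    },
    upstreamRes => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(upstream);
});

proxy.listen(3000);
```

As more functionality is extracted, prefixes move onto the strangled list, until the monolith behind the proxy can finally be retired.
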
Why, when, and how would you decide to embark on such an endeavor? First, there is a famous old adage: “when you find yourself in a hole, stop digging”. The deeper you dig, the harder it is to climb back out. The same applies to monoliths – the more features and functionality you add to a monolith, the harder it will be to transform and migrate later. Instead, treat any new set of requirements that comes in as the initial strangler application to kick-start a controlled migration into a new service. This may require a lot of management buy-in (especially with in-house enterprise IT departments, as I can attest), but the argument is for a slice of functionality, which is far more economically viable, not a complete rewrite.

If there is no major new feature on the horizon, an existing bounded context within the monolith can be extracted into a new service as the strangler. In this case, order the bounded contexts by size and business value, and select the smallest one with the lowest business value to migrate into the new strangler app first. Done right, this gradual migration will result in the microservices architecture the business requires, providing the maintainability and business agility to support an enterprise transformation that would not have been possible with the monolith in place. Any transformation or change in IT should provide value to the business, and that includes migrating legacy systems into modern microservices. If there is inherent value, but it is not transparent enough to observe because of the approach used to migrate the monolith, then the migration is a failure even before it has started.

Below is the talk by Vaughn Vernon that reminded me of similar work I had done in past tenures, and inspired me to put my thoughts into this blog post:

[Video: Rethinking Legacy and Monolithic Systems – Vaughn Vernon, InfoQ]