From Monoliths to Microservices: An Architectural Strategy

Most people outside of IT just don’t get how difficult it is to manage complex enterprise systems. It is a delicate balancing act, one that rests on understanding how any one change will affect the entire system.
New developers spend months learning the system’s codebase before they can even begin to work on it. Even the most knowledgeable development teams fear making changes or adding new code that might disrupt operation in some unforeseen way, so even the most mundane changes are discussed ad nauseam.
When things go wrong, operations blame development and development blames QA. Project managers blame the budget and everyone else. The business loses confidence in IT and begins to look for outsourcers to replace the internal team.
Unless you’ve been living under a rock, you’ve heard how microservices can turn this scenario on its head, enabling a new, more agile world in which developers and operations teams work hand in hand to deliver small, loosely coupled bundles of software quickly and safely. Instead of a single monolithic system, functionality is carried out by a set of smaller services coordinating their operations.
How do you make it work? You’ve come to the right place. We’ll explain it all. While a one-size-fits-all approach to adopting microservices cannot exist, it is helpful to examine the base principles that have guided successful adoption efforts.
Adopting Microservices
One common approach for teams adopting microservices is to identify existing functionality in the monolithic system that is both non-critical and fairly loosely coupled with the rest of the application. For example, in an e-commerce system, events and promotions are often ideal candidates for a microservices proof-of-concept. Alternatively, more sophisticated teams can simply mandate that all new functionality must be developed as a microservice.
In each of these scenarios, the key challenge is to design and develop the integration between the existing system and the new microservices. When a part of the system is redesigned using microservices, a common practice is to introduce glue code to help it to talk to the new services.
An API gateway can help combine many individual service calls into one coarse-grained service, and in so doing reduce the cost of integrating with the monolithic system.
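As a rough illustration, the sketch below shows a gateway endpoint that fans out to two hypothetical fine-grained services (a catalog service and a promotion service) and merges their responses into a single coarse-grained payload. The URLs, service names and JSON shapes are assumptions for the example, not part of any particular system.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A minimal sketch of the gateway idea: one coarse-grained call that fans out
// to two hypothetical fine-grained services and merges their responses.
public class ProductPageGateway {

    private final HttpClient client = HttpClient.newHttpClient();

    // One gateway call replaces two round trips from the monolith or client.
    public String productPage(String productId) throws Exception {
        String product = get("http://catalog-service/products/" + productId);
        String promos  = get("http://promotion-service/promotions?productId=" + productId);
        // Merge the two fine-grained responses into a single coarse-grained payload.
        return "{ \"product\": " + product + ", \"promotions\": " + promos + " }";
    }

    private String get(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new ProductPageGateway().productPage("42"));
    }
}
```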
The main idea is to slowly replace functionality in the system with discrete microservices while minimizing the changes that must be added to the system itself to support this transition. This is important in order to reduce the cost of maintaining the system and minimize the impact of the migration.
Microservices Architectural Patterns
A number of architectural patterns exist that can be leveraged to build a solid microservices implementation strategy.
In their book “The Art of Scalability,” Martin Abbott and Michael Fisher elaborate on the concept of the “scale cube,” illustrating various ways to think about the use of microservices to more easily scale systems (Figure 1). The microservices pattern maps to the Y-axis of the cube, wherein functional decomposition is used to scale the system. Each service can then be further scaled by cloning (X-axis) or sharding (Z-axis).

Alistair Cockburn introduced the “ports and adapters” pattern, also called hexagonal architecture, in the context of building applications that can be tested in isolation. However, it has been increasingly used for building reusable microservices-based systems, as advocated by James Gardner and Vlad Mettler. A hexagonal architecture is an implementation of a pattern called bounded context, wherein the capabilities related to a specific business domain are insulated from any outside changes or effects.
Examples abound of these principles being put into practice by enterprises migrating to microservices. Click Travel open sourced its Cheddar framework, which captures these ideas in an easy-to-use project template for Java developers building applications for Amazon Web Services. SoundCloud, after a failed attempt at a big-bang refactoring of their application, based their microservices migration on the use of the bounded context pattern to identify cohesive feature sets that did not couple with the rest of the domain.
One challenge faced by teams new to microservices is dealing with distributed transactions spanning multiple independent services. In a monolithic system, this is easy, since state changes are typically persisted to a common data model shared by all parts of the application. This is not the case, however, with microservices.
Having each microservice manage its own state and data introduces architectural and operational complexity when handling distributed transactions. Good design practices, such as domain-driven design, help mitigate some of this complexity by inherently limiting shared state.
Event-oriented patterns such as event sourcing or command query responsibility segregation (CQRS) can help teams ensure data consistency in a distributed microservices environment. With event sourcing and CQRS, the state changes needed to support distributed transactions can be propagated as events (event sourcing) or commands (CQRS). Each microservice that participates in a given transaction can then subscribe to the appropriate event.
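To make the event-propagation idea concrete, here is a minimal in-memory sketch. The event name, the participating services and the toy publish/subscribe bus are illustrative assumptions; a real system would use a message broker such as Kafka or RabbitMQ rather than in-process callbacks.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A minimal, in-memory sketch of propagating state changes as events.
public class EventPropagationSketch {

    // The event that records a state change in the ordering domain.
    record OrderPlaced(String orderId, String productId, double amount) {}

    // A trivial publish/subscribe bus standing in for a real message broker.
    static class EventBus {
        private final List<Consumer<OrderPlaced>> subscribers = new ArrayList<>();
        void subscribe(Consumer<OrderPlaced> handler) { subscribers.add(handler); }
        void publish(OrderPlaced event) { subscribers.forEach(s -> s.accept(event)); }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();

        // Each participating microservice keeps its own state and reacts to the event.
        bus.subscribe(e -> System.out.println("billing-service: charging for " + e.orderId()));
        bus.subscribe(e -> System.out.println("promotion-service: awarding points for " + e.productId()));

        // The order service persists its own change and publishes the event,
        // rather than updating the other services' data directly.
        bus.publish(new OrderPlaced("o-1001", "p-42", 19.99));
    }
}
```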
This pattern can be extended to support compensating operations by the microservice when dealing with eventual consistency. Chris Richardson presented an implementation of this in his talk at hack.summit() 2014 and shared example code via GitHub. Also worth exploring is Fred George’s notion of “streams and rapids,” which uses asynchronous services and a high-speed messaging bus to connect the microservices in an application.

While architectures such as these are promising, it is important to remember that, during the transition from a monolithic system to a collection of microservices, both systems will exist in parallel. To reduce the development and operational costs of the migration, the architectural and integration patterns employed by the microservices must be appropriate to the system’s architecture.
Architectural & Implementation Considerations
Domain Modeling
Domain modeling is at the heart of designing coherent and loosely coupled microservices. The goal is to ensure that each of your application’s microservices is adequately isolated from runtime side effects of, and insulated from changes in the implementation of, the other microservices in the system.
Isolating and insulating microservices also helps ensure their reusability. For example, consider a promotions service that can be extracted from a monolithic e-commerce system. This service could be used by various consuming clients, such as mobile web, iOS or Android apps. In order for this to work predictably, the domain of “promotions,” including its state entities and logic, needs to be insulated from other domains in the system, like “products,” “customers,” “orders,” etc. This means the promotions service must not be polluted with cross-domain logic or entities.
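As a small sketch of what this insulation can look like in code, the hypothetical promotions service below owns its own Promotion entity and refers to products only by an opaque identifier, so nothing from the products or customers domains leaks into its contract. The class and method names are assumptions for illustration only.

```java
import java.util.List;

// A sketch of keeping the "promotions" domain insulated from other domains.
// Other domains (products, customers) are referenced only by identifier, so
// changes to their internal models cannot leak into this service.
public class PromotionService {

    // An entity owned entirely by the promotions domain.
    record Promotion(String code, String productId, int percentOff) {}

    private final List<Promotion> promotions = List.of(
            new Promotion("SUMMER10", "p-42", 10),
            new Promotion("VIP20", "p-7", 20));

    // The contract exposes only promotion data plus opaque product identifiers;
    // it never imports or returns a Product or Customer entity from another domain.
    public List<Promotion> promotionsForProduct(String productId) {
        return promotions.stream()
                .filter(p -> p.productId().equals(productId))
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(new PromotionService().promotionsForProduct("p-42"));
    }
}
```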
Proper domain modeling also helps avoid the pitfall of modeling the system along technological or organizational boundaries, resulting in data services, business logic and presentation logic each implemented as separate services.
Sam Newman discusses these principles in his book “Building Microservices.” Vaughn Vernon focuses on this area even more deeply in “Implementing Domain-Driven Design.”
Service Size
Service size is a widely debated and confusing topic in the microservices community. The overarching goal when determining the right size for a microservice is to not make a monolith out of it.
The “Single Responsibility Principle” is a driving force when considering the right service size in a microservices system. Some practitioners advocate as small a service size as possible, so that each service can be operated and tested independently. Building microservices in the spirit of Unix utilities also leads to small codebases which are easy to maintain and upgrade.
Architects must be particularly careful when modeling large domains, like “products” in an e-commerce system, as these are prone to large variations in their definition and can easily become monoliths in their own right. There could be various types of products, each with its own business logic, and encapsulating all of this in a single service can become overwhelming. The way to approach it is to draw further boundaries inside the product domain and split it into additional services.
Another consideration is replaceability. If the time it takes to replace a particular microservice with a new implementation or technology is too long relative to the cycle time of the project, the service is probably too large and its size needs further rework.
Testing
Let’s look at some operational aspects of progressively transforming the monolithic system into a microservices-based one. Testability is a common issue: during the course of developing the microservices, teams will need to perform integration testing of the services with the monolithic system. The idea, of course, is to ensure that business operations spanning the pre-existing monolithic system and the new microservices do not fail.
One option here is to have the system provide some consumer-driven contracts that can be translated into test cases for the new microservices. This approach helps ensure that the microservice always has access to the expectations of the system in the form of automated tests. The system’s developers would provide a spec containing sample requests and expected microservice responses. This spec is then used to create relevant mocks, and as the basis for an automated test suite that is run before integrating the new microservices with the existing system.
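A minimal, hand-rolled sketch of this idea is shown below: the consuming system supplies a sample request and the response it expects, and the microservice team runs that expectation as an automated check before integration. The endpoint and payload are assumptions; in practice a contract-testing library would generate the mocks and test suites rather than this manual check.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A hand-rolled sketch of a consumer-driven contract: the consuming system's
// team supplies a sample request and the response it expects, and the
// microservice team runs that expectation as an automated check.
public class PromotionContractCheck {

    // The "contract" as provided by the consuming system.
    static final String REQUEST_PATH = "/promotions?productId=p-42";
    static final String EXPECTED_BODY = "[{\"code\":\"SUMMER10\",\"percentOff\":10}]";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080" + REQUEST_PATH)).GET().build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Fail the build if the new microservice no longer meets the consumer's expectation.
        if (response.statusCode() != 200 || !EXPECTED_BODY.equals(response.body())) {
            throw new AssertionError("Contract violated: " + response.body());
        }
        System.out.println("Contract satisfied");
    }
}
```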
Pact, a consumer-driven contract testing library, is a good reference for this approach. It is also useful to create a reusable test environment that can deploy a test copy of the entire system and make it available, on demand, to the microservices teams. This eliminates potential roadblocks for those teams and improves the feedback loop for the project as a whole. A common way of accomplishing this is to containerize the entire system as Docker containers orchestrated through an automation tool like Docker Compose. This deploys a test infrastructure of the system quickly and gives the team the ability to perform integration tests locally.
Service Discovery
A service may need to know about other services when accomplishing a business function. A service discovery system enables this, wherein each service refers to an external registry holding the endpoints of the other services. This can be implemented through environment variables when dealing with a small number of services; etcd, Consul and Apache Zookeeper are examples of more sophisticated systems commonly used for service discovery.
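For small deployments, the lookup can be as simple as the hypothetical sketch below, which resolves a service’s endpoint from an environment variable. The naming convention is an assumption; larger systems would query a registry such as etcd or Consul instead.

```java
import java.util.Map;
import java.util.Optional;

// A minimal sketch of service discovery via environment variables, suitable
// for a small number of services.
public class ServiceLocator {

    private final Map<String, String> env;

    public ServiceLocator(Map<String, String> env) { this.env = env; }

    // Resolves "promotion-service" from PROMOTION_SERVICE_URL, for example.
    public String endpointFor(String serviceName) {
        String variable = serviceName.toUpperCase().replace('-', '_') + "_URL";
        return Optional.ofNullable(env.get(variable))
                .orElseThrow(() -> new IllegalStateException("No endpoint registered for " + serviceName));
    }

    public static void main(String[] args) {
        ServiceLocator locator = new ServiceLocator(System.getenv());
        System.out.println(locator.endpointFor("promotion-service"));
    }
}
```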
Deployment
Each microservice should be self-deployable, either on a runtime container or by embedding a container in itself. For example, a JVM-based microservice could embed a Tomcat container in itself, reducing the need for a standalone web application server.
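As a sketch, a Spring Boot application (one of the frameworks mentioned later in this article) illustrates the idea: the service embeds its own Tomcat server and runs from a single main method, with no standalone application server to install. The endpoint is an illustrative assumption, and the example assumes the spring-boot-starter-web dependency is on the classpath.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A minimal self-deployable JVM microservice: Spring Boot embeds a Tomcat
// server in the application itself, so no standalone web application server
// is needed.
@SpringBootApplication
@RestController
public class PromotionApplication {

    @GetMapping("/promotions")
    public String promotions() {
        return "[{\"code\":\"SUMMER10\",\"percentOff\":10}]";
    }

    public static void main(String[] args) {
        // Starts the embedded container and serves the service on port 8080 by default.
        SpringApplication.run(PromotionApplication.class, args);
    }
}
```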
At any point in time, there could be a number of microservice instances of the same type (i.e., X-axis scaling as per the scale cube) to allow for more reliable handling of requests. Most implementations also include a software load balancer that can double as a service registry, such as Netflix Eureka. This allows for failover and transparent balancing of requests as well.
Build and Release Pipeline
Other common considerations when implementing microservices include having a continuous integration and deployment pipeline. The notable caveat in a microservices-based system is that each microservice should have its own on-demand build and release pipeline. This reduces the cost of building and releasing the application as a whole.
Release practices also need to include the concept of rolling upgrades or blue-green deployment. This means that, at any point in time during a new build and release cycle, there can be concurrent versions of the same microservice running in the production environment. A percentage of the active user load can be routed to the new microservice version to test its operation before slowly phasing out the old version. This helps ensure that a failed change in a microservice does not cripple the system. In case of failure, the active load can be routed back to the old version of the same service.
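The sketch below illustrates the routing idea: a hypothetical router sends a configurable percentage of requests to the new version of a service and can send all traffic back to the old version on failure. The version URLs and the 10 percent weight are assumptions; in practice this logic usually lives in a load balancer or router rather than in application code.

```java
import java.util.Random;

// A sketch of routing a percentage of live traffic to the new version of a
// microservice during a rolling or blue-green rollout.
public class CanaryRouter {

    private final Random random = new Random();
    private final String oldVersion = "http://promotion-service-v1";
    private final String newVersion = "http://promotion-service-v2";
    private volatile int percentToNewVersion = 10;

    // Pick a backend for one incoming request, weighted toward the old version.
    public String pickBackend() {
        return random.nextInt(100) < percentToNewVersion ? newVersion : oldVersion;
    }

    // If the new version misbehaves, routing all traffic back is a one-line change.
    public void rollBack() { percentToNewVersion = 0; }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter();
        for (int i = 0; i < 5; i++) {
            System.out.println("request " + i + " -> " + router.pickBackend());
        }
    }
}
```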
Feature Flags
One other common pattern is to allow for feature flags. A feature flag, like a configuration parameter, can be added to the system to allow toggling a feature on or off. Implementing this pattern allows the system to route to the relevant microservice for a feature only when its flag is turned on, which enables easy A/B testing of features migrated from the monolithic system to microservices. If the existing version of a feature and the new microservice replicating that feature can coexist in the production environment, a traffic routing implementation combined with the feature flag can help the delivery teams build out the end system more rapidly.
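A minimal sketch of such a flag is shown below, assuming a hypothetical flag name and endpoints: when the flag is on, requests go to the new promotions microservice; when it is off, they use the monolith’s existing code path.

```java
import java.util.Map;

// A minimal sketch of a feature flag guarding a migrated feature: when the
// flag is on, requests use the new promotions microservice; when it is off,
// they fall back to the monolith's existing implementation.
public class PromotionFeatureToggle {

    private final Map<String, Boolean> flags;

    public PromotionFeatureToggle(Map<String, Boolean> flags) { this.flags = flags; }

    public String promotionEndpoint() {
        boolean useMicroservice = flags.getOrDefault("promotions.microservice.enabled", false);
        return useMicroservice
                ? "http://promotion-service/promotions"    // new microservice
                : "http://legacy-shop/internal/promotions"; // existing monolith code path
    }

    public static void main(String[] args) {
        PromotionFeatureToggle toggle =
                new PromotionFeatureToggle(Map.of("promotions.microservice.enabled", true));
        System.out.println(toggle.promotionEndpoint());
    }
}
```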
Developer Productivity During Microservices Adoption
Monolithic architectures are attractive in that they allow quick turnaround of new business features on a tight schedule, at least while the overall system is still small. However, this becomes a development and operations nightmare as the system grows.
If working with your monolith were always as painful as it is now, you probably wouldn’t have it. Rather, systems become monoliths because adding onto the monolith is easy at first.
Giving developers the power to choose a “microservices first” approach when building a new feature or system is complicated and involves many moving parts. Doing so demands strong discipline around architecture and automation, which in turn helps create an environment that allows teams to quickly and cleanly build microservices.
One approach to building out this developer infrastructure is to create a standard boilerplate project that encapsulates key principles of microservice design, including project structure, test automation, integration with instrumentation and monitoring infrastructures, patterns like circuit breakers and timeouts, API frameworks, and documentation hooks, among others.
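As one small example of what such a template might bake in, here is a sketch of a simple circuit breaker: after a few consecutive failures it stops calling the dependency and serves a fallback until a cool-down period has passed. The thresholds, names and API are illustrative assumptions, not taken from any particular framework.

```java
// A compact sketch of a circuit breaker that stops calling a failing
// dependency after a few consecutive errors and retries it only after a
// cool-down period.
public class SimpleCircuitBreaker {

    private final int failureThreshold;
    private final long retryAfterMillis;
    private int consecutiveFailures = 0;
    private long openedAt = 0;

    public SimpleCircuitBreaker(int failureThreshold, long retryAfterMillis) {
        this.failureThreshold = failureThreshold;
        this.retryAfterMillis = retryAfterMillis;
    }

    public synchronized <T> T call(java.util.concurrent.Callable<T> action, T fallback) {
        boolean open = consecutiveFailures >= failureThreshold
                && System.currentTimeMillis() - openedAt < retryAfterMillis;
        if (open) {
            return fallback; // fail fast instead of waiting on a broken dependency
        }
        try {
            T result = action.call();
            consecutiveFailures = 0; // success closes the circuit again
            return result;
        } catch (Exception e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = System.currentTimeMillis();
            }
            return fallback;
        }
    }

    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(3, 5_000);
        String promos = breaker.call(() -> { throw new RuntimeException("service down"); }, "[]");
        System.out.println(promos); // falls back to an empty promotion list
    }
}
```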
Project templates allow teams to focus less on scaffolding and glue code, and more on building business functionality in a distributed microservices-based environment. Projects like Dropwizard, Spring Boot, and Netflix Karyon are interesting approaches to solving this. Making the right choice between them depends on the architecture and developer skill level.
Monitoring and Operations
When monolithic systems and microservices coexist, comprehensive monitoring of performance, systems and resources is required. This is even more pronounced when a particular feature from the system is replicated through a microservice. Collecting performance and load statistics allows the monolithic implementation and the microservices-based one replacing it to be compared. This provides better visibility into the gains that the new implementation brings to the system, and improves confidence in pursuing further migration.
Organizational Considerations
The most challenging aspects of moving from monolithic systems to microservices are the organizational changes required, such as building services teams that own all aspects of their microservices. This requires creating multidisciplinary units which include developers, testers and operations staff, among others. The idea is to embrace more collective code ownership and care for software craftsmanship.
Conclusion
Most of the ideas shared in this article have either been practiced or have delivered results in organizations of all sizes. However, this is not a one-size-fits-all paradigm. Hence, it’s important to keep an eye on evolving patterns and adoption war stories. As more organizations move from monolithic systems to microservices, we will have more to learn along our journey.