In this article, I’ll discuss what microservices are, where the concept comes from, and why they’re important. We’ll start with a brief history of microservices and how they compare to a monolithic system design. Then, we’ll discuss some of the principles that underpin microservice architectures, their potential downsides, and how modern tools like containers and Kubernetes fit in.
How Microservices Came to Be
Microservices emerged when organizations started to build more complex applications and the practice of writing monolithic apps became increasingly problematic.
Traditionally, applications were built as monoliths where all code was lumped into one big codebase. With no clear separation between different functionalities, when you update one part of the app, you may inadvertently affect a totally different one. To roll out a simple change, you had to re-deploy the entire application and, if something went wrong, everything was impacted, not just the component you wanted to update or scale.
These problems can be addressed by breaking monoliths down into modules (semi-independent components), an approach that has been around for decades and is likely much simpler than implementing microservices. Yet modularization on its own never really caught on.
Nevertheless, engineers did start modularizing applications. From object-based architectural styles to service-oriented architectures (SOA) to microservices, application architecture became increasingly decoupled (to learn more about software architecture, check out this primer on the topic).
SOA got quite a bit of traction but largely failed, mainly because it left lots of unanswered questions, such as how to correctly split services up. A microservices-based architecture is a more prescriptive type of SOA that emerged from real-world use cases and has been successfully adopted by numerous organizations.
Independent cloud consultant Sam Newman argues that microservices are nothing more than a modular architecture where the modules run on different processes that communicate via networks. That makes microservices the latest culmination of the continuous architectural evolution in search of a decoupled system.
What are Microservices?
Microservices are small, autonomous app components that together form an application. They inherited their basic operating model from SOA but extended it in a more prescriptive way. Generally considered to be an independent part of a codebase, microservices are maintained by a single team.
Why are they important? To update an application, microservices can be updated and deployed independently — you don’t have to re-deploy the entire application. They also allow individual app teams to focus entirely on a single business process without needing to understand the entire application.
What characterizes a microservice? At a high level, all microservices should remain independently deployable; that is the core design philosophy behind microservices. To support that, microservices have the following properties:
- Loose coupling: Each service is autonomous and only loosely connected to the rest of the system. That means it has its own lifecycle and is independently deployed, updated, scaled, and deleted.
- High cohesion: Code with related behavior is grouped together. You don’t want to spread behavior across the app; otherwise, each time you want to update a behavior, you have to update different parts of your app, each representing an additional release. Not only is this more time-consuming, but it also increases risk. By grouping all related behavior together, engineers update code in one place only whenever they want to change a particular behavior. For a deeper dive into this topic, see Domain-Driven Design (DDD).
- Information hiding: Each microservice shares only the data other services need and hides data relevant only to its own processes. Data sharing can inadvertently lead to coupling and should therefore always be deliberate.
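The properties above can be sketched in code. Here is a minimal, hypothetical example of information hiding: the service exposes a narrow public method and deliberately keeps its internal bookkeeping private. All names are illustrative, not from any real system.

```python
# Hypothetical order service: shares only what consumers need,
# hides everything relevant only to its own processes.

class OrderService:
    def __init__(self):
        self._orders = {}     # internal state, never exposed directly
        self._audit_log = []  # only relevant to this service itself

    def place_order(self, order_id: str, item: str) -> None:
        self._orders[order_id] = {"item": item, "status": "placed"}
        self._audit_log.append(("place", order_id))

    def get_status(self, order_id: str) -> dict:
        # Deliberate data sharing: only the id and status cross the
        # service boundary, not the audit log or internal fields.
        order = self._orders[order_id]
        return {"order_id": order_id, "status": order["status"]}

service = OrderService()
service.place_order("o-1", "book")
print(service.get_status("o-1"))  # {'order_id': 'o-1', 'status': 'placed'}
```

Because consumers only ever see the small dictionary returned by `get_status`, the service can restructure its internals without coupling anyone to them.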
To function as one cohesive app, all these autonomous services communicate over a network through network interfaces. The network, and the amount of communication happening over it, introduces new challenges. This, by the way, is where service meshes fit in, but I’ll cover those in a separate article.
Now that we know what microservices are, let’s explore why organizations are adopting them.
Whether addressing “people problems” by aligning services to teams, speeding up innovation by reducing the risk of adopting new technology, or easing deployment and scalability, microservices provide numerous benefits. Let’s take a closer look:
- Autonomous teams: Microservices allow small teams to take full ownership of the entire lifecycle of a service. This increases accountability, code quality, and job satisfaction. For most large organizations this “people side” is one of the main reasons for adopting a microservices approach.
- Technology heterogeneity: Developers can theoretically build each service in a different language and with different technologies. This enables developers to select the best technology for that particular service versus the more traditional standardized, one-size-fits-all approach. That being said, using too many different technologies does increase overhead and many organizations restrict them to counter that.
- Reduced risk of new tech adoption: Developers can also experiment with new technologies in low-risk services knowing that if something goes wrong, it won’t impact the rest of the system. Since risk is the biggest barrier to adopting new technologies, this is a huge advantage.
- Resilience: When a failure occurs in one component, it won’t necessarily cascade into other parts of the system. The problem remains isolated in that particular service, allowing each app component to become its own failure domain. But note, an application is only as resilient as its architecture allows it to be. Without good engineering practices like tracing, observability, and circuit breaking, small failures can still cascade through complex systems.
- Scalability: To scale any one function, you simply scale that microservice versus scaling the entire monolithic application.
- Ease of deployment: To update a line of code, you only update and redeploy that particular microservice versus redeploying the entire monolith. Likewise, rolling back a single service is a lot easier than rolling back an entire app. Tools like Docker, OCI containers, and Kubernetes have dramatically reduced the cost of rollouts and rollbacks.
- Replaceability: Replacing a microservice that is part of a mission-critical app is a lot easier (and less scary) than replacing a mission-critical monolith. Microservices can be re-written or updated one by one until the entire system is updated, significantly reducing the risk of modernizing a huge monolith all at once.
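The resilience point above mentions circuit breaking. As a rough sketch (this is a toy, not a real library API), a circuit breaker counts consecutive failures and, once a threshold is crossed, fails fast instead of hammering a downstream service that is already down:

```python
# Toy circuit breaker: after max_failures consecutive errors,
# the circuit "opens" and callers fail fast.

class CircuitBreaker:
    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.max_failures

    def call(self, func, *args):
        if self.open:
            # Short-circuit: don't even try the downstream call.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success resets the counter
        return result

breaker = CircuitBreaker(max_failures=2)

def flaky():
    raise ConnectionError("downstream service unavailable")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

print(breaker.open)  # True: further calls now fail fast
```

Production-grade breakers also add a timeout after which the circuit "half-opens" to probe whether the downstream service has recovered; that detail is omitted here for brevity.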
Whether a microservices implementation succeeds largely depends on how services are grouped. As mentioned above, one of the reasons SOA implementations struggled was their lack of guidance defining service boundaries. Let’s look at how microservices address that.
Breaking a Business Domain into Bounded Contexts
Each microservice has a specific function modeled around a business domain. A business domain represents the specific business problem the app solves, its overarching goal. Take Gmail: its business domain comprises all functionalities that enable people around the world to communicate over email.
A business domain is made up of multiple bounded contexts: clusters of code related to the same app behavior. Gmail has multiple features, including compose, send and receive, archive, search, etc., which could all potentially form such a bounded context. Let’s look at compose.
To compose an email, you need multiple functionalities, including text editing, autocorrection, formatting, and so on. All these functions likely have code related to the same app behavior, forming a bounded context. Functions within a bounded context also have to be highly aware of each other. Autocorrection, for example, must know about every single character I type in order to function correctly. These bounded contexts represent natural microservice boundaries.
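The compose example can be sketched as a single cohesive unit. This is a deliberately toy illustration (the class, its methods, and the one-entry autocorrect dictionary are all hypothetical): editing and autocorrection live together because autocorrection must see everything that is typed.

```python
# Hypothetical "compose" bounded context: text entry and
# autocorrection are highly cohesive, so they are grouped together.

class ComposeContext:
    CORRECTIONS = {"teh": "the"}  # toy autocorrect dictionary

    def __init__(self):
        self.draft = ""

    def type_text(self, text: str) -> None:
        # Autocorrection observes every character typed, which is why
        # it belongs inside this bounded context rather than outside it.
        self.draft += text
        for wrong, right in self.CORRECTIONS.items():
            self.draft = self.draft.replace(wrong, right)

compose = ComposeContext()
compose.type_text("teh quick fox")
print(compose.draft)  # "the quick fox"
```

If autocorrection lived in a separate service, every keystroke would have to cross a network boundary, which is exactly the kind of chatty coupling that good context boundaries avoid.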
But note, related behavior doesn’t necessarily align one-to-one with features. There could be similar behavior that crosses feature barriers. So keep in mind that microservice boundaries can often be complex and are determined by individual judgement calls. Bounded contexts and individual features may not always line up.
Collaborating in a Decoupled Manner
Microservices are composed of highly related behavior bundled together into a container, creating an independent unit. But even properly containerized services still carry a risk of coupling: they must communicate, or integrate, with one another to collaborate, and that integration can be a source of coupling, too.
Decoupling your system is all about being able to independently change parts of it without affecting other parts of the system.
The fewer services need to know about one another, the more autonomous they are. With greater autonomy comes greater resiliency. Ideally, if one service crashes, the other services will still be able to provide a degraded version of the app.
While a decoupled system is the ultimate goal, 100% decoupling is not always possible. Services communicate in different ways and which technique you use is really determined by the application itself.
The Role of Network Communication
Microservices communicate over a network through their application programming interface (API). To send and receive messages, they must agree on how these messages are packaged. These package rules are determined by protocols, which are your “network communication rules.” You’re probably familiar with HTTP. That’s a protocol typically used over the web. There are many more such protocols.
How communication is coordinated differs. You can broadly categorize communication as synchronous or asynchronous.
Synchronous communication is a little like a landline: you establish a connection and exchange information, and while you’re connected, you can’t take any other calls. This type of communication is often used with request/response messages, where one service sends a request and waits for the other service to respond. While it waits, both services are blocked. As you can imagine, this is only feasible if the connection is lightning fast.
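Here is a minimal sketch of synchronous request/response over HTTP. For the sake of a self-contained example, both "services" run in one process (a tiny local HTTP server plus a blocking client); in a real system they would run on separate hosts, and the endpoint and payload shown are invented.

```python
# Synchronous request/response: the caller blocks until the
# other service answers.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "price service" answers every GET with a small JSON body.
        body = json.dumps({"price": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PriceHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The calling service blocks on this line until the response arrives.
url = f"http://127.0.0.1:{server.server_port}/price"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(data)  # {'price': 42}
server.shutdown()
```

The `urlopen` call is the landline moment: the caller can do nothing else until the remote side picks up and responds.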
Services that communicate this way often behave in an orchestrated fashion. Orchestrated refers to systems where some services are “smarter” than others, telling them what to do.
In terms of coupling, synchronous communication with orchestrated behavior doesn’t allow for total decoupling. Services are aware of one another and provide each other direction. If one of the “smart” services is down, the “dumb” services may not know what to do.
Asynchronous communication is more like email. You send someone an email and generally get on with your life. Once you receive a response, you engage again. That’s the essence of asynchronous communication: a service sends a message and carries on with whatever it does until a response is received. This communication style is often used when the network is unreliable or physically distant. It is typically used with publish-subscribe (or pub-sub) patterns where one service publishes an event and whoever subscribes to it is notified.
This type of communication allows for choreographed service behavior. In choreographed systems, the smarts are more evenly distributed across all services and each service understands its role within the system.
A pub-sub-based system with choreographed behavior could be seen as the “Cadillac of decoupled systems.” One service simply publishes events and doesn’t know who subscribes to them. Other services subscribe to those events through a so-called message broker. Neither side is truly aware of the other, so removing or adding a service won’t even be noticed by the rest of the system. These systems are incredibly flexible and can be updated with minimal risk.
So why aren’t all systems choreographed pub-sub systems? Well, that flexibility comes at a price. Building such a system is complex and requires a lot of time and effort. Once up and running, however, it’s likely your best option.
Not all applications justify such an effort. That’s why you’ll also see very modern microservices-based apps that use synchronous and/or orchestrated communication.
When Should I Use a Microservices Architecture?
Developing and maintaining microservices-based applications is a lot more work than dealing with a well-designed monolith. We’ve seen that microservices have lots of powerful benefits, but are they always the best approach? No. App owners should default to writing monoliths unless they have a compelling reason not to.
As a rule of thumb, small apps with small teams are best served by monolithic architectures, while big applications with numerous teams working on them simultaneously are likely better off with a microservices approach. Start out with a monolithic app and break it into microservices once you need the scaling, performance, or resiliency benefits, and once those benefits outweigh the additional cost in complexity and compute resources. Where that breaking point lies will largely depend on your use case. With no silver bullet, you’ll have to make that decision after careful deliberation.
What you can do early on is keep a clean, well-modularized codebase. That will make the app easier to build and scale, and it will reduce the cost and effort when you start breaking your monolith down into microservices.
How Do Containers and Kubernetes Fit in?
You’ll hear a lot about containers and Kubernetes in the context of microservices. Let’s explore how they relate.
As mentioned above, each microservice is placed in a container, a newish packaging mechanism similar in concept to an ultra-lightweight virtual machine (VM) that helps keep microservices separated. (Note that although containers are conceptually similar to VMs, they do not provide the same isolation or security guarantees.) While microservices predate containers, containers have made microservices much simpler and more cost-efficient.
Kubernetes manages (or orchestrates) your fleet of containerized microservices, making sure they have enough resources and are up and running. It functions as a kind of data center operating system for containers.
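As a sketch of what that management looks like in practice, here is a minimal Kubernetes Deployment for a hypothetical containerized microservice. The service name, image, replica count, and resource figures are all illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments            # hypothetical service name
spec:
  replicas: 3               # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: example.com/payments:1.0.0   # illustrative image
          resources:
            requests:
              cpu: "100m"     # reserve a tenth of a CPU core
              memory: "128Mi" # and 128 MiB of memory
```

With a manifest like this, if a container crashes or a node dies, Kubernetes replaces the missing replica automatically, which is the "making sure they are up and running" part described above.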
In short, a microservice contains the business logic, the code providing business value. Containers help package microservices so they are decoupled from the rest of the system. And Kubernetes manages all services running in a particular system. Containers and Kubernetes play a key role in modern microservices-based apps. They simplify the packaging and management of each service and are one of the reasons why microservices are so popular today.
As we’ve seen, microservices evolved through a continuous architectural evolution in search of decoupled systems. While they provide a lot more flexibility than a monolith approach and deliver incredibly powerful abilities, those gains come at the expense of complexity. Organizations must carefully deliberate whether adopting a microservices approach is right for them.
In the context of microservices, you’ll hear a lot about containers and Kubernetes. That’s because they are important technological innovations that provide tremendous value to microservices. Most organizations today using a microservices approach are implementing it with containers and Kubernetes.
To learn more about microservices, check out Sam Newman’s book “Building Microservices.” It’s a great read and provides many more details.
A huge thank you to Sam Newman who took the time to review, provide feedback, and share some of his new perspectives on the topic. Also a big thanks to Jason Morgan for all the input and thoughtful discussions. And thanks to Carol Scott, and Elise Serbaroli for their valuable feedback.
The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: MADE, Docker, Real, Bit.