8 Architectural Considerations to Keep in Mind About Microservices
Over the past several years, using microservices to drive agile best practices and accelerate software delivery has become more and more commonplace. To avoid the pitfalls that come with monolithic applications, microservices break your architecture into loosely coupled components (services) that are easier to update, improve, scale, and manage independently.
We see organizations of all sizes, including large enterprises, looking to take advantage of microservices, containers, serverless, and other modern architectures, both for greenfield applications and for decomposing their monolithic legacy applications.
However, as you can imagine, these massive architectural changes do not happen overnight, and they have broad implications, from the way you develop, deploy, manage, and monitor your application to your organization's culture, team structure, skill set, and more.
How can you make microservices an integral part of your development and software delivery strategy? Here are eight things to keep in mind as you’re planning for microservices.
1. Increased Agility with Higher Density
When it comes to agility, microservices give you greater flexibility and support faster releases. For a company with dozens upon dozens of dependent application components, upgrading a monolithic system is a large undertaking: all components must often be upgraded together, in a particular dependency order, which leads to very long release cycles.
With microservices, however, you can update specific services without having to upgrade the entire stack. The result is faster and more frequent releases and incremental improvements to individual services, ultimately delivering a better product to the end user. That closely matches the goals of agile, so it is easy to see why microservices are a powerful way to enable agile methodologies.
Another primary goal for many teams adopting microservices is higher density: the ability to run more applications, on behalf of more customers, on shared hardware. Leveraging shared infrastructure at higher density ultimately reduces costs.
From that perspective, containers are a good fit for microservices. Each microservice is typically focused on a single capability, and a container generally runs a single process (the service) on a single port. Because containers come bundled with all the environment configuration the service needs, and because they can be spun up or torn down quickly (you are essentially just starting a process, without building an image or installing the stack each time), they can be scaled and managed granularly to deliver just the right amount of resources for each service to run optimally.
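The "single process on a single port" idea can be sketched with a minimal service, here using only the Python standard library. The `/orders` endpoint and the `OrderService` name are hypothetical, purely for illustration; the point is that the whole container would run just this one process serving one capability.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle(path):
    """Route a request path to a (status, body) response for this one capability."""
    if path == "/health":
        return 200, {"status": "ok"}
    if path == "/orders":
        return 200, {"orders": []}  # hypothetical single capability
    return 404, {"error": "not found"}

class OrderService(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = handle(self.path)
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# To run standalone (one process bound to one port, the container's sole job):
#   HTTPServer(("", 8080), OrderService).serve_forever()
```

Because the service exposes a single port and holds its configuration internally, the orchestrator can start, stop, and replicate it freely.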
Microservices enable you to be more agile and accelerate time to market, alongside better infrastructure utilization and improved costs. This is the goal for software-driven organizations today.
2. Remember to Consider Form Factor
In the past, it was commonly accepted that containers were the only delivery vehicle for microservices. But in recent years, that perception has shifted in conjunction with the evolving use cases for microservices as well as the “serverless movement.” Today, there is a new set of frameworks that allow you to use functions and source code directly as the unit of execution. While in many instances, containers are still the best form factor to deliver your microservices, it’s not a hard-and-fast rule that you need them to implement microservices.
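Using source code directly as the unit of execution typically looks like a single handler function. This is a sketch of the common FaaS calling convention (an `event` payload plus an optional `context`), not the exact signature of any one provider:

```python
def handler(event, context=None):
    """A FaaS-style function: the source code itself is the deployable unit.
    The platform invokes it per event; there is no server or container to manage."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}
```

The platform owns packaging and execution; the developer ships only this function.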
3. Don’t Forget the Orchestration Layer
Whichever form factor you choose to deliver your microservices, whether containers or another vehicle, you need to be just as intentional about how you approach your orchestration layer.
In the container world, there used to be half a dozen orchestration systems to choose from, but in the past year the market has largely consolidated around Kubernetes as the de facto standard for container orchestration.
In the serverless realm, though, there are more options, in the form of different Functions-as-a-Service (FaaS) solutions powered by different technologies.
In fact, if you choose serverless functions, you can bypass much of the perceived complexity of using and running containers and the steep learning curve of running Kubernetes for production workloads. With functions, developers focus on writing application code, without worrying about infrastructure "plumbing," capacity planning, and management. Functions are triggered by events, and the data center or cloud provider automatically spins up and manages the container resources required to run them reliably. Event-based functions are a great way to further decrease coupling, and a FaaS or serverless framework that supports functions directly also greatly simplifies the operational work of orchestrating and managing containers.
You can think of functions as very granular microservices. Multiple functions can then be composed together, optionally in conjunction with a microservices application, to perform business functionality.
4. Triggering Mechanisms
The next thing to consider is triggering mechanisms. Typically, a microservice executes in response to an event. Will the service be activated by a REST API call, an HTTP request, or a message on a bus? Are there specific event types, such as a new file being added to an object store like S3, that need to trigger the service? Depending on the type of event used to trigger the service, you may need additional tooling and services to support the microservice.
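One common pattern is a dispatcher that routes each trigger type to its own handler. This is a minimal sketch with made-up event shapes (the `"http.request"` and `"s3.object_created"` type names and their fields are assumptions, not any provider's real schema):

```python
# Registry mapping event types to handler functions.
HANDLERS = {}

def on(event_type):
    """Decorator that registers a handler for one trigger type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on("http.request")
def handle_http(event):
    return {"status": 200, "path": event["path"]}

@on("s3.object_created")
def handle_new_file(event):
    # e.g. kick off processing when a file lands in an object store
    return {"status": "processing", "key": event["key"]}

def dispatch(event):
    """Route an incoming event to the handler registered for its type."""
    handler = HANDLERS.get(event["type"])
    if handler is None:
        raise KeyError(f"no handler for {event['type']}")
    return handler(event)
```

Whether this routing lives in your code, an API gateway, or a FaaS platform is exactly the tooling decision the trigger type forces you to make.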
5. Logging and Tracing
The events that trigger microservices are short-lived, a single piece of application functionality can be composed of dozens of separate microservices or serverless functions, and container infrastructure itself is immutable and transient. All of this has significant implications for your logging. You should ask yourself questions like: How will the logs produced by our microservices be collected? How will they be aggregated? How will Ops teams, support teams, and developers troubleshoot microservices when they don't behave as expected? Fortunately, there is an entire industry of logging tools and best practices, some designed specifically for microservices.
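A common starting point is structured, machine-parseable logs: one JSON object per line, tagged with the service name and a request id so records from many short-lived containers can be aggregated and correlated. A minimal sketch using Python's standard `logging` module (the `"orders"` service name and field names are illustrative assumptions):

```python
import json
import logging
import sys
import uuid

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so a log shipper can aggregate
    records from transient containers by service name and request id."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "service": getattr(record, "service", "unknown"),
            "request_id": getattr(record, "request_id", None),
            "message": record.getMessage(),
        })

def get_logger(service):
    logger = logging.getLogger(service)
    handler = logging.StreamHandler(sys.stdout)  # containers log to stdout
    handler.setFormatter(JsonFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

log = get_logger("orders")
log.info("order received",
         extra={"service": "orders", "request_id": str(uuid.uuid4())})
```

Logging to stdout (rather than local files) is what lets the platform's collector pick logs up from containers that may live for only seconds.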
6. Monitoring, Metrics Collection, and Performance Across Microservices
The way services are monitored and how metrics are collected and processed has changed quite a bit over the past year or so. For example, Prometheus has emerged as a preferred way of aggregating all this data, ingesting it, and making it available for queries. Doing that at scale lets you understand how the performance of one microservice relates to, and correlates with, others. Distributed systems are difficult and demand strong system comprehension. A holistic view of your microservices across your entire development pipeline helps your developers diagnose performance issues and identify opportunities for increased efficiency.
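To make the model concrete, here is a toy counter rendered in the Prometheus text exposition format that a scraper ingests. This is a from-scratch sketch for illustration only; in practice you would use an official client library (e.g. `prometheus_client`) rather than rolling your own:

```python
from collections import defaultdict

class Counter:
    """Minimal Prometheus-style counter: monotonic, labeled, and rendered
    in the plain-text exposition format that a scraper pulls over HTTP."""
    def __init__(self, name, help_text):
        self.name, self.help_text = name, help_text
        self.values = defaultdict(float)

    def inc(self, amount=1.0, **labels):
        key = tuple(sorted(labels.items()))
        self.values[key] += amount

    def expose(self):
        lines = [f"# HELP {self.name} {self.help_text}",
                 f"# TYPE {self.name} counter"]
        for key, value in sorted(self.values.items()):
            rendered = ",".join(f'{k}="{v}"' for k, v in key)
            lines.append(f"{self.name}{{{rendered}}} {value}")
        return "\n".join(lines)

requests_total = Counter("http_requests_total", "HTTP requests served.")
requests_total.inc(service="orders", status="200")
requests_total.inc(service="orders", status="200")
requests_total.inc(service="billing", status="500")
```

The `service` label is what lets queries slice and correlate the same metric across many microservices.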
7. Data Services
Microservices and especially serverless functions tend to be stateless, but most business applications need persistent state. Consequently, most microservices read and write data stored in a separate data service, such as a database, message broker, cache, or key-value store. When choosing a platform for running microservices, it is important to consider the platform’s support for running data services. Platforms that run natively on Kubernetes are a good choice because virtually all popular open-source data services (e.g. MySQL, Redis, Memcache, Cassandra, Kafka) have been ported to Kubernetes, and Kubernetes itself provides rich APIs and mechanisms (e.g. StatefulSets) for satisfying the needs of stateful services.
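The stateless-service-plus-data-service split can be sketched as follows. `InMemoryStore` stands in for a real store like Redis, and the cart service and its key scheme are hypothetical; the point is that the service itself holds no state, so any replica can serve any request while persistence lives in a separate (e.g. StatefulSet-backed) data service:

```python
class InMemoryStore:
    """Stand-in for an external key-value store such as Redis."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

class CartService:
    """Stateless: all persistent state is delegated to the injected store,
    so the service can be scaled, restarted, or replaced freely."""
    def __init__(self, store):
        self.store = store

    def add_item(self, user_id, item):
        cart = self.store.get(f"cart:{user_id}") or []
        cart.append(item)
        self.store.set(f"cart:{user_id}", cart)
        return cart
```

Injecting the store also makes the service trivial to test and to repoint at a managed data service later.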
8. Accelerate Time-to-Value and Simplify Ops with Serverless
As we mentioned earlier, Kubernetes is still a mainstay for microservices delivery, but there are other options to discuss. Over the past year, people have started to realize that a migration to microservices and containers involves a lot of complexity: there are so many new patterns and new tools to learn that the move is not as easy, simple, or immediately rewarding as expected. At the same time, serverless and the Functions-as-a-Service (FaaS) movement have strengthened over the past year, popularized by AWS Lambda.
As this realization takes hold, folks are understanding that with serverless and functions they can move to microservice-like patterns and design philosophy without the complexity of managing Kubernetes. This represents a fundamental shift in how people view the best way to get to microservices: it is not just Kubernetes.
Overall, microservices are a fantastic strategy for making monolithic applications more malleable and easier to fit into an agile development lifecycle. By keeping the eight concepts and tips above in mind as you move forward, you can leverage microservices to streamline your organization's software releases and improve IT utilization as well.