Development / Kubernetes / Microservices / Contributed

Monolithic Development Practices Kill Powerful Kubernetes Benefits

2 Aug 2021 12:54pm, by

Hugh McKee
Hugh is a developer advocate at Lightbend. He has had a long career building applications that evolved slowly, that inefficiently utilized their infrastructure, and were brittle and prone to failure. His focus now is on helping other developers and architects build resilient and scalable distributed systems. Hugh frequently speaks at conferences, and is the author of Designing Reactive Systems: The Role of Actors in Distributed Architecture (O'Reilly).

To say Kubernetes has made a huge impact on development practices might be the understatement of the 21st century, as its use continues to expand at a blistering pace. In fact, a recent Kubernetes Adoption Survey conducted by Portworx found that 68% of IT professionals increased their Kubernetes use due to the pandemic. That increase is not surprising as Kubernetes allows for building and deploying new application components quickly, resulting in much faster time-to-market, particularly for more complex applications.

The granularity that Kubernetes provides (more focused microservices versus larger monolithic applications) allows for faster system evolution. In other words, new services can be introduced and existing services can be changed more quickly, because introducing or changing a service has a smaller “blast radius,” that is, a smaller impact on the overall application. It’s this loose coupling when moving from a non-Kubernetes environment to Kubernetes that provides such big advantages. However, old habits of the monolithic application world die hard and can choke out many of the powerful benefits that Kubernetes provides.

Unlearning Monolithic Habits

In monolithic thinking, for example, module A calls module B and module B calls module C. Everything works because they are all running in a single sequential process. Carry that same pattern into microservices, however, and those calls now travel across the network. That means latency, which means a performance hit for the application, and that’s not something anyone wants. With Kubernetes, microservices shouldn’t talk to each other via synchronous remote procedure calls; they should talk to each other asynchronously.

For example, when microservice A gets some kind of request, it’s capturing information and maybe broadcasting what it’s doing. It doesn’t care who is on the receiving end of that broadcast. Receivers are getting that information and processing it in an asynchronous flow. Building microservices that are loosely coupled means designing them from the ground up to work as autonomously as possible and all the communication between A and everything else is asynchronous.
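To make the contrast concrete, here is a minimal sketch of that broadcast style in Python. The in-process `EventBus` is a stand-in for a real message broker such as Kafka or NATS, and the class, topic and handler names are hypothetical, invented for this example:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class EventBus:
    """Minimal in-process stand-in for a message broker (e.g. Kafka, NATS)."""
    subscribers: dict = field(default_factory=dict)

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    async def publish(self, topic, event):
        # Fire-and-forget: the publisher neither knows nor waits on receivers.
        for handler in self.subscribers.get(topic, []):
            asyncio.create_task(handler(event))

async def main():
    bus = EventBus()
    received = []

    async def shipping_handler(event):
        received.append(("shipping", event))

    async def analytics_handler(event):
        received.append(("analytics", event))

    bus.subscribe("order-placed", shipping_handler)
    bus.subscribe("order-placed", analytics_handler)

    # Service A broadcasts what happened; it does not call B or C directly.
    await bus.publish("order-placed", {"order_id": 42})
    await asyncio.sleep(0)  # yield once so the handler tasks can run
    return received

print(asyncio.run(main()))
```

Note that microservice A completes its work whether zero, one or many receivers are subscribed; that indifference is the loose coupling.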


Unfortunately, many developers using Kubernetes are doing just the opposite. A good example is how tightly coupled their applications’ microservices tend to be. Developers make RESTful JSON remote procedure calls between services and then want distributed tracing, because when services are tightly coupled a failure anywhere in the call chain matters everywhere. When services are loosely coupled, however, all messages will get delivered eventually; it’s just a matter of when. That removes much of the need for that kind of tracing, and for the tight coupling itself.

Let’s talk about scale. Sure, tightly coupled services can all scale, but things often bottleneck outside of the services themselves. Most often, it’s the database. While databases are getting better at scaling, it’s a totally different dynamic: you may be able to scale at the compute level, but at the persistence level you may hit a hard ceiling. A database can only be pushed to a certain performance threshold, and you’re not going to go any faster no matter how much you can scale on the Kubernetes side. So while microservices can scale beautifully in Kubernetes at the code level, you also need to be able to control external factors like data. The takeaway: not only should microservices be loosely coupled, but the data itself should also be autonomous.

Each Service Should Own Its Own Schema

When discussing why data should be autonomous, let’s consider a shopping cart application. In many cases, it’s more interesting to see why something didn’t happen versus what did happen. Why are people putting certain items into their shopping cart and then removing them? When you’re capturing every single event, like an item added or removed from a shopping cart, you’ve got a richer set of data for analysis. That’s why it’s important for developers to move from CRUD (Create Read Update Delete) to a CQRS (Command Query Responsibility Segregation) approach. CQRS splits apart the writing of data and the reading of data; that’s the segregation part. The result is event-oriented types of microservices. When you start to capture your data in events, you stop doing the updates and deletes that are fundamental to CRUD. You’re no longer throwing away data, which can be a good thing.
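As a rough illustration of that write/read split, here is a sketch in Python. The `ItemAdded`/`ItemRemoved` event types and the `CartWriteSide` class are invented for this example, not taken from any particular CQRS framework:

```python
from dataclasses import dataclass

# Hypothetical event types for the shopping cart example.
@dataclass(frozen=True)
class ItemAdded:
    sku: str
    qty: int

@dataclass(frozen=True)
class ItemRemoved:
    sku: str
    qty: int

class CartWriteSide:
    """Command side: appends events, never updates or deletes rows."""
    def __init__(self):
        self.log = []  # append-only event journal

    def add_item(self, sku, qty):
        self.log.append(ItemAdded(sku, qty))

    def remove_item(self, sku, qty):
        # With CRUD this history would be lost; here the removal itself is data.
        self.log.append(ItemRemoved(sku, qty))

def current_cart(log):
    """Query side: fold the event stream into the present state."""
    cart = {}
    for e in log:
        delta = e.qty if isinstance(e, ItemAdded) else -e.qty
        cart[e.sku] = cart.get(e.sku, 0) + delta
    return {sku: n for sku, n in cart.items() if n > 0}

cart = CartWriteSide()
cart.add_item("book-123", 2)
cart.add_item("mug-9", 1)
cart.remove_item("mug-9", 1)   # the "why didn't this sell?" signal survives
print(current_cart(cart.log))  # {'book-123': 2}
print(len(cart.log))           # 3 events retained for analysis
```

The current cart shows only the book, but the log still records that a mug was added and then abandoned, which is exactly the kind of analysis data a CRUD update would have destroyed.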

With an event-oriented system, the customer becomes a publisher. Anytime customer data changes, the customer service publishes it. That information is published out, other services can pick it up asynchronously and they keep their own view of the customer. It’s replicating data, but data is cheap. In fact, in the last 10 years, the cost of data has gone down by at least a factor of 10. Services can just keep that data in their own store. Now, there’s no longer a synchronous data connection between the two; they are asynchronous. This, too, is enabled by an event-oriented approach. When one service goes down, all the other services keep running because they have their own view of customer data. Each service is totally self-contained and has its own data. It runs faster, you can change it faster, and you can fix it faster.
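A minimal sketch of that publish-and-replicate pattern might look like the following; `CustomerService`, `ShippingService` and the deliberately simplified synchronous `SimpleBus` are hypothetical stand-ins for real services and an asynchronous broker:

```python
class SimpleBus:
    """Synchronous stand-in for an async message broker, to keep the sketch short."""
    def __init__(self):
        self.handlers = {}

    def on(self, topic, fn):
        self.handlers.setdefault(topic, []).append(fn)

    def emit(self, topic, event):
        for fn in self.handlers.get(topic, []):
            fn(event)

class CustomerService:
    """Owner of customer data; publishes a change event on every write."""
    def __init__(self, bus):
        self.bus = bus
        self.customers = {}

    def update_customer(self, cid, data):
        self.customers[cid] = data
        self.bus.emit("customer-changed", {"id": cid, **data})

class ShippingService:
    """Keeps its own replica of the customer fields it needs."""
    def __init__(self, bus):
        self.addresses = {}
        bus.on("customer-changed", self.on_customer_changed)

    def on_customer_changed(self, event):
        self.addresses[event["id"]] = event["address"]

    def label_for(self, cid):
        # Served from the local view: no call to CustomerService at read time.
        return self.addresses[cid]

bus = SimpleBus()
customers = CustomerService(bus)
shipping = ShippingService(bus)
customers.update_customer("c1", {"name": "Ada", "address": "12 Queen St"})
print(shipping.label_for("c1"))  # 12 Queen St
```

Because `ShippingService` answers `label_for` from its own replica, it keeps serving reads even if the customer service is down; it just stops receiving updates until the service comes back.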

Beyond the customer data, there are all kinds of reference data as well, and that’s why each service owns its own schema. Consider the shopping cart example again. Within the shopping cart service, beyond what the customer adds and removes from the cart, there are catalog, pricing, inventory, shipping and customer information. All the information each service needs is right there, and there’s no dependency on other services. That’s where a lot of developers make mistakes. When they move to the cloud and Kubernetes, they’re excited to build microservices. They begin to break up their monolithic code into different microservices, but they also carry forward the monolithic databases that they’ve had for decades (they don’t break up the data). A better way to build these microservices is to take slices of that data model so that each microservice has its own view of what the data looks like and that view is entirely private.
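One way to picture those private slices is as separate per-service types that project only the fields each service cares about. The view classes and field names below are purely illustrative, not part of any real schema:

```python
from dataclasses import dataclass

@dataclass
class ShippingCustomer:
    # The shipping service's private slice: only what it needs to ship.
    customer_id: str
    address: str

@dataclass
class MarketingCustomer:
    # The marketing service's private slice of the same real-world customer.
    customer_id: str
    email: str
    opted_in: bool

# A "customer-changed" event carries the full change; each service projects
# only the fields its own schema cares about.
event = {"customer_id": "c1", "address": "12 Queen St",
         "email": "ada@example.com", "opted_in": True}

shipping_view = ShippingCustomer(event["customer_id"], event["address"])
marketing_view = MarketingCustomer(event["customer_id"], event["email"],
                                   event["opted_in"])

# Either schema can now evolve (add or rename fields) without negotiating
# with the other team, because neither service reads the other's tables.
print(shipping_view)
print(marketing_view)
```

Neither view holds fields it doesn’t use, so a schema change in one service is invisible to the other.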

Why? With one monolithic data model, you have to consider everybody that will potentially be impacted, and you may not even be sure who that is. It requires negotiating with other teams. That slows down how nimbly you can evolve your microservices. Even more problematic, you’re asking those other teams to make changes to their code because of your data model changes, and they have zero motivation to do this. It doesn’t enhance their microservices, only yours. Conversely, by owning the schema within your microservice, you can do whatever you want.

As you move to the cloud, Kubernetes and microservices, challenge everything that you know to date. In reality, today’s best practices are completely counterintuitive to how legacy monolithic applications were built. Don’t use monolithic data; use private data for each microservice. Don’t depend directly on the data from other microservices. Use eventing to get data from other microservices when changes occur, and keep your own private copies of that data for your own consumption and use. It’s not about the economy of data; it’s about the speed and nimbleness of data. The benefits of using Kubernetes and microservices are incredible. Just make sure you know how to fully wield their power!

The New Stack is a wholly owned subsidiary of Insight Partners. TNS owner Insight Partners is an investor in the following companies: MADE.

Feature image via Pixabay.
