How Microservices Have Changed and Why They Matter


The concept of microservices is fueled by the need to develop apps faster, make them more resilient and offer customers a better experience. It’s a concept equated with scaled-out, automated systems that run software on simple, commodity infrastructure. It’s the economic efficiencies that containers provide that will make microservices such a major theme in 2016.

The need for fast application development affects the entire organization and how it views the way its business has historically been organized. The new practices that come with microservices require small teams that work iteratively, in a manner unfamiliar to companies that operate in a top-down fashion. This means sweeping changes to how businesses function.

Now the container ecosystem is emerging as a core theme in new thinking about application architectures and microservices.

There are some basic tenets to consider about microservices, noted Battery Ventures Technology Fellow Adrian Cockcroft. First, it’s now less expensive to build software, and containers have made it even more affordable. Docker is on everyone’s roadmap —  from software vendors to end users, all trying to figure out how to use containers — because they can accelerate software delivery. But it also means that the systems need to be instrumented at the application level, which means different requirements for developing, deploying and managing applications.


Adrian Cockcroft’s microservices talk at OOP Software Architectures conference, as rendered into cartoon form by Remarker.

For example, monitoring is more critical than ever for companies dealing with a growing scope of services and stacks. To solve problems, companies have to analyze data logs — logs that are likely stretched across potentially ephemeral nodes and across multiple services. This need for granular monitoring and better tooling helps practitioners grasp how these building blocks affect the dozens of microservices an application may depend upon.

So what works? It starts with the organization and the API: A microservices-based product team and a separate backend-based platform team with an API between them, where the API calls are made and the infrastructure responds consistently and accordingly.

A microservices architecture is defined as a loosely coupled, service-oriented architecture with bounded contexts. It allows a service to be updated without needing to understand how everything else works. Services are built across organizations, and ownership stays in one place. A microservices architecture largely consists of point-to-point calls between systems, so message formats must be flexible: irrespective of versions, everything still works. That means when building a microservices architecture, you need tooling to configure services, discover them, route traffic, and observe the systems you build.
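The flexible-message-format requirement is commonly implemented with the "tolerant reader" pattern: a consumer ignores fields it doesn't know and supplies defaults for fields it doesn't receive, so services on different versions keep interoperating. A minimal sketch in Python (the event shape and field names are hypothetical, not from the article):

```python
import json

def parse_order_event(raw: str) -> dict:
    """Tolerant reader: accept messages from any producer version.

    Unknown fields are ignored; missing optional fields get defaults,
    so an older consumer keeps working when a newer producer adds fields.
    """
    msg = json.loads(raw)
    return {
        "order_id": msg["order_id"],             # required in every version
        "amount": msg.get("amount", 0),          # optional, with a default
        "currency": msg.get("currency", "USD"),  # added in a later version
    }

# A newer producer sends an extra "priority" field; the consumer
# simply ignores it and everything still works.
v2_message = '{"order_id": "42", "amount": 10, "currency": "EUR", "priority": "high"}'
event = parse_order_event(v2_message)
```

The same idea applies regardless of serialization format; schema systems such as Protocol Buffers bake similar forward/backward-compatibility rules into the wire format itself.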

IBM’s Andrew Hately, distinguished engineer and chief technology officer for IBM Cloud Labs, offers the context that fifteen years ago, people might check their bank balance once a week. In time, the Internet allowed people to check their balances, and taking that accessibility further, smartphones drove, perhaps, the most change. Today, people can get instant access to every spend on their accounts. That speed and immediacy means that businesses have to respond with services that are developed on the same scale that the social networks and search companies developed their services on over the past five to ten years.

Businesses have to deal with a constant interaction between their employees, customers, systems, and all possible combinations imaginable — fully connected and available all the time, Hately said. That means a reinvention of business processes that require everything to be connected. If you do not experiment, and do not have a way to quickly get features out, then revenues will suffer and you will be irrelevant.

“Instrumentation is critical,” Hately said.

Code is not supported over hundreds of sites, Hately said. Feedback comes in, and consumers' use of the software feeds the next set of test cases. This rigorous development process provides a way to work as a company, and a way to think about microservices. It is the ops side of DevOps that will do this: if you have a small piece of code and define metrics for it, you can microsegment it down to what is succeeding and what is failing.
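The metrics-per-piece-of-code idea can be sketched very simply: wrap one small function with success and failure counters so its behavior can be segmented on its own. This is a minimal illustration with hypothetical names, not IBM's actual tooling; real systems would emit these counters to a metrics backend rather than keep them in memory.

```python
from collections import Counter

# In-memory stand-in for a metrics backend.
metrics = Counter()

def charge_card(amount: float) -> bool:
    """One small piece of code, instrumented with its own metrics."""
    try:
        if amount <= 0:
            raise ValueError("invalid amount")
        # ... a call out to a payment service would go here ...
        metrics["charge.success"] += 1
        return True
    except ValueError:
        metrics["charge.failure"] += 1
        return False

charge_card(25.0)   # succeeds
charge_card(-1.0)   # fails validation
# metrics now separates, by name, what is succeeding from what is failing.
```

Because the counters are scoped to a single operation, a dashboard can show exactly which microservice, and which code path within it, is degrading.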

Building off the feedback and success of consumers as well as their own internal teams, IBM combined best practices from agile, DevOps, lean, and other iterative processes to create an enterprise methodology called the IBM Bluemix Garage Method. The IBM Bluemix Garage Method combines the reliability and testability of enterprise solutions with the latest open community best practices about quality at scale, making innovation repeatable, creating continuous delivery pipelines, and deployment on cloud platforms. It’s a valuable, open resource for improving the DevOps skills of individuals, teams and entire organizations, all with governance-compliant management and monitoring abilities.

Contracts Around Software

The first generation of container management platforms are supporting these accelerated development processes.

In Docker Compose, microservices are facilitated by the tooling, said Scott Johnston, senior vice president of product at Docker, Inc. The YAML file acts as a manifest describing the different components. Compose allows developers to describe multi-container apps in an abstract fashion: it can describe the web container, database container, load balancer and the logical connections between them, without requiring a specific networking or storage implementation.
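A minimal Compose file along those lines might look like the following. The service names and images are illustrative, not from the article; the point is that the YAML declares the components and their logical links, while Compose handles the wiring:

```yaml
version: "2"

services:
  lb:                        # load balancer in front of the web tier
    image: haproxy
    ports:
      - "80:80"
    links:
      - web                  # logical connection; Compose wires the network
  web:                       # application container (hypothetical image)
    image: example/web-app
    links:
      - db
  db:                        # database container
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:                   # named volume; storage backend is abstracted away
```

Nothing here specifies a network driver or a storage backend — those are resolved by whatever engine the file is deployed against, which is what makes the description portable.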

Microservices are a contract around software, said Engine Yard’s Matt Butcher. Some would argue that they are service-oriented architecture (SOA) done correctly. Developers want usefulness, feature richness and elegance. Microservices return software development to its Unix roots of doing one thing very well. With Unix, the output of a command is arbitrary; microservices are more contractual, specifying not only how to do one thing very well, but also how to interact with the environment. Done well, the result is similar to what can be achieved with a good Unix shell script.

For example, the Kubernetes manifest file format serves as a contract. The manifest provides the details about the resources needed, the volume definitions, storage needs, etc. That serves as a powerful DevOps-style contract. It tells the developer and the operations professional what to expect. It’s not this medieval style of developer and operations relationship that forces the developer to throw the code over the wall.

A chart may contain metadata about an application, plus descriptive parameters about its specific version, and potentially multiple manifests. Each manifest may be a pod definition, a replication controller or a service definition, along with the known resource locations for constituent files. Arbitrary labels may be defined for components contained in a chart.
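A stripped-down Kubernetes pod manifest shows how the format works as a contract. This is an illustrative sketch (names and values are hypothetical): the labels, resource requests and volume definitions tell both the developer and the operator exactly what the workload expects before it is ever handed over:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend             # hypothetical name
  labels:                        # arbitrary labels for grouping and selection
    app: web
    tier: frontend
spec:
  containers:
    - name: web
      image: example/web-app:1.2 # a specific version is part of the contract
      resources:
        requests:                # resources the app needs, declared up front,
          cpu: "250m"            # so operations knows what to expect
          memory: "128Mi"
      volumeMounts:
        - name: config
          mountPath: /etc/web
  volumes:
    - name: config               # volume definition required by the app
      emptyDir: {}
```

Because every expectation is declared rather than implied, there is nothing left to "throw over the wall": the manifest is the handoff.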

“Application developers have a hard enough life as it is,” Butcher said. “And then there’s the quintessential problem, that I call ‘throwing it over the wall,’ where you have the DevOps people who are responsible for running all of this stuff in production, and you have the developers who are responsible for building it, and there’s always this handoff process that, all too often, becomes throwing something over the wall.”

When developers build containers, Butcher said, there’s a certain assurance level — provided by the abstraction layer — that those containers will run much the same way in the production phase as in the development phase. This already alleviates much of the headaches among DevOps professionals, who understand the basic instrumentation of containers. Containerization already provided this assurance level, but products like Helm, a new service from Engine Yard, could help to formalize this relationship further, by presenting it from team to team as a kind of contract — one that isn’t “thrown over the wall,” but instead blows right through it.

From VMs and Monoliths to Containers to Microservices

Containers provide the foundation for cloud-native architectures and symbolize a new form of application architecture compared to what virtualization has traditionally offered, said Bryan Cantrill, chief technical officer at Joyent. Hardware-based virtualization, or traditional VMs, served a time when computing was done on much larger machines. VMs provided a way for the operations teams to manage large monolithic applications that were “morbidly obese,” as Cantrill said, and hardware defined enterprise architectures. The virtual machine sat on top of the substrate, carrying the load of the operating system. Containers, however, have created a new and more agile abstraction.

“The app went on a crash diet,” said Cantrill.

Today, the complexity comes with moving from VMs and monoliths to containers and microservices. Companies struggle with how to make the shift as it requires a different thinking about application architectures, the infrastructure and the overall organization itself.

The objective of Joyent’s open source Triton service is to simplify and accelerate a company’s transition to containers and microservices, Cantrill said. It allows developers to simplify architectures. You provision only containers and never provision a virtual machine. You are able to take the cookbook for microservices and deploy it in seconds, because you don’t have to do things such as network configuration.

Cantrill said Joyent is a fan of Docker Compose, as it can talk to a single Docker Engine — a Docker remote endpoint implemented by Triton, which virtualizes the entire data center. It allows quick and easy spin-up of a full, resilient operating service. “This is the big trend,” Cantrill said.

VMware Chief Technical Officer Kit Colbert looks at the market from a perspective of how to move along the container journey. VMware has been focused on the operations space. It is now developing an approach to meet the new emergence of developers and their needs, but as an infrastructure provider.

VMware sees itself as an infrastructure provider, not an application-centric, architecturally oriented company. Colbert sees some customers interested in Cloud Foundry, and others that want a DIY approach. VMware is seeking to support application technologies with vSphere Integrated Containers (VIC) and the Photon Platform.

To accommodate customers using containers, vSphere Integrated Containers (VIC) makes containerized workloads first-class citizens on vSphere. VIC fits on the run side of the development process, and applies one of the most valuable aspects of virtualization to containers: flexible, dynamic resource boundaries. With virtualization, VMware turned commodity hardware into simple, fungible assets. Likewise, by applying a Docker endpoint within a virtual machine, vSphere Integrated Containers create virtual container hosts with completely dynamic boundaries. The result is an infrastructure supportive of both traditional and microservices-based applications, with accessibility to both IT and developers.

By contrast, VMware’s Photon Platform is intended specifically for cloud-native applications. Comprising a minimal hypervisor and control plane, Photon Platform is focused on providing speed and scale for microservices. Photon Platform has also been designed for developer ease of use via APIs, giving developers a self-service platform with which to provision applications and, ultimately, speed deployment.

From VMware’s perspective, operations teams are also pushing to make deployment faster. It’s now more about the digital experience, or how functional the software is, than anything else. It’s comparable to how we view the apps we use on our smartphones: a provider may be known for the great sound of its speakers, but is the app for the service functional?

“Can I rely on it?” Colbert asked. Companies have to figure out how to build apps that serve the customer, who is continually seeking out the quality app, and they need to figure that out in order to move faster. Many customers with built-out, virtualized infrastructure are looking to meet the organizational challenges that come with this faster application development process.

Development in the Time of Microservices

Software development is iterative and requires continual feedback loops to work. This is what tools such as the IBM Bluemix Garage Method offer. But most organizations work according to a model that is different from the way developers work. Developers do not work in the same manner as people in sales, marketing or finance, who work according to a plan and a schedule. In software development, the process is much more iterative, not top-down.

“I don’t know what to call this but ‘the real-world and software-world impedance mismatch,’” said Pivotal Principal Technologist Michael Coté. Coté argues that figuring out how software development works can seem paradoxical, but it does keep people from trying to understand everything as one giant machine described in one document. By following the principles of software development, organizations are allowed to find their way, as opposed to staying rigidly attached to one plan.

There is no one way of doing microservices, Coté said. With microservices you get runtime and architectural resiliency. Microservices build upon simple principles to build really complex things. The simpler you can create concepts, the greater the complexity of things you can make.

But what happens when the complexity shifts somewhere else? With Pivotal, the platform manages the complexity. It takes away choices so the customer does not have to think about networking, operating systems and the like. It allows the customer to put the complexity at the top of the application stack, where they can better differentiate their offering for the end user.

“We’re seeing another renaissance moment in the technology industry,” said Hately.

Likewise, the IBM Bluemix Garage Method aims to abstract complexity away from developers, making them efficient and letting them better enjoy the jobs they are actually there to do. All of these efforts are adding up to big changes in the enterprise, on both the technical and cultural levels.

Docker, IBM, Joyent and VMware are sponsors of The New Stack.

Feature image: Close-up of Remarker’s cartoon of Adrian Cockcroft’s talk.

This post is part of a larger story we're telling about the state of the container ecosystem.

Get the Full Story in the eBook

