
Adapt or Die: The New Pattern of Software Delivery

23 Feb 2017 1:00am, by Wayne Gibbins

Wayne is a recovering VC who has worked in software engineering, product management, marketing, operations, communications, and branding. He is currently the Chief Commercial Officer at Wercker.

Competing in today’s software world means getting working software in front of your customers as quickly as possible and keeping it running once it’s there.

This feeling is most acute in the world of startups, where founders find themselves staring at ever-shortening runways, but enterprises, too, increasingly feel the heat from upstart competitors nibbling at their margins and from customers with ever-higher demands. This rush to ship software is not a one-time event.

Companies need to get many different versions of their software out in quick succession, often running more than one version at once in order to test their assumptions in the marketplace and learn where to focus their energies next.

In short, companies need to be highly adaptable, so their software needs to be highly adaptable too.

An enthusiastic proponent of microservices, Adrian Cockcroft, former cloud architect at Netflix and currently with Amazon Web Services, has described the need to adapt like this: “Everything basically is subservient to the need to be able to make decisions, and build things, faster than anyone else.”

But speed isn’t the only factor here, as the diagram below illustrates well:

Here we see that while the rate of change, or agility, is important, the resilience of software also needs to increase, and all of this is happening at unprecedented scale. These three aspects of modern software (agility, resilience and scale) can at first appear contradictory, and with traditional software practices they are.

In recent years, however, we’ve seen an explosion of new developments in software; each intended to achieve the above-stated goal. The current nexus of software development, also referred to as ‘Cloud Native Computing’ does just that and lies at the intersection of containers, microservices, Continuous Integration/Continuous Delivery/Deployment and the modern cloud.

For us at Wercker, that means it’s about getting Dockerized microservice based applications built, tested and deployed in a fast, repeatable and secure way. It means getting from code on a developer’s laptop to containers running at scale on the Kubernetes container orchestration engine. This is where we play.

In this article, we’ll take a look at the aforementioned software development practices and movements to see why each of them is helping companies to become more agile and resilient at scale.

Microservices

Microservice architectures have grown increasingly popular over the last few years because they specifically tackle the problems of modern software development by decoupling software solutions into smaller functional pieces that are expected to fail.

By taking a closer look at how and why microservices are being used, we can see how they directly allow teams to achieve greater agility and resilience.

The agility of a software solution refers to its ability to change. Martin Fowler, an early proponent of microservices, states that this agility comes from the decoupling of the running system into “a suite of small services.”

In contrast to monolithic systems, microservice-based systems can evolve quickly in response to new feature requests or bug reports, as each microservice can be redeployed as and when a change to that sub-component is necessary. It is no longer necessary to redeploy the entire application for every update, which is a severely limiting factor on the agility of monolithic applications.

However, this agility initially comes at a cost for teams. In the aforementioned article, Fowler also points out that due to the distributed nature of microservice architectures, individual services “need to be designed so that they can tolerate failure of [other] services.” If teams persist through what can be a difficult on-ramp, however, they will eventually find that they have an application that is more resilient overall.

This occurs because the services in the system expect: 1) that their dependencies will be remote, enabling risk to be spread across virtual machines, racks or even continents; and 2) that their dependencies will fail, forcing developers to build resilience in from the get-go.
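That second expectation changes how code is written: a call to another service is wrapped in retries and a graceful fallback rather than assumed to succeed. Here is a minimal Python sketch of the idea; the service and function names are hypothetical, not taken from any real system:

```python
def fetch_recommendations(user_id, remote_call, retries=2, fallback=None):
    """Call a remote microservice, tolerating its failure.

    `remote_call` stands in for an HTTP client; it may raise on failure.
    """
    for attempt in range(retries + 1):
        try:
            return remote_call(user_id)
        except ConnectionError:
            continue  # transient failure: try again
    # Dependency is down: degrade gracefully instead of crashing.
    return fallback if fallback is not None else []

# A dependency that always fails, to exercise the fallback path.
def flaky_service(user_id):
    raise ConnectionError("recommendation service unreachable")

print(fetch_recommendations(42, flaky_service, fallback=["default-item"]))
# → ['default-item']
```

The calling service stays up and serves a degraded response even when its dependency is completely unreachable.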

Microservices, then, allow teams to build software that is both more agile and more resilient. In the following sections, however, we’ll see how perseverance isn’t the only thing required to reach this goal. Cloud native applications benefit from new tooling as well as new software architectures.

Docker Containers

Although already being used by highly skilled engineering organizations like Netflix and Gilt Groupe, microservice architectures got a big leg up in 2013 when Docker Inc. released Docker. Now nearly ubiquitous, Docker wrapped existing container-based virtualization technologies in a developer-friendly way and would prove the perfect replacement for virtual machines as the unit of deployment for microservices.

So how does Docker help software companies to be agile and resilient at scale?

First and foremost, containers are much smaller than virtual machines (VMs). Because they share the underlying host OS, they can start in hundreds of milliseconds instead of minutes, resulting in faster tests, faster deploys and higher overall agility for the teams using them. Their smaller size also allows teams to pack more compute onto the same amount of underlying hardware, increasing the overall scale that container-based solutions can reach.

Second, Docker containers have ushered in broad adoption of the immutable server pattern. Although the pattern was already in use before containers came onto the scene, it is now proving ideal when combined with microservices and CI/CD for high-speed testing and deployments.
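To make the immutable server pattern concrete, a sketch of a hypothetical Dockerfile is shown below: the application and its dependencies are baked into a versioned image at build time, and any change means building and shipping a new image rather than patching a running server. All names here are illustrative.

```dockerfile
# Every change produces a brand-new image; running containers are never patched.
FROM python:3.11-slim

WORKDIR /app

# Dependencies are baked into the image at build time...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...along with the application code itself.
COPY . .

# The image is tagged, tested and deployed as-is; per-environment
# configuration arrives via environment variables, not by editing the image.
CMD ["python", "app.py"]
```

Because the image is identical in every environment, what was tested in staging is byte-for-byte what runs in production.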

Containers also allow teams to achieve higher levels of resilience through their portability across various infrastructures. Before containers, software companies trying to achieve higher resilience by “hedging their bets” across various clouds would come up against the various non-interoperable VM formats available to them. The problem was solvable with tools like HashiCorp’s Packer, but it added an extra layer of complexity for teams to navigate.

The “write once, run anywhere” design of Docker containers allows engineering and operations teams to spread their infrastructure across multiple cloud providers, as long as the Docker daemon or a container orchestration engine is in place. We’ll touch on that in the modern cloud section below.

Continuous Integration/Continuous Delivery/Continuous Deployment

Continuous Integration (CI) and Continuous Delivery/Continuous Deployment (both often referred to interchangeably as CD, although subtly different) were already in use in varying degrees when Jez Humble and David Farley’s book, “Continuous Delivery,” arrived in 2010.

Deriving directly from the “working software over comprehensive documentation” tenet of the Agile Manifesto, continuous integration first sought to make sure that every new change to a piece of software was fully tested.

Continuous integration has been a key driver behind teams adopting more thorough automated testing before shipping software. Why? This is perhaps best answered in the question first posed by Kevlin Henney:

“Why do cars have brakes?”

“So that they can go faster.”

Continuous integration, then, has allowed teams to increase their agility by building resilience into their software in the form of automated testing. This testing happens on many layers, but a full description is outside the scope of this article and will be covered in follow-up posts.
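To make the “brakes” metaphor concrete, here is a sketch of the kind of automated check a CI server runs on every commit; the function and its behavior are a hypothetical example, not code from any real project:

```python
def apply_discount(price_cents, percent):
    """Return the discounted price in cents, rounded down to whole cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

# These checks run automatically on every change, acting as the brakes
# that let the team ship faster with confidence.
assert apply_discount(1000, 10) == 900
assert apply_discount(999, 50) == 499
try:
    apply_discount(100, 150)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for an invalid percent")
print("all checks passed")
```

A failing assertion stops the pipeline before a broken change ever reaches an artifact, let alone production.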

Continuous delivery takes continuous integration a step further by not only testing the software but also delivering that software as ready-to-deploy artifacts. This is a vital step toward complete testing, as continuous delivery also allows software systems to be tested ‘in full’ in environments often referred to as staging environments before they go to production.

Continuous deployment goes further still by pushing all changes straight into production once they are ready, using techniques like canary testing and blue/green deployment to check the production readiness of new changes against actual production traffic.
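One common way to realize blue/green deployment on Kubernetes, sketched below with hypothetical names and labels, is to run the new version alongside the old one and flip a Service’s label selector to cut traffic over (or back) in a single change:

```yaml
# The Service routes traffic to whichever version its selector names.
# Deploy "blue" and "green" side by side, then flip `version` to switch.
apiVersion: v1
kind: Service
metadata:
  name: storefront
spec:
  selector:
    app: storefront
    version: green   # change to "blue" for an instant rollback
  ports:
    - port: 80
      targetPort: 8080
```

Because both versions are already running, the switchover takes effect as fast as the selector update propagates, with no redeploy needed to roll back.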

And it just so happens that Docker containers make the perfect ready-to-deploy artifact for a few reasons. First, since they are ‘build once, deploy anywhere,’ they can be deployed into various testing environments without hassle before being pushed through to production.

Second, thanks to their comparatively small runtime footprint, it is often possible for developers to run fuller tests on their own development machines, giving them the fastest possible feedback about new changes and as a result, higher agility.

Advanced tools in the space, such as the Wercker platform, have changed significantly since the early CI (and later CD) tools became available, and this is just as well, as microservice architectures rely on highly scalable and adaptable build pipelines. Wercker is not only able to handle build pipelines for each service, but can also natively deal with Docker image artifact repositories and advanced deployment methods such as those mentioned above.
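A build-test-push pipeline of the kind described above might be sketched in a `wercker.yml` along these lines; the step names and parameters below are illustrative rather than copied from any real project:

```yaml
# Illustrative pipeline: every commit is built and tested inside a
# container, then shipped as a versioned Docker image artifact.
box: python:3
build:
  steps:
    - script:
        name: install dependencies
        code: pip install -r requirements.txt
    - script:
        name: run tests
        code: pytest
deploy:
  steps:
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: myorg/myservice
```

Each microservice carries its own pipeline definition, so services can be built, tested and shipped independently of one another.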

So now we have microservice based applications, bundled as Docker containers, being built, tested and deployed quickly and resiliently. The next section looks at where they are ultimately being deployed.

The Modern Cloud

So far we’ve seen how microservice architectures, when combined with Docker containers and propelled forward by advanced CI/CD tools like Wercker, form the blueprint of modern software engineering. But these decoupled, containerized and heavily tested workloads, of course, need to run somewhere. This is where the modern cloud enters.

Amazon kicked things off in 2006 with the Elastic Compute Cloud (EC2) service. Before then, fast (for the time), low-cost and developer-friendly virtual machines were available only to those software engineers lucky enough to be working at forward-thinking companies with advanced internal clouds.

Software may be eating the world, but many companies started to see that to be agile they had to concentrate on eating only the part of the world they had strong domain knowledge of. By letting Amazon do the heavy lifting on the infrastructure side, companies could concentrate on building the software features that their customers cared about instead of managing the infrastructure that their customers didn’t want to see or hear about.

In many ways, this supply of cheap, quick and plentiful virtual machines allowed some companies to adopt the immutable server pattern that is now prevalent with Docker containers. The pattern increased the agility of teams, who could rapidly redeploy from versioned VM artifacts rather than manage the state of long-running instances with configuration management tools like Puppet, which too often cause operations headaches of their own.

Of course, there are more players in the market, and while our customers can deploy anywhere thanks to our extensible integration model, we work closely with Amazon Web Services, Microsoft Azure and Google Cloud Platform. Why is that? Because they are currently the biggest and the best. (Oracle recently announced it is hot on their heels, so there is more market movement to come as enterprise adoption increases.)

The implications of public clouds for scale and resilience are obvious:

  1. Public clouds have bigger data centers than you probably want to maintain, future-proofing you for growth.
  2. Public clouds are likely better than many operations teams at keeping the lights on.

As previously mentioned, however, VMs start in minutes rather than milliseconds, and VM formats are typically locked to the cloud provider in question. This means that scaling workloads in response to demand is slower, and migrating to another cloud provider is all but impossible in practice.

The emergence of containers fixed this problem neatly by putting a layer in between the underlying VMs and the running applications, typically called a container orchestration engine.

There are several container orchestration engines, and Wercker supports them all, but here we’re only talking about Kubernetes: we love it, we run our stack on it, and we think it’s the most popular and successful container scheduler out there. We’ll have a whole series of Kubernetes blog posts, tutorials and sample apps to add to our existing Kubernetes content coming soon.

By spanning multiple clouds, the combination of container orchestration engines with containerized applications makes “build once, run anywhere” a reality.
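Putting the pieces together, a containerized microservice lands on the orchestrator as a declarative description of the desired state. A hypothetical Kubernetes Deployment might look like the sketch below; the image name and replica count are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storefront
spec:
  replicas: 3          # the orchestrator keeps three copies running at all times
  selector:
    matchLabels:
      app: storefront
  template:
    metadata:
      labels:
        app: storefront
    spec:
      containers:
        - name: storefront
          image: myorg/storefront:1.4.2   # immutable, versioned artifact
          ports:
            - containerPort: 8080
```

If a container or its underlying node dies, the scheduler replaces it automatically, which is resilience; raising `replicas` scales the service out; and because the spec only names an image, it runs the same on any cloud that runs Kubernetes.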

Summary

It’s often difficult to keep track of everything that is changing in the software world, let alone to try to guess where it is all going. At Wercker we feel that the combination of microservice architectures, Docker containers, modern CI/CD and the modern cloud has created a unique point in computing history: a true paradigm shift.

At Wercker we’re very excited to be providing the developer automation layer required, effectively the glue that pulls all these concepts together from a developer’s perspective. From the laptop to a Kubernetes cluster, our customers are embracing this new paradigm wholeheartedly, and we’re happy to be onboard with them in these exciting times for the software and cloud industries.

Wercker is a sponsor of The New Stack.

Feature image by Joshua Earle, via Unsplash.

