
Microservices, APIs, and Innovation: Empowering Digital Transformation at Speed

Apr 11th, 2018 9:00am by Mike Amundsen

Mike Amundsen, Lead API Architect at API Academy
An internationally known author and speaker, Mike Amundsen travels the world consulting and talking about network architecture, Web development, and other subjects. As Director of Architecture for the API Academy, he works with companies to provide insight on how best to capitalize on the opportunities APIs present to both consumers and the enterprise.

As a member of the API Academy, I have the opportunity to travel around the world meeting amazing people working on fantastic projects in all sorts of companies — from new startups to mature global enterprises. Incredibly, no matter where I am, no matter who I am talking to, many of the same ideas, practices, and experiences come up. And three of them that I consistently hear about are microservices, APIs and a culture of innovation. All are employed in the name of advancing the notion of digital transformation of an organization.

In this article series, I will share what the API Academy is learning about these powerful trends and some of the techniques companies are using to make digital transformation more than a buzzword. For this article, I’ll focus on what companies say microservices means for them and how they are applying this idea to transform service provisioning inside the organization. Finally, I’ll call out three things that every company can do — starting today — to help make progress on the path toward empowering teams to build great services quickly and safely.

Microservices as Toolmaking

Many companies are working on ways to improve both the speed and safety of the internal services they build and deploy for the organization. The latest way to talk about this work is to call it “microservices.” No matter where I go, people say they are “doing microservices.” What that means for each company, however, can vary.

Some are using small, lightweight services to improve their “time-to-market” strategy — they want to get services up and running sooner. Others say their primary goal is to re-engineer their existing system and reduce technical debt. Several, however, tell us that what they are trying to do is build a kind of application-level “infrastructure” — a toolkit of services that can be used as building blocks and assembled into solutions, then quickly deployed as products to meet specific needs and solve particular challenges. In short, they want to add not just speed but also agility — the ability to change course easily — to their IT offerings.

General Principles of Microservices

I’ve come to call this work “toolmaking.” And the general principles I hear from companies as I ask them about this work all look somewhat similar:

  • Build services that do one thing well.
  • Assume the output of one service will be the input of some other, as yet unknown, service.
  • Design and build software to be tried early, ideally within weeks.
  • Use tools to lighten programming tasks.

These are often called properties or characteristics of microservices. However, if you’ve been around long enough, you probably also recognize that this list is almost identical to one published in 1978 by Doug McIlroy in the Bell System Technical Journal. It has become known as part of the Unix Philosophy.

It turns out the kinds of things we want to do at a company-wide scale, in some cases across the internet itself, are the same things the authors of Unix (and its most popular descendant, Linux) were applying to create agile software for a single machine. The scale is larger, but the principles are the same. And that’s good news because we have over 40 years of experience to fall back on when we work to apply these principles to our own organizations.

A Useful Definition of Microservices

Before getting too excited about Unix principles and how they can be applied today, it makes sense to step back a bit and talk about a useful definition of the term. There are lots of definitions out there (including the one my co-authors and I provided in the 2016 O’Reilly book “Microservice Architecture”). For this series, I’ll share one that I think captures important elements of both microservices and the DevOps environment in which they operate.

“Microservices are loosely-coupled components running in a highly engineered system.”

The two key elements in this definition are “loosely-coupled” and “highly engineered.”

Loosely-Coupled Components for Speed

How do you know your component is loosely-coupled? Probably the best answer I’ve gotten is that you can release that component into production without having to ask permission or arrange a meeting to coordinate with other departments within your company. This ability to release without a high degree of coordination is one of the keys to improving your time-to-release speed. That means you design the components to always be backward compatible with previous editions. You don’t require other teams to adopt the new release on your schedule and, instead, allow them to decide on their own when they are ready to upgrade to your latest release. Of course, that also means you don’t automatically remove old editions of the component as soon as you release the new one. And that leads to the second element: highly engineered.
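To make “backward compatible” concrete, here is a minimal sketch in Python of what that discipline looks like at the payload level. The field names and version labels are hypothetical; the point is that a new release only adds fields, never removes or renames them, and older consumers read only the fields they already know:

    # A hypothetical Customer payload evolving without breaking old consumers.
    import json

    def render_customer_v2(customer):
        """v2 adds 'loyalty_tier' but keeps every v1 field unchanged."""
        return json.dumps({
            "id": customer["id"],      # v1 field: never removed or renamed
            "name": customer["name"],  # v1 field
            "loyalty_tier": customer.get("loyalty_tier", "none"),  # new in v2
        })

    def read_customer_v1(payload):
        """An old consumer reads only the fields it knows, ignoring the rest."""
        data = json.loads(payload)
        return data["id"], data["name"]

    # The v1 consumer keeps working against the v2 release, so its team can
    # upgrade on its own schedule:
    payload = render_customer_v2({"id": 42, "name": "Ada", "loyalty_tier": "gold"})
    print(read_customer_v1(payload))  # (42, 'Ada')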

Highly-Engineered System for Safety

The phrase “highly engineered” in this definition refers to the DevOps pattern of automating as much as possible in the build-test-release cycle. Many of the companies I talk to who are good at microservices have invested quite a bit of time and energy in getting good at DevOps, too. In fact, Thoughtworks’ Martin Fowler has famously said DevOps is a prerequisite for doing microservices.

So, engineering your build process, your testing regime and your release cadences all provide a high degree of safety to your service infrastructure. Essentially, automation means predictability and consistency — two key aspects of a resilient system. And you need a resilient, stable system in which to deploy your microservices. This ability to safely release new code into production is what makes independent releases possible.
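As a toy illustration of what “engineered” means here, consider the shape of an automated build-test-release script. The stage names and make targets below are placeholders rather than a prescription (real pipelines live in CI tooling), but the essential property is the same: every release walks identical gates in identical order, and any failure stops the line.

    # pipeline.py: the shape of an engineered build-test-release cycle.
    # Purely illustrative; the make targets are stand-ins for your own tooling.
    import subprocess
    import sys

    STAGES = [
        ("build",   ["make", "build"]),
        ("test",    ["make", "test"]),
        ("package", ["make", "package"]),
        ("deploy",  ["make", "deploy-staging"]),
    ]

    for name, cmd in STAGES:
        print(f"--- stage: {name}")
        if subprocess.run(cmd).returncode != 0:
            # Stopping the line is the point: no overrides, no skipped gates.
            sys.exit(f"stage '{name}' failed; release halted")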

That sounds simple, but of course, it is not. It is a big job. But there are some things you can do today to start to transform your existing coding and deployment practices into those used by companies that are good at creating an agile and stable microservice landscape.

Three Things You Can Do Today

There are lots of things you’ll need to do before your company is good at designing, building and deploying microservices. The good news is that you don’t have to learn them all before you begin. So, with that in mind, here are three things you can start working on today to help your organization start on the journey of adopting a microservice practice.

Implement Build Pipelines

One of the ways you can improve not just the safety of your releases but also the quality of your services is to implement a Continuous Integration/Continuous Delivery (CI/CD) pipeline. That means automating the build process from the time someone on the team checks code into source control through to the time it gets staged for release into production. For this element, I’ll focus on one small aspect of this process: including code quality and compliance checks in the build.

Good build tools allow teams to automate more than just building and testing — they also make it possible to automate many aspects of the code-review process. One example of this is checking for code complexity (too many lines of code per function, circular references, etc.). Another great example of automated code review is validating library dependencies (are you using an outdated library?) and data schemas (is this the most recent schema for the Customer Object?). Teams should be able to inject these kinds of quality and validity checks into the build process and signal warnings (the build is yellow because this library will be deprecated in three months) or build failures (you need to upgrade to Customer Object v3 before this build will pass).
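As a sketch of what such a check can look like, here is a small complexity gate that could be wired into a CI step. It is illustrative only: the 50-line budget is an assumed team policy, not a standard, and it relies on Python 3.8+ for end-line information in the ast module.

    # check_complexity.py: fail the build when any function grows too large.
    # A hypothetical quality gate; the threshold is an assumption, not a standard.
    import ast
    import sys

    MAX_LINES_PER_FUNCTION = 50  # assumed team policy

    def oversized_functions(path):
        """Yield (name, line_count) for functions exceeding the budget."""
        with open(path) as f:
            tree = ast.parse(f.read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1  # Python 3.8+
                if length > MAX_LINES_PER_FUNCTION:
                    yield node.name, length

    if __name__ == "__main__":
        failures = [f"{path}: {name} is {length} lines"
                    for path in sys.argv[1:]
                    for name, length in oversized_functions(path)]
        if failures:
            print("Functions exceed the complexity budget:")
            print("\n".join(failures))
            sys.exit(1)  # a nonzero exit code is what turns the CI stage red

The same pattern extends to the dependency and schema checks mentioned above: the build runs a script, and its exit code decides whether the stage is a warning or a failure.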

Automating the build — including code-review — can greatly improve the quality and the safety of the component before it even makes it through the test cycle and into production. And that’s the next thing you can do today.

Engineered Deployments

Another thing you can do today is start automating your test-to-deployment cycle. Just like the build process, automation can improve the consistency and stability of the code you release into production. And this means more than just better bench or unit testing. It means working to test “higher up the stack” and, wherever possible, testing not just the “happy path” (the way things should work) but also the “sad path” — the common ways things go wrong in production.

Automatic unit tests during the build are just the first step. You also need Acceptance or Behavior tests to confirm the change meets the business goal. And no component should be released into production without some level of integration testing. Most of the companies I’ve worked with rely on service virtualization tools to power their integration testing. That means they don’t need to touch production services to know how their new component will behave once it is released.

And service virtualization also makes it possible to manufacture “bad data” or “errant messages” to test the resilience of the component you are about to release into production. The testing of the “sad path” is an essential part of building high quality into your components without increasing the risk of independent releases of microservices.
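To show the idea rather than any particular product, here is a minimal throwaway stub in the spirit of service virtualization, built only on the Python standard library. The port and failure mix are assumptions for illustration: it alternates happy-path responses with truncated JSON and simulated outages so the component under test can prove it degrades gracefully.

    # sad_path_stub.py: a throwaway virtual service for resilience testing.
    # Illustrative only; real service-virtualization tools do far more.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SadPathHandler(BaseHTTPRequestHandler):
        calls = 0

        def do_GET(self):
            SadPathHandler.calls += 1
            turn = SadPathHandler.calls % 3
            if turn == 0:
                self.send_response(503)  # simulate an upstream outage
                self.end_headers()
            elif turn == 1:
                self.send_response(200)  # errant message: truncated JSON
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(b'{"customer": ')
            else:
                self.send_response(200)  # happy path, for contrast
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(json.dumps({"customer": "ok"}).encode())

    if __name__ == "__main__":
        # Point the component under test at http://localhost:8099 instead of
        # the real dependency, then assert that it fails gracefully.
        HTTPServer(("localhost", 8099), SadPathHandler).serve_forever()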

Reduce Work in Progress (WIP)

Probably the most important thing you can start doing right now to improve the overall speed and safety of your system is to reduce the work-in-progress queue of your software updates. The key metric here is the elapsed time from the moment someone logs a change request (bug or feature) to the time the fix is up and running in production. Some call this the time it takes to turn “feedback into feature” or going from “idea to install.” Changing this value can have more impact than just about anything else you can do.
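The measurement itself is deliberately simple. This sketch, with made-up timestamps, is just that definition in code:

    # Lead time from "idea" to "install", with hypothetical timestamps.
    from datetime import datetime

    logged   = datetime(2018, 1, 8, 9, 30)   # change request opened
    deployed = datetime(2018, 3, 29, 17, 0)  # fix running in production
    print((deployed - logged).days, "days from idea to install")  # 80 days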

Many companies I work with — especially the large ones — have a rather rigorous release schedule. Some have formal production releases as few as four times per year. This means you only have four chances in 2018 to “get it right” when it comes to modifying your system. That also means every release is a high-stakes gamble.

Usually, the release packages are quite large, too. You might have as many as 150 changes in a quarterly release. Count the ways any two of those changes might conflict and you get more than 10,000 possible interactions that could go wrong — just within that single release! And failure can be very costly. Backing out one of these changes can be incredibly disruptive, too.

However, by reducing your release cycle to something like two weeks, you can greatly reduce the risk and improve your chances of success. For example, a bi-weekly release might contain only five or 10 changes. That reduces the odds of bugs within a release and improves the odds of finding any release conflicts quickly. Bi-weekly releases even mean backing out a release is less disruptive, since you’ll have another chance to get it right in just a couple of weeks.
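The arithmetic behind that risk reduction is worth seeing. Counting pairwise interactions between changes, as above, makes the quadratic payoff plain:

    # Pairwise interactions between changes grow quadratically with batch size.
    from math import comb  # Python 3.8+

    quarterly = comb(150, 2)  # 11,175 ways two changes in one release can conflict
    biweekly  = comb(10, 2)   # 45
    print(quarterly, biweekly, round(quarterly / biweekly))  # 11175 45 248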

Tom and Mary Poppendieck, the authors of “Implementing Lean Software Development” (2006, Addison-Wesley), sum it up nicely: “How long would it take your organization to deploy a change that involved just one line of code? Do you do this on a repeatable, reliable basis?”

It may be counterintuitive, but releasing more often reduces the risk and improves the likelihood of success of each subsequent release. This adds up to increased confidence in your teams, better understanding of the build process, and more trust in the IT infrastructure you are building.

That’s Services, What About APIs?

Adopting microservice patterns for designing and building your components is one of the first things you can do to safely speed your company’s digital transformation. But this is just the start. Once you have stable, reliable services up and running in your organization, you need to make them accessible in a way that is just as agile and reliable — between teams and even outside your organization, to partners and other important service consumers. That means you need APIs. So stay tuned: that’s the subject of the next installment in this series.

CA Technologies sponsored this post.

Feature image via Pixabay.
