“I think that there’s a lot of value that people have that is out in the ecosystem, that we don’t necessarily want to, or even believe that is necessary to, tie to a particular operating system,” stated Brendan Burns, Microsoft’s partner architect and one of the original lead developers of Google’s Kubernetes, speaking with The New Stack just ahead of Microsoft’s general release of Kubernetes support on its Azure Container Service last week.
Burns’ assertion is the type of statement you’d expect to find in The New Stack from a fellow of the Linux Foundation or the Cloud Native Computing Foundation. But what is uniquely important about Microsoft’s unilateral declaration of decoupling reveals itself in layers: first, there’s who said it; next, where he came from to get where he is now; and then there is the context in which his statement was made.
“We want to just make sure that people are able to use that technology if they are also deploying their applications in a .NET or Windows OS environment,” Burns continued. “We don’t want to make it be an exclusive choice. We don’t want to make people have to decide. We want to allow people to find the place that works for them.”
A Difference without a Distinction?
Microsoft’s current policy, as stated by multiple company officials including Burns, is to enable a variety of workload deployment options simultaneously, all of which are supported by Azure as the underlying platform. It’s a way of portraying the company as the agent of choice, as well as a curious opportunity to cast it historically as always having been the “choice” company.
Yet its hiring last June of Burns away from Google, where he was a co-creator of the Kubernetes container orchestration engine, and its subsequent empowerment of Burns as an advocate of heterogeneity in the data center and Microsoft as its facilitator, lacks the telltale components of smoke and screen that characterized the company in the past.
“Before I got here, it was clear to everyone that [open source software] and Linux were going to be really important components to Azure,” Burns told The New Stack in a recent e-mail. “My experience brings to focus how you simultaneously engage in public cloud and upstream open source community and what that looks like. We have a number of vibrant open source communities — Visual Studio Code, for example. How you participate in both — building your product and participating in the OSS community — is a bit unique, and we’re scaling that out. For example, one of my colleagues is working on the Kubernetes 1.6 release to ensure it’s a high-quality release for Azure, but this work also makes Kubernetes as a whole stronger.”
There’s an objective here that’s not particularly easy to explain, even for someone at Microsoft… even for someone who helped build the container ecosystem, and then came into Microsoft. On the surface, the idea is to provide developers with more choices when developing services deployable to a public cloud — in this case, obviously, to Azure. Underneath, the strategy is to make the differences between the platforms any of the following: a) negligible; b) less than obvious; c) superfluous; d) a matter of the developer’s personal taste. In any of these situations, perhaps Microsoft can’t obtain an advantage over a competitor, but the reverse also holds true.
Burns put it to us this way: “Most customers we talk to are interested in operating in a hybrid and/or multi-cloud world, and are looking to container orchestration. We are excited to go to open source to enable a mix of services that work for them.”
The Difference the Developer Can See
In the spring of 2015, an up-and-coming Google engineer named Brendan Burns gave one of the first demonstrations of the basic principle of spinning up a Kubernetes cluster from a Linux command line:
That demo involved spinning up a working instance of NGINX. From a Linux command line, Burns instantiated an etcd container to act as a simple storage component. Once it was running, he issued a docker run command that launched the master components for the application, and another docker run command to launch the service proxy enabling communication with those components. From the developer’s perspective, it was a three-step process, but in actuality, several automated steps took place in the background, including the creation of the kubelet — the pod-side container manager working on Kubernetes’ behalf. That kubelet spawned the necessary API and scheduler containers automatically, and bound them to the pod. From there, the kubelet would be responsible for managing the health of the containers in its charge.
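The three visible steps can be sketched roughly as follows. This is a hedged reconstruction of the era’s single-node bring-up process, not a transcript of the demo; the image names, tags, and flags shown here are illustrative:

```shell
# Step 1: start etcd as the cluster's simple storage backend
docker run -d --net=host --name=etcd \
    quay.io/coreos/etcd

# Step 2: start the kubelet, which bootstraps the master
# components (API server, scheduler) as containers in a pod
docker run -d --net=host --pid=host --privileged \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gcr.io/google_containers/hyperkube \
    /hyperkube kubelet --api-servers=http://localhost:8080 \
    --config=/etc/kubernetes/manifests

# Step 3: start the service proxy so components can communicate
docker run -d --net=host --privileged \
    gcr.io/google_containers/hyperkube \
    /hyperkube proxy --master=http://localhost:8080
```

Only steps 1 through 3 are typed by the developer; the API server and scheduler containers in step 2 appear automatically, spawned by the kubelet from its manifest directory.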
(Burns later clarified, saying that his demo involved the creation of standard Linux containers from Visual Studio Code, not Windows containers.)
With the communications process established, Burns could invoke simple, verb-like commands against the Kubernetes API, passing them to the orchestrator through the kubectl command-line interface. Through that channel, Burns could tell Kubernetes to invoke and run NGINX.
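With the API reachable, running NGINX comes down to a couple of kubectl commands along these lines (the names and port are illustrative, and in early Kubernetes releases kubectl run created a replication controller rather than a deployment):

```shell
# Ask Kubernetes to schedule an NGINX container
kubectl run nginx --image=nginx --port=80

# Confirm the pod has been created and is running
kubectl get pods

# Expose the replication controller as a service
kubectl expose rc nginx --port=80
```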
Fast-forward 19 months. There were several changes to the Kubernetes API in the intervening time, but none that would render the orchestrator unrecognizable had someone been asleep that long. A partner architect with Microsoft named Brendan Burns demonstrated how to invoke and operate a Kubernetes-managed pod using resources from Azure:
The command-line tool in Azure is az, so Burns accessed Azure Container Service through az acs. An orchestrator is spun up with the command az acs create, and it’s notable that the attribute --orchestrator-type is wide open: this is where the operator may specify Kubernetes, as opposed to Swarm or Mesosphere’s Data Center Operating System, the other orchestrators Azure also supports. After directing Kubernetes to gather the necessary credentials for running his containers on Azure, he could then communicate with kubectl. (Microsoft’s documentation goes into further detail.)
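Condensed, the Azure flow looks something like this; the resource group and cluster names are illustrative placeholders, not values from the demo:

```shell
# Create a resource group to hold the cluster
az group create --name=k8s-demo --location=westus

# Spin up an ACS cluster with Kubernetes as the orchestrator
# (swarm and dcos are the other accepted values)
az acs create --orchestrator-type=kubernetes \
    --resource-group=k8s-demo --name=k8s-cluster \
    --generate-ssh-keys

# Pull the cluster credentials so kubectl can address it
az acs kubernetes get-credentials \
    --resource-group=k8s-demo --name=k8s-cluster

# From here, ordinary kubectl commands work against Azure
kubectl get nodes
```

The point of the wide-open --orchestrator-type attribute is that swapping Kubernetes for Swarm or DC/OS changes one value, not the workflow.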
But from there, instead of sticking with the command line, Burns jumped into the Visual Studio Code (VS Code) console. If you’re somewhat familiar with the long-standing Visual Studio suite of Windows-based development tools, forget everything you think you know — besides the qualitative similarities, VS Code is a very different world.
In this later demo, the Terminal pane gives Burns a fuller connection to Azure’s command line — more efficient than invoking az acs each and every time. But he didn’t have to pull up the Terminal pane whenever he needed to issue a command to Kubernetes on Azure.
Instead, using the search line at the top of the editor, he issued commands to Kubernetes by name (not through kubectl), as though this were a web browser and he were looking up a page on Google. The search line acts like a glossary here, showing exactly how Kubernetes is addressable, and letting the developer effectively browse through the options.
There’s an air of serverless-ness in using Kubernetes on Azure via VS Code. With the exception of initializing the pod at the very beginning (and this could conceivably change soon enough), the cohesion between the functions within a single console is reminiscent of how comfortable programming used to be, when the machine you were programming was twelve inches away and the tools you used were slick, robust, and well-tested. The link to Git, the main source code, the JSON deployment and configuration code, and the Dockerfile for constructing the container, are tucked away in designated compartments within the VS Code console — rather than in various windows splashed across the desktop like a paintball tournament.
This Won’t Hurt a Bit
It’s not lost on the observant viewer that Burns’ VS Code demo took place on a Linux desktop. What was glossed over, however, is this fairly important bit of information: The containers, built using Docker and orchestrated using Kubernetes, utilize Microsoft’s Windows Nano Server, not a minimized Linux. If Shakespeare is still out there looking for the proverbial rub, this is it.
“Previously — and this is true for people on Linux and Windows, honestly — the things that enabled them to build cloud-native apps were homegrown and bespoke,” Burns told The New Stack. “They would effectively build their orchestration system, but it would be tightly integrated with their applications.
“Now with containers, we have an opportunity to build an orchestration system that everyone can use. You don’t have to be the scale of a [Microsoft] Exchange team or an Xbox team to build your own orchestration layer. By building these containers, you take advantage of the technology to be cloud-native, without having to build a lot of it. That’s the transformation that’s happening, allowing these small- and medium-sized teams to focus on their application, and yet still reap the benefits of a cloud-native approach.”
This is the market that Microsoft is targeting with Azure Container Service and with orchestrator choices: not the hyperscale, cross-cloud complexes where Linux already abounds, but the smaller enterprises that la nouvelle pile has not yet touched — the territory Microsoft has not already lost to the forces of evolution.