Compared with other enterprise infrastructure providers, Microsoft was in a delicate position when it came to responding to the container challenge. It already had vested interests in systems affected by the growing use of containers, including an operating system (Windows Server), a hypervisor (Hyper-V), a private cloud offering (Azure Stack), and a public cloud (Azure).
Containers come across as both a threat and an opportunity to platform companies, and every player in the business of delivering infrastructure responded differently to the containerization wave. While VMware took longer than the competition, Google, Microsoft, and Red Hat moved fast in embracing this new wave of computing. Google focused squarely on container management by open sourcing the Kubernetes container orchestration engine and offering a managed version of it in its public cloud. Red Hat realized that OpenShift as a traditional PaaS was not getting enough traction, so it changed almost everything, including the branding and the underlying technology stack, to pivot to Kubernetes.
Reflecting the new culture at the company, Microsoft was quick to strike a deal with Docker, Inc., making Docker the default interface for Windows-based containers. It then worked hard to make containers an integral part of the entire stack. Features such as Windows containers, Hyper-V containers, an integrated Docker Engine in Windows Server 2016, Azure Container Service, Visual Studio Tools for Docker, the container-optimized Windows Nano Server, and nested virtualization in Azure are signs that Microsoft is going all out to make containerization a first-class citizen.
Microsoft’s most strategic move so far has been hiring Brendan Burns, an ex-Googler who was part of the founding team of Kubernetes. The hire raised a few eyebrows, including among those at Google building the Google Cloud Platform, Azure’s key competitor. But there was not much resentment, because of Brendan’s association with Kubernetes, an open source project that was gaining tremendous popularity in the community. The Kubernetes community, including the folks at Google, was hoping to see Microsoft officially embrace Kubernetes, and Microsoft did not disappoint. Within months of Brendan’s move to the Azure Compute team, Redmond made Kubernetes available on Azure. That almost singled out AWS, which had built a proprietary container management platform, Amazon EC2 Container Service, on top of EC2.
Brendan’s key deliverables include Windows integration with Kubernetes, which will be a big deal for Microsoft customers. They will be able to mix and match Linux and Windows workloads, seamlessly managed by Kubernetes, in a heterogeneous environment running a cluster comprising both Linux and Windows nodes. But the underlying networking stack of Windows makes this a tough integration, and it will be interesting to see how the challenge is tackled. Apart from bringing Kubernetes to Azure, Brendan is also busy shaping the overall containerization strategy at Microsoft.
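To sketch how such a mixed cluster would typically be targeted, a pod spec can pin a workload to Windows or Linux nodes with a node selector. The `beta.kubernetes.io/os` label shown here is the one Kubernetes used for this purpose at the time; the pod and image names are purely illustrative:

```yaml
# Hypothetical pod spec pinning an IIS-based workload to Windows nodes
# in a cluster that mixes Linux and Windows worker nodes.
apiVersion: v1
kind: Pod
metadata:
  name: iis-sample
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows   # use 'linux' to target Linux nodes
  containers:
  - name: iis
    image: microsoft/iis             # Windows Server Core-based image
    ports:
    - containerPort: 80
```

Linux workloads in the same cluster simply omit the selector or set it to `linux`, and the scheduler keeps each container on a compatible node.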
The latest launch, Azure Container Instances (ACI), is a brilliant move from Microsoft that has Brendan’s mark all over it. ACI lets developers launch “serverless containers” without ever having to deal with the virtual machines and operating systems that act as hosts to containers. In just two steps, developers can spin up a container in Azure. Though there are container-optimized operating systems like CoreOS, Atomic Host, and Windows Nano Server, they still have to be provisioned as VMs before they can run containers.
With ACI, developers need never worry about the VM or the host OS running their application, which is the prime reason Microsoft is positioning ACI as “serverless containers.” In ACI, you cannot SSH or RDP into the host. The workflow is simple: pull a container image from the registry and run it as long as you need.
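A minimal sketch of that workflow with the Azure CLI might look like the following. The resource group and container names are hypothetical, and the exact flags reflect the `az container` commands available around ACI’s launch:

```shell
# Create a resource group to hold the container instance (names are made up).
az group create --name aci-demo --location westus

# Pull a public image and run it as a serverless container -- no VM to manage.
az container create \
  --resource-group aci-demo \
  --name hello-aci \
  --image nginx \
  --cpu 1 \
  --memory 1.5

# Check status, fetch logs, and delete the instance when done.
az container show --resource-group aci-demo --name hello-aci
az container logs --resource-group aci-demo --name hello-aci
az container delete --resource-group aci-demo --name hello-aci
```

Billing stops when the container terminates or is deleted, which is what makes the model serverless in practice.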
The pricing model of ACI is aligned with the serverless philosophy. Each container creation request is charged at $0.0025. Memory duration is metered from the time the container begins executing until it terminates, at $0.0000125 per GB-second. CPU usage is charged at $0.0000125 per core-second from the time of container creation. Each ACI instance may have up to a maximum of 3.5 GB of RAM and 4 CPU cores. For example, if you launch an ACI instance with 1 GB of RAM and 1 CPU core for five minutes every day over a 30-day month, your bill works out to $0.30, which is pretty affordable.
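The arithmetic behind that example can be sketched as follows, using the per-request, per-GB-second, and per-core-second rates quoted above and assuming a 30-day month:

```python
# Back-of-the-envelope ACI bill: 1 GB / 1 core, running 5 minutes a day.
CREATE_FEE = 0.0025    # $ per container creation request
MEM_RATE = 0.0000125   # $ per GB-second
CPU_RATE = 0.0000125   # $ per core-second

days = 30
seconds = 5 * 60 * days            # 9,000 seconds of execution per month

creations = days * CREATE_FEE      # one creation per day: $0.075
memory = seconds * 1 * MEM_RATE    # 1 GB of RAM: $0.1125
cpu = seconds * 1 * CPU_RATE       # 1 CPU core: $0.1125

total = creations + memory + cpu
print(f"${total:.2f}")             # roughly $0.30 for the month
```

Scaling any of the three dimensions (creations, memory, cores) scales its line item linearly, which makes short-lived bursty workloads very cheap to run.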
In many ways, ACI is Microsoft’s answer to AWS Lambda. Though Azure Functions is the comparable alternative to Lambda, Microsoft shipped it as a quick response to the serverless offerings from the competition; it is a retrofit on Azure WebJobs, a service created for a similar but distinct use case. ACI is a more elegant form of serverless computing because it lets developers bring code plus configuration in the form of a Docker image. Unlike Lambda, ACI is not confined to a predefined set of languages and runtimes.
Bring Your Own Container
The philosophy of bring-your-own-container has been picking up steam recently. Google added managed VMs to its PaaS through the App Engine flexible environment, and Amazon supports single- and multi-container deployments in AWS Elastic Beanstalk. But Azure Container Instances bring true serverless capabilities to container-native applications. Developers can encapsulate everything from code to configuration in a Docker container image and schedule it for periodic execution, which covers tasks such as running configuration management scripts, backup jobs, build automation, and queue processing.
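As an illustration of the bring-your-own-container idea, a periodic backup task might be packaged roughly like this; the script and config file names are hypothetical:

```dockerfile
# Hypothetical image bundling a backup script with its configuration.
FROM python:3-alpine

# Bake both code and configuration into the image itself.
COPY backup.py /app/backup.py
COPY backup.conf /app/backup.conf

# The container runs the task once and exits; ACI bills only for the
# seconds it executes, which suits periodic jobs well.
CMD ["python", "/app/backup.py", "--config", "/app/backup.conf"]
```

Pushed to a registry, the same image runs unchanged on a laptop, in ACI, or in any orchestrator, which is the portability the bring-your-own-container model promises.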
ACI is not a replacement for full-fledged container orchestration platforms like Docker Swarm, Mesosphere DC/OS, HashiCorp Nomad, and Kubernetes. If you want to run a complex microservices application that demands advanced features like persistence, service discovery, canary releases, auto-scaling, self-healing, monitoring, and logging, Azure Container Service is the better bet. Think of ACI as an augmented serverless platform with support for containers: instead of compressing and uploading code snippets to AWS Lambda or Azure Functions, you can take advantage of Docker’s tooling and debugging support to test your code locally before running it in the cloud.
ACI is proof that Microsoft is serious about containers and is innovating faster than the competition. The technology will become one of the key pillars of the Azure Compute platform. I am confident that ACI will find its place in Microsoft’s edge computing platform, Azure IoT Edge, and that it will eventually become available in Azure Stack as a supported compute layer.
Feature image via Pixabay.