
Lightning Fast Container Provisioning with Microsoft’s Azure Container Instances

Aug 18th, 2017 2:00am

A couple of weeks ago, Microsoft previewed Azure Container Instances (ACI), a serverless container environment for running containers without the need to manage virtual machines. While there has been some debate about associating the term “serverless” with ACI, the label is justified by its ultra-fast, single-command provisioning of containers.

Serverless environments typically have three attributes:

  1. Per-second billing based on the execution time
  2. Transparent resource provisioning through infrastructure abstraction
  3. Event-driven invocation

ACI already meets the first two attributes; the third, event-driven invocation, is expected in the near future.

Lightning Fast Container Provisioning

Almost all the container environments in the public cloud require two steps to get started: provision the host VM and then run the container based on the specified image. ACI reduces this to one step, bypassing the provisioning of the host VM.

The following two commands create an Nginx container and expose it on the public Internet.
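
A minimal sketch with the Azure CLI (the resource group name aci-demo and container name nginx-demo are placeholders; exact flags may differ in the preview):

  # Create a resource group, then launch the container with a public IP.
  az group create --name aci-demo --location westus
  az container create --resource-group aci-demo --name nginx-demo --image nginx --ip-address public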

Behind the scenes, these commands involve creating the VM, pulling the container image from Docker Hub, running the container and exposing it through the public IP address.

Based on this use case of exposing the Nginx web server, I benchmarked the startup time of launching it in three different environments: Docker on Mac, Azure Container Instances, and Azure VMs. Both the ACI and Azure VM tests ran in the US West region.
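
For reference, here is a minimal sketch of how such a measurement can be scripted for the ACI case (the resource group and container names are placeholders, and the query path assumes the preview CLI reports the address under ipAddress.ip):

  start=$(date +%s)
  az container create -g aci-demo -n nginx-bench --image nginx --ip-address public
  ip=$(az container show -g aci-demo -n nginx-bench --query ipAddress.ip -o tsv)
  # Poll until Nginx answers, then report the elapsed wall-clock time.
  until curl -s -o /dev/null "http://$ip"; do sleep 1; done
  echo "Startup took $(( $(date +%s) - start )) seconds"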

The local Docker environment running on a Mac pulled the Nginx image from Docker Hub and ran it within 20 seconds. This timing depends on the available bandwidth and the response from Docker Hub, which occasionally experiences latency.

The Azure VM test is based on Bitnami’s pre-configured Nginx image, which I used to bring parity to the benchmark by avoiding the installation and configuration of Nginx. Not surprisingly, it took roughly five minutes from sending the request to the Azure API until the web server responded to a curl request. Almost 30 percent of that time was spent allocating the public IP address and assigning it to the VM.
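
The VM leg of the benchmark follows the same pattern; the image URN below is a placeholder for Bitnami’s Nginx listing in the Azure Marketplace, and the same curl polling loop measures the time to first response:

  az vm create -g aci-demo -n nginx-vm --image <bitnami-nginx-urn>
  az vm show -g aci-demo -n nginx-vm -d --query publicIps -o tsv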

Finally, the ACI instance took 45 seconds, only about twice the time of Docker on Mac, to provision the container and expose it through a public IP. The key thing to notice is that the 45-second window includes everything from provisioning the container to assigning the public IP. This is by far the fastest container provisioning I have seen on any public cloud.

If you are interested in running the benchmark yourself, the scripts are available on GitHub.

Microsoft has made many optimizations to the ACI VMs. Since they are purpose-built to run containers, with no SSH access to the host OS, the Azure Compute team has turned all the knobs to get the best boot times. I expect some of these learnings and optimizations to make their way to Azure IaaS, which would dramatically reduce VM startup times. Having experienced the original Web Role and Worker Role provisioning in the early days of Windows Azure, I must say that ACI boot times are lightning fast. Azure has certainly come a long way.

The Kubernetes Connection

Azure Container Instances are meant for different use cases than running containers in traditional orchestration engines. ACI is designed as an augmented Functions as a Service (FaaS) platform. It falls somewhere between Azure Functions, Azure’s serverless FaaS platform, and the company’s Containers as a Service (CaaS) offering, Azure Container Service (ACS).

Unlike CaaS, ACI lacks the control plane to orchestrate multiple container instances. But that’s a deliberate decision by Microsoft to keep the use cases for ACI separate from ACS. The message from Microsoft is clear: if you want to run microservices-based workloads composed of multiple containers, go to ACS; if you need one-time execution of specialized containers that carry their own code and configuration, use ACI.

Interestingly, ACI is modeled around the Kubernetes Pod, which is designed to run multiple stateless containers that share the same context, including Linux namespaces, cgroups, and the networking stack. All the containers running in a Kubernetes Pod or an ACI container group communicate with each other using standard inter-process communication.
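
As an illustration of that shared context, here is a hypothetical two-container Pod in which a busybox sidecar reaches Nginx over localhost, since both containers share one network namespace:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: shared-context-demo
  spec:
    containers:
    - name: web
      image: nginx
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "while true; do wget -qO- http://localhost; sleep 10; done"]
  EOF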

With this design philosophy and motivation, the Azure Compute team has built a bridge between Kubernetes and ACI called the ACI Connector for Kubernetes. The connector mimics a kubelet, the agent that runs on every Node within a Kubernetes cluster, by registering ACI as a Node with unlimited capacity and dispatching the creation of Pods as container groups in Azure Container Instances.
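
Installing the connector is itself a standard Kubernetes deployment. A sketch, assuming you have edited the manifest from the connector’s repository with your Azure service principal credentials (file name assumed):

  kubectl create -f aci-connector.yaml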

This is one of the best experiments that extend Kubernetes functionality to deal with traditional virtualization infrastructure.

Technically, the ACI infrastructure does what a Node in the open source Kubernetes container orchestration engine does — own the Pod lifecycle. Extending this functionality, the ACI connector registers ACI as a virtual Node. When a Pod definition has an explicit toleration for the aci-connector Node, the scheduler delegates the task to ACI, which takes over the provisioning and scheduling job. The ACI connector updates the Kubernetes Master like any other Node in the cluster.

This powerful functionality enables developers and operators to use kubectl to control ACI instances.

Here is a sneak peek at the process involved in integrating ACI with Kubernetes.

After installing the Connector, ACI starts showing up as one of the Nodes in the Kubernetes cluster. Notice the subtle difference in the version reported by ACI.
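
A quick check makes the virtual Node visible:

  kubectl get nodes    # the aci-connector Node appears alongside the cluster's VM Nodes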

When we launch an Nginx Pod with the attribute nodeName: aci-connector, it gets scheduled like a normal Pod, but within the ACI environment. From the standard output of kubectl get pods, we cannot tell that this is not a regular Pod.
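
That launch looks something like the following sketch, which targets the virtual Node by name:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx
  spec:
    containers:
    - name: nginx
      image: nginx
    nodeName: aci-connector
  EOF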

However, when we inspect it, it reveals some interesting facts. The screenshot below shows the output of the kubectl describe pod command:

Notice the IP address of the Pod. It is different from the regular IP address assigned by the Kubernetes networking stack; it is the public IP address assigned by Azure to the ACI instance. This becomes evident when we inspect the ACI instance with the Azure CLI.
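
A sketch of that cross-check, comparing what Kubernetes and Azure report for the same workload:

  kubectl describe pod nginx | grep '^IP'   # the Pod's address, as Kubernetes reports it
  az container list -o table                # the same instance, as Azure sees it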

Finally, without exposing a service to access the Pod, we can directly access the public IP address of the ACI instance. Let’s use cURL to send a GET request to Nginx.
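
With the public IP address from the previous step (shown here as a placeholder):

  curl -i http://<aci-public-ip>    # returns the Nginx welcome page straight from the ACI instance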

The output confirms that we are able to access the ACI instance directly. We can also take the Kubernetes Service route by creating a NodePort that routes traffic to ACI.
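
A minimal sketch of that alternative, exposing the Pod through a NodePort Service:

  kubectl expose pod nginx --type=NodePort --port=80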

Next Friday, I’ll post a step-by-step guide to deploying a multi-cloud application by integrating ACI with Google Container Engine. Stay tuned!

Feature image by Romain Peli on Unsplash.

TNS owner Insight Partners is an investor in: Docker.