
A Closer Look at Microsoft Azure’s Managed Kubernetes Service

Dec 14th, 2017 5:00am
Feature image via Pixabay.

Microsoft’s initial version of Azure Container Service (ACS), its Containers-as-a-Service (CaaS) offering, provided a choice of orchestration engines in the form of Mesosphere DC/OS, Docker Swarm, and Kubernetes. But none of them was truly managed, which meant that customers had to maintain the environment themselves, including patching, upgrading, scaling, and managing the clusters. In many ways, Microsoft only automated the initial setup and configuration of the container orchestration tools without really managing the post-deployment phase.

A couple of years after announcing the initial version of ACS, Microsoft has joined the ranks of cloud providers with a managed Kubernetes service. Google was the first to deliver Google Kubernetes Engine (GKE), followed by IBM with its IBM Cloud Container Service. Microsoft became the third major cloud provider to offer managed Kubernetes as a service. It has also rebranded Azure Container Service as AKS to prominently highlight the inclusion of Kubernetes.

Apart from the branding, what has changed since the initial version of ACS is the core architecture of the Azure CaaS offering. In earlier versions, customers had visibility into the Kubernetes master servers and nodes. They had to specify the number of master servers included in the cluster. Once deployed, administrators could SSH into one of the master nodes to take control of the deployment. With managed Kubernetes on Azure, Microsoft no longer exposes the master servers. Since Azure manages the multi-tenant master servers, they are completely hidden from customer deployments. What customers end up seeing in their subscriptions is only the set of nodes running their containerized workloads.

Nodes in Kubernetes are not as critical as the master servers; they are mostly stateless and dynamic. The master servers are coupled with etcd servers to expose the Kubernetes control plane. They are responsible for exposing the API, making scheduling decisions, monitoring the health of the cluster, maintaining the desired configuration state, and many other critical tasks. By delegating the management and administration of the master servers to Microsoft, customers can focus on their workloads rather than on maintaining the Kubernetes cluster infrastructure.

Deploying the Cluster

Before we explore the architecture, let’s provision a single-node managed Kubernetes cluster in AKS. Running the following commands will result in the creation of a cluster. It is assumed that you have an active Microsoft Azure subscription.

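With the Azure CLI, a representative sequence looks like this (the resource group name, cluster name, and region are illustrative):

    # Register the resource provider and create a resource group to hold the cluster
    az provider register --namespace Microsoft.ContainerService
    az group create --name myResourceGroup --location eastus

    # Provision a single-node managed Kubernetes cluster
    az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --generate-ssh-keys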

After a few minutes, check the cluster availability with the following commands:

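Assuming the resource group and cluster names used above, the provisioning state can be polled with az aks show:

    # provisioningState moves from Creating to Succeeded when the cluster is ready
    az aks show --resource-group myResourceGroup --name myAKSCluster --output table
    az aks list --output table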

Now, we can download the kubectl client to access the cluster.

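The CLI can install kubectl and fetch the cluster credentials in two short steps:

    # Install kubectl and merge the cluster credentials into ~/.kube/config
    az aks install-cli
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

    # Verify that the single node is in the Ready state
    kubectl get nodes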

The output of kubectl get nodes confirms that the cluster is up and running.

Behind the Scenes of an AKS Cluster

Microsoft has done quite a few things to ensure that the CaaS offering is tightly integrated with Azure. First, it extended the Microsoft.ContainerService Azure Resource Manager (ARM) provider with a Microsoft.ContainerService/managedClusters resource type to support managed Kubernetes and to differentiate AKS from the legacy ACS engine. This new ARM resource type exposes properties to configure the Kubernetes version, the number of worker nodes, and the cluster admin profile. In the new deployment model, customers cannot define the number of master nodes. That decision is made behind the scenes by the AKS control plane.
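
The new resource type is visible through the Azure CLI itself; the JMESPath filter below is just one way to slice the provider's output:

    # List the resource types exposed by the Microsoft.ContainerService provider
    az provider show --namespace Microsoft.ContainerService --query "resourceTypes[].resourceType" --output table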

The only resource created by AKS in the customer’s resource group is an instance of the Microsoft.ContainerService/managedClusters ARM provider.

The etcd cluster, which acts as the single source of truth for the entire infrastructure, is backed by SSDs, along with automated backups and a high-availability configuration. The secure TLS configuration is backed by Azure KeyVault, and support for RBAC is provided by Azure Active Directory.

Apart from the resource group that holds the managed cluster resource, AKS also creates another resource group to provision related assets that belong to the cluster.

It all starts with Azure Compute creating a new Availability Set, which ensures that the homogeneous VMs launched together can be efficiently managed as a single unit. This Availability Set is placed inside a Virtual Network (VNet) that acts as the network boundary for the nodes. The VNet has a Route Table, a Virtual NIC, and a Network Security Group (NSG) associated with it. The NSG rules define the security policies for the VNet through fine-grained ingress and egress rules.

Since we launched a cluster with just one node, there is only one VM in the pool. It comes with a standard 30GB HDD. Each node in an AKS cluster runs Ubuntu 16.04 LTS.
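
This secondary resource group follows an MC_<resource group>_<cluster name>_<region> naming convention. Assuming the names used earlier, its contents can be listed with a single CLI call:

    # Show the VM, NIC, NSG, route table, availability set and VNet behind the cluster
    az resource list --resource-group MC_myResourceGroup_myAKSCluster_eastus --output table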

Once a cluster is deployed, customers can treat it like any other Kubernetes deployment. The Azure CLI has been updated with commands for AKS management.
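
For example, scaling the node pool or upgrading Kubernetes are single CLI operations (the target version below is illustrative):

    # Scale the node pool from one node to three
    az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3

    # List the available upgrades, then move to a newer Kubernetes version
    az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
    az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.8.2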

When a Kubernetes service is defined with the type LoadBalancer, AKS negotiates with the Azure networking stack to create a Layer 4 load balancer. A public IP address is assigned to the load balancer, through which the service is exposed.
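
As an illustration (the deployment name is arbitrary), a stock nginx image can be exposed this way; note that the kubectl of this era creates a Deployment from kubectl run:

    # Create an nginx deployment and expose it as a LoadBalancer service;
    # AKS provisions the Azure load balancer and public IP behind the scenes
    kubectl run nginx --image=nginx --port=80
    kubectl expose deployment nginx --type=LoadBalancer --port=80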

A simple kubectl get svc command shows that the service is of type LoadBalancer.

The public IP address of the Layer 4 load balancer created by AKS confirms the same.

The creation of additional services of type LoadBalancer results in separate public IP addresses pointing to the same Azure load balancer. On other cloud platforms, each new Kubernetes service is exposed through a dedicated Layer 4 load balancer. The model of pointing multiple public IP addresses at the same load balancer is more efficient and cost-effective for customers.

Inspecting the load balancer in the Azure Portal shows how Azure manages the mapping between Kubernetes services and the load balancer IPs.

Apart from tighter integration with other Azure building blocks, AKS makes it easy to bring persistence to Kubernetes workloads. Customers can create an Azure file share and mount it as a volume within a pod. This mechanism adds durability to stateful pods by moving the persistence layer to an external managed service.
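
As a sketch of that workflow (the storage account and share names are illustrative), the file share is created with the Azure CLI and its credentials are handed to the cluster as a Kubernetes secret, which an azureFile volume in the pod spec can then reference:

    # Create a storage account and file share in the cluster's resource group
    az storage account create --name mystorageaccount --resource-group MC_myResourceGroup_myAKSCluster_eastus --sku Standard_LRS
    az storage account keys list --resource-group MC_myResourceGroup_myAKSCluster_eastus --account-name mystorageaccount
    az storage share create --name aksshare --account-name mystorageaccount --account-key <storage-account-key>

    # Store the credentials as a secret for the azureFile volume plugin to consume
    kubectl create secret generic azure-secret \
      --from-literal=azurestorageaccountname=mystorageaccount \
      --from-literal=azurestorageaccountkey=<storage-account-key>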

Customers can integrate container monitoring by configuring the Azure Log Analytics service. The Operations Management Suite (OMS) agent is deployed as a DaemonSet in the cluster, which schedules an agent pod on each node to send logs to the analytics service. Once configured, this integration provides the right insights into the Kubernetes cluster and its containerized workloads.
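
Once the integration is configured, the agent’s footprint on the cluster is easy to verify; the omsagent name below follows Microsoft’s sample DaemonSet manifest:

    # One OMS agent pod should be running per node
    kubectl get daemonset omsagent
    kubectl get pods -o wide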

Microsoft is expected to make AKS generally available after adding support for Windows containers. Customers will be able to choose between Linux nodes and Windows nodes during the creation of a cluster. Eventually, AKS may support heterogeneous Kubernetes cluster configurations that can run both Linux and Windows nodes. This model would enable customers to mix and match Linux and Windows containers deployed as a single workload. The feature could become a key differentiating factor for Microsoft Azure.

Though still in public preview, AKS is an elegant platform to run containerized workloads. The Azure Compute team at Microsoft has ensured that there is the right level of integration with the rest of the Azure services.
