From the beginning of the container revolution two things have become clear: First, the decoupling of the layers in the technology stack is producing a clean, principled layering of concepts with clear contracts, ownership and responsibility. Second, the introduction of these layers has enabled developers to focus their attention exclusively on the thing that matters to them: the application.
In fairness, this has happened before: the first generation of Platforms as a Service (PaaS) was squarely aimed at enabling developers to adopt “serverless” architectures. The trouble was that, as with many first-wave products, too many overlapping concepts were mixed into a single monolithic product. In most first-generation PaaS offerings, the developer experience, serverless execution and a request-based pricing model were all fused into an inseparable monolith. Thus a user who wanted to adopt serverless, but not the prescribed developer experience (e.g. a specific programming language), or who wanted a more cost-efficient pricing model for large applications, was forced to give up serverless computing as well.
The development of containers changed all of that, decoupling the developer experience from serverless runtimes. It is no surprise, then, that the past year has seen the development of serverless container infrastructure. Last July, Azure released Azure Container Instances, the first serverless container offering in a major public cloud, though in fairness the folks at hyper.sh were already in the market. Seeing significant user interest in serverless infrastructure, other public clouds followed Azure’s lead, with Fargate announced six months later at re:Invent 2017, and I believe it’s only a matter of time before serverless container infrastructure is available in every public cloud.
As we move forward, it’s becoming increasingly clear (to me at least) that the future will be containerized and those containers will run on serverless infrastructure.
In this context, then, the obvious question is: “What becomes of orchestration in this serverless future?”
Kubernetes is a technology developed to provide a serverless experience for running containers. But the truth is that, at a low level, the Kubernetes architecture itself is deeply aware of individual machines, and components from the scheduler to the controller manager assume that the containers in Kubernetes live on machines that are visible to Kubernetes.
Serverless container infrastructure like Azure Container Instances is raw infrastructure. While it is a great way to easily run a few containers, building complicated systems requires an orchestrator to introduce higher-level concepts like Services, Deployments and Secrets.
For these serverless platforms, it might have been tempting to develop an entirely new orchestrator, but the truth is that the world is consolidating around the Kubernetes orchestration API, and the value of seamless integration with existing Kubernetes tooling is very attractive. Furthermore, for the foreseeable future, I anticipate that most people’s Kubernetes clusters will be a hybrid between dedicated machines and serverless container infrastructure. The dedicated machines will be used for steady-state services with relatively static usage, or specialized dedicated hardware like FPGAs or GPUs, while serverless containers will be used for bursty or transient workloads.
Virtual Kubelet Marries Kubernetes and Serverless Containers
The interesting question that the Kubernetes community faces is how to integrate serverless container infrastructure with higher level Kubernetes concepts. Recently, the development of the open source virtual kubelet project has taken a lead in advancing this discussion within both the Kubernetes node and scheduling special-interest groups (SIGs).
The virtual kubelet project at its core is an effort to bridge the gaps between serverless containers and the Kubernetes API. As you might be able to tell from its name, the virtual kubelet is an alternate implementation of the Kubernetes kubelet daemon which projects a virtual node into a Kubernetes cluster. This virtual node represents the serverless container infrastructure making the Kubernetes scheduler aware of the fact that it can schedule containers onto the serverless container APIs.
When the virtual kubelet starts up, it registers itself with the Kubernetes API server and immediately begins the heartbeat protocol with the API server, so that the virtual node it adds to Kubernetes appears healthy. Imagine a standard Kubernetes cluster with three actual nodes. When we start running the virtual kubelet as a container within this cluster, a fourth node is added to the cluster. This fourth node is the virtual node, representing the serverless container infrastructure. Of course, this node is a fairly special one, since it represents effectively infinite capacity for running containers on serverless infrastructure like Azure Container Instances.
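Conceptually, the virtual node the virtual kubelet registers looks much like any other Node object, aside from its advertised capacity and the taint discussed below. The sketch that follows is illustrative only: the node name, taint key and capacity figures are assumptions for the example, not the exact values the project creates.

```yaml
# Illustrative sketch of the Node object a virtual kubelet might register.
# Names and numbers here are assumptions, not the project's exact values.
apiVersion: v1
kind: Node
metadata:
  name: virtual-kubelet
  labels:
    type: virtual-kubelet
spec:
  taints:
  - key: virtual-kubelet.io/provider   # keeps arbitrary pods off this node
    effect: NoSchedule
status:
  capacity:
    cpu: "800"       # effectively "infinite" capacity advertised to the scheduler
    memory: 4000Gi
    pods: "800"
```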
Given the differences in pricing and characteristics between containers running on serverless infrastructure and containers running on machines in Kubernetes, the virtual kubelet requires users to opt in explicitly before their containers run on the new virtual node. To achieve this, the virtual kubelet uses the Kubernetes notion of taints and tolerations. When it is added, the virtual node is marked with a Kubernetes taint, which prevents arbitrary pods from being scheduled onto it. Only if a pod indicates that it is willing to tolerate this serverless taint is it considered for scheduling onto the virtual node.
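The opt-in looks like an ordinary toleration on the pod spec. A minimal sketch, assuming an illustrative taint key (check the taint your virtual kubelet deployment actually applies):

```yaml
# Illustrative pod that opts in to the serverless virtual node.
# The taint key below is an assumption for the example.
apiVersion: v1
kind: Pod
metadata:
  name: bursty-worker
spec:
  containers:
  - name: worker
    image: example.com/worker:latest   # hypothetical image
  tolerations:
  - key: virtual-kubelet.io/provider   # must match the virtual node's taint
    operator: Exists
    effect: NoSchedule
```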
Once a pod has been scheduled onto the serverless virtual node, the virtual kubelet notices this and goes about actually creating the container in the serverless infrastructure. Once the pod has been successfully created there, the virtual kubelet is also responsible for reporting health and status information back to the Kubernetes API server, so that all of the APIs and tooling work as expected.
This marriage of Kubernetes and serverless container infrastructure has a variety of real-world use cases for batch or bursty workloads. For example, a customer doing image processing can rapidly spin up a large number of containers to handle a recent upload of images to shared storage. Within seconds they can go from no infrastructure to hundreds of containers processing images, and when the processing is done, they immediately go back to paying nothing for the capacity. This is in stark contrast to a Kubernetes cluster running on top of virtual machines, where there is a constant cost to operate the machines, regardless of whether they are in use. At the same time, the actual orchestration of this image processing can be achieved using standard Kubernetes concepts, like the Job object, which can schedule all of these image-processing containers.
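The image-processing burst described above could be expressed as a standard Kubernetes Job that tolerates the virtual node’s taint. The sketch below is illustrative: the image name, counts and taint key are assumptions, not a prescribed configuration.

```yaml
# Sketch of a Job that fans image processing out onto the serverless
# virtual node. Image, counts and taint key are illustrative assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: image-resize
spec:
  parallelism: 100   # burst to 100 containers at once
  completions: 100   # one completion per batch of images
  template:
    spec:
      containers:
      - name: resize
        image: example.com/image-resizer:latest   # hypothetical image
      restartPolicy: Never
      tolerations:
      - key: virtual-kubelet.io/provider   # opt in to the virtual node
        operator: Exists
        effect: NoSchedule
```

When the Job completes, the serverless containers simply go away, and with them the cost of the capacity.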
Making Kubernetes Compatible with Serverless Containers
It’s been really exciting to see the virtual kubelet project grow and gain momentum across the cloud industry, with numerous different partners from startups to public clouds joining in and contributing to the vision of marrying serverless containers with Kubernetes.
Of course, it hasn’t all been smooth sailing. As we have explored this integration, it has become clear that there are significant challenges and open questions in what it means to align Kubernetes with serverless container infrastructure. While Kubernetes provides its users with a container-oriented API, the details of how this API is implemented clearly assume that there are machines underlying those containers. With serverless container infrastructure like Azure Container Instances, these machines no longer exist, and this causes some conflict with the existing Kubernetes machinery.
To give a simple example of one such issue: one of the first things we noticed when deploying the virtual kubelet was that Kubernetes Services with external load balancers stopped operating correctly in clusters where it was deployed. When we examined the situation, the cause became readily apparent. The Kubernetes controller manager, which is responsible for creating and maintaining cloud load balancers, was attempting to register the virtual node with the cloud load balancer. However, this node didn’t really exist and couldn’t be added to the load balancer, which produced errors that prevented the load balancer from being created at all. In Kubernetes 1.9 we added the ability to explicitly exclude a node from load balancers created by Kubernetes, and thus we were able to resolve this particular issue, but its general flavor gives you an idea of how wedded some pieces of Kubernetes are to the idea that there are actual machines underlying every container.
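In the 1.9 time frame, that exclusion mechanism took the shape of an alpha node label (behind a feature gate) that the service controller consults before adding a node to a cloud load balancer. A hedged sketch of marking the virtual node excluded; verify the label name against the Kubernetes version you actually run:

```yaml
# Sketch: labeling the virtual node so the service controller skips it
# when populating cloud load balancers. The label key is the alpha name
# from around Kubernetes 1.9 (ServiceNodeExclusion feature gate) and may
# differ in later releases.
apiVersion: v1
kind: Node
metadata:
  name: virtual-kubelet
  labels:
    alpha.service-controller.kubernetes.io/exclude-balancer: "true"
```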
A more significant issue comes up when you consider the Kubernetes scheduler. The scheduler is built on the assumption that each individual node is a failure domain, and that spreading containers across multiple nodes is therefore a good thing. When each node is a physical or virtual machine this makes sense, since any node can fail or panic, destroying all of the containers on it. With the virtual kubelet, however, this is no longer true. The serverless container infrastructure is itself fault tolerant and built on top of many different machines. Thus, while it may not be safe to schedule multiple containers from the same application onto a single physical or virtual machine, it is quite safe to schedule them all onto the single serverless virtual node. With serverless containers and the virtual kubelet, the node is no longer a unit of failure. This dissonance, and many similar scheduling issues, are still very open questions that need to be resolved.
The Future of Kubernetes and Serverless
Kubernetes was built to give developers a clean, application-oriented API that enabled them to forget about the details of machines and machine management. But the truth is that under that API surface, the machines were still there. The development of serverless container infrastructure enables people to begin to forget about the machines entirely, but the successful use of serverless containers for larger scale applications requires the development of an orchestrator. Consequently, the integration of the Kubernetes orchestration layer and serverless container infrastructure is crucial to the future success of both Kubernetes and the serverless infrastructure.
As we move into the future, I’m fully convinced that future Kubernetes clusters will contain a mix of containers running on dedicated machines as well as bursting into serverless infrastructure. But while the future destination is clear in my mind, the path and the details of how we get there are still to be determined. I’m really excited to be having this discussion out in the open with the Kubernetes community. If you’re interested in participating, please join us on the virtual kubelet GitHub project, or within the mailing lists or meetings for SIG-Node and SIG-Scheduling. I’m really excited to build this new generation of container orchestration together. Here’s to our containerized, and serverless, future!
Brendan Burns and thousands of other attendees will be discussing a variety of Kubernetes, Serverless and other open source cloud-native topics while attending KubeCon + CloudNativeCon EU, May 2-4, 2018 in Copenhagen, Denmark.
This post was contributed by Microsoft on behalf of KubeCon + CloudNativeCon Europe, a sponsor of The New Stack.
Feature image via Pixabay.