
Running Kubernetes in Kubernetes

Mar 15th, 2017 9:49am by Sebastian Scheele
Feature image via Pixabay.

Sebastian Scheele
Sebastian is CEO and a co-founder of Loodse, a software company that has developed Kubermatic, a container engine for multiple Kubernetes clusters. Prior to Loodse, Sebastian worked in the enterprise sector for more than ten years. He started his career at software producer SAP, where he was involved in the development of SAP HANA, and worked as a consultant for major international customers.

There definitely is some magic around the open-source Kubernetes container orchestration engine. For quite a few years, containers had been a cool concept, but broad deployment proved difficult. Manually managing inter-container networking, persistent storage, and auto-scaling for hundreds of containers was simply not feasible, and a really good container platform was not in sight.

This changed when Google released the Kubernetes project to the open source community in 2014. Suddenly, networking, storage, auto-scaling, alerting, and many other standard infrastructure features for containers could be managed automatically within Kubernetes clusters, with very little downtime and great performance. Running containers at scale and utilizing distributed infrastructures became a viable option for many companies.

Around the same time, the idea of cloud native applications brought up the pets vs. cattle discussion, in which you start to treat every component of your infrastructure as a disposable member of a herd rather than as an irreplaceable pet. According to this way of thinking, every component must be able to fail without impact: servers, racks, data centers… everything. Ironically, however, many companies now treat their Kubernetes cluster like a pet and spend considerable time and resources on its care and well-being.

To us, this seemed very strange, since it contradicts the basic concept of cloud native applications. Our mission was therefore clear: We wanted Kubernetes clusters to become low-maintenance cattle: fully managed, scalable, multitenant, and disposable at any time. We also wanted a single API for all our clusters.

If Kubernetes is the perfect tool for running automated container clusters, why not use Kubernetes to self-host multiple Kubernetes clusters? Why not set up Kubernetes-as-a-Service to shift the focus from running Kubernetes back to running your services? That is exactly what we did, and we still think it is a very good idea.

But how does running Kubernetes in Kubernetes work?

The first thing to do is to set up an outer Kubernetes cluster that runs the master components of multiple separate customer clusters. Like any other Kubernetes cluster, this master cluster consists of four master components: the API server, the etcd key-value store, the scheduler, and the controller manager. To prevent downtime, we create a high-availability setup with several instances of every component.

Graph: High availability setup of the master cluster.
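
To make "several instances of every component" concrete, here is a minimal sketch using the official Python kubernetes client: a stateless control-plane component (an API-server-like container) run as a Deployment with multiple replicas, so losing a single pod causes no downtime. The image tag, labels, and replica count are illustrative assumptions, not the exact manifests described above.

```python
# Illustrative sketch only: run a control-plane component with several
# replicas in the outer ("master") cluster. Names and image are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="kube-apiserver"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # several instances of the same component for HA
        selector=client.V1LabelSelector(match_labels={"app": "kube-apiserver"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "kube-apiserver"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="kube-apiserver",
                    image="k8s.gcr.io/kube-apiserver:v1.5.3",  # hypothetical tag
                )
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="kube-system", body=deployment
)
```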

Then, to start the inner clusters, we create a namespace, generate certificates, tokens, and SSH keys, and deploy the master components. Subsequently, we add an ingress to make the API server and etcd accessible from the outside. Finally, we install basic add-ons like Heapster, kube-proxy, kube-dns, and the dashboard.
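
The sketch below walks through the first two of those provisioning steps with the Python client. The cluster name, secret keys, and placeholder credential values are hypothetical stand-ins for the real certificate, token, and SSH key generation.

```python
# Illustrative provisioning flow for one inner ("customer") cluster:
# a namespace to isolate it and a Secret for its generated credentials.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

cluster_id = "customer-3"  # hypothetical cluster name

# 1. One namespace per inner cluster keeps its master components isolated.
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=cluster_id))
)

# 2. Generated certificates, tokens, and SSH keys live in a Secret mounted
#    by the inner cluster's master components. Values here are dummies.
core.create_namespaced_secret(
    namespace=cluster_id,
    body=client.V1Secret(
        metadata=client.V1ObjectMeta(name="cluster-credentials"),
        string_data={
            "apiserver.crt": "<generated certificate>",
            "apiserver.key": "<generated key>",
            "admin-token": "<generated token>",
        },
    ),
)

# 3. The master components (API server, etcd, scheduler, controller manager)
#    are then deployed into this namespace as ordinary workloads, and an
#    ingress exposes the API server and etcd to the outside.
```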

After that, we want to know what is going on in the inner clusters, because this is where the nodes and pods are actually running. Unfortunately, there is no way to do that with the standard Kubernetes setup, where you only communicate with the master cluster's API. To complicate things further, we want to set up N clusters with N API servers and N etcd instances that all share the master cluster's IP. As a consequence, we need to provide a service with an HTTP and a TCP proxy that can forward an encrypted request destined for customer API server 3 to customer API server 3 via an SSH tunnel.

Graph: Networking solution.
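
Conceptually, the proxy holds a routing table from each customer cluster's public endpoint to the in-cluster address of that customer's API server and shuttles the encrypted bytes back and forth. The asyncio sketch below illustrates only that forwarding idea; the backend address is hypothetical, a real proxy would pick the backend per connection (e.g., by hostname/SNI), and the setup described above additionally carries the traffic over an SSH tunnel.

```python
# Conceptual sketch, not the production proxy: forward raw (already
# encrypted) TCP traffic to one inner API server. A single hypothetical
# backend stands in for "customer API server 3".
import asyncio

BACKEND = ("apiserver.customer-3.svc.cluster.local", 443)  # hypothetical

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    # Copy bytes from one side of the connection to the other until EOF.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    backend_reader, backend_writer = await asyncio.open_connection(*BACKEND)
    # Shuttle bytes in both directions until either side closes.
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer),
    )

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 6443)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```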

Summary

We showed that you can provide multiple self-hosted, fully managed inner clusters by running Kubernetes in Kubernetes. This allows you to easily create, update, delete, and reschedule clusters without impacting functionality. Moreover, this architecture ensures stronger isolation and security, and it considerably simplifies multitenancy.

Although the concept is still in its early days, we believe that Kubernetes-in-Kubernetes is an important step in escaping the pet paradigm once again. It empowers companies to run multiple clusters at scale and to pursue a truly cloud-native strategy in which you focus solely on your applications, not your infrastructure.

For more on Kubernetes networking and related topics, come to CloudNativeCon + KubeCon Europe 2017 in Berlin, Germany, March 29-30.
