German company Loodse advocates using Kubernetes itself to self-host multiple Kubernetes clusters. The Hamburg-based company, launched in 2016, takes its name from a Dutch word meaning to pilot, as in a ship — in its case a container ship.
Loodse unveiled a beta of its automated orchestration system Kubermatic in February. Co-founder Sebastian Scheele outlined its technology in a TNS post, “Kubernetes in Kubernetes,” shortly thereafter.
Kubermatic is an automated master-as-a-service solution that enables you to easily set up managed Kubernetes clusters. It provides fully managed master components and nodes along with horizontal scaling for nodes and workloads.
Kubermatic is directly integrated with DigitalOcean and Amazon Web Services, as well as any provider offering Ubuntu 16.04 or later. The company is working on support for additional cloud providers as well.
The company is not aiming to provide a managed Kubernetes service along the lines of StackPoint Cloud, TeleKube, Platform9 or Stratoscale. Instead, it wants to help enterprises build their own container engine on their own infrastructure, whether that’s with cloud providers or on-premises, according to Scheele.
“We wanted to create something similar to Google Container Engine, but everywhere and [to enable customers to] run the container cluster easily, and with multiple clusters,” he said.
Cluster federation — the pooling together of multiple clusters — was among the topics Google’s Kelsey Hightower addressed recently at KubeCon Europe 2017 in Berlin, an event where Scheele and senior infrastructure architect Jason Murray also made presentations.
“We want to enable customers to run their clusters as cattle,” Scheele said in an interview, referring to the oft-used pets vs. cattle analogy. “It should be easy to create a new cluster, to throw away an existing cluster, to have data persisted, to mount storage … our goal for the application developer is to make infrastructure invisible.”
To create a cluster, you simply press a button and define a name. You can also pick a location, whether a public cloud or a data center, where the master should run. In the background, Kubermatic spins up the Kubernetes master components: the API server, the etcd key-value store, the scheduler, and the controller manager. It then creates the inner clusters managed by that master.
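The flow above can be sketched in a few lines. This is an illustrative model only, assuming a hypothetical `ClusterProvisioner` API; it is not Kubermatic's actual interface, but it shows the order of operations: master components first, then the node cluster they manage.

```python
# Hypothetical sketch of the provisioning flow described above: given a
# cluster name and a master location, record the four master components
# to launch, then register the inner (node) cluster they will manage.
# ClusterProvisioner and its method names are illustrative assumptions.

MASTER_COMPONENTS = (
    "kube-apiserver",
    "etcd",
    "kube-scheduler",
    "kube-controller-manager",
)

class ClusterProvisioner:
    def __init__(self):
        self.clusters = {}

    def create_cluster(self, name, master_location, node_provider):
        # Spin up the master components in the chosen location first...
        masters = [
            {"component": c, "location": master_location}
            for c in MASTER_COMPONENTS
        ]
        # ...then create the inner cluster those masters will manage.
        self.clusters[name] = {
            "masters": masters,
            "node_provider": node_provider,
            "nodes": [],
        }
        return self.clusters[name]

provisioner = ClusterProvisioner()
cluster = provisioner.create_cluster(
    "demo", master_location="eu-central", node_provider="digitalocean"
)
print(len(cluster["masters"]))  # 4 master components
```

Note that the master location and the node provider are independent parameters, which mirrors the Loodse model described later in the piece.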
You can have as many clusters running as you want at any point in time, Murray said in an online meetup. You can add and remove nodes dynamically as needed, and the company is also working on features for automatic node scaling.
Kubermatic runs two of each of the master components for high availability and runs health checks on them in the background.
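That high-availability scheme, two replicas per component with background health checks, can be sketched as follows. The `Replica` class and the boolean `healthy` flag are illustrative stand-ins for real Kubernetes liveness probes, not Kubermatic internals.

```python
# Minimal sketch of the HA scheme described above: two replicas of each
# master component, with a background check that replaces any replica
# reporting unhealthy. All names here are illustrative assumptions.

REPLICAS_PER_COMPONENT = 2

class Replica:
    def __init__(self, component):
        self.component = component
        self.healthy = True  # stand-in for a real liveness probe result

def health_check(replicas):
    """Replace any unhealthy replica with a fresh one of the same component."""
    return [r if r.healthy else Replica(r.component) for r in replicas]

# Two replicas each of (a subset of) the master components.
replicas = [
    Replica(c)
    for c in ("kube-apiserver", "etcd")
    for _ in range(REPLICAS_PER_COMPONENT)
]

replicas[0].healthy = False      # simulate a failed API server replica
replicas = health_check(replicas)
assert all(r.healthy for r in replicas)
```

Because the replacement preserves the component type, the replica count per component stays constant across failures.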
The biggest issues are with networking, load balancing and storage, Murray said in the meetup.
“We wanted to create a very consistent plane when you deal with a Kubermatic cluster, regardless of which provider you’re running your nodes on, you have the same expectation every single time and you can deploy your cluster in exactly the same way, every single time across all these providers,” he said. So they had to abstract a lot of things away to make it independent of the host and the provider.
Running seed clusters across different environments — Google Container Engine (GKE), AWS, bare metal — each presents unique challenges when it comes to data, he said.
Kubernetes’ recently released storage class, however, makes it much easier to dynamically provision volumes as you’re creating and destroying clusters, rather than having to code for that, he said.
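The StorageClass mechanism Murray refers to lets a PersistentVolumeClaim request storage that the cluster provisions on demand. The sketch below expresses the two standard Kubernetes manifests as Python dicts (equivalent to the usual YAML); the `fast` class name and the `gp2` volume type are examples chosen here, not Kubermatic defaults.

```python
# A StorageClass names a provisioner; a PersistentVolumeClaim that
# references the class by name gets its volume created dynamically,
# rather than from a pre-provisioned pool. Manifests shown as dicts.

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "fast"},
    "provisioner": "kubernetes.io/aws-ebs",  # in-tree AWS EBS provisioner
    "parameters": {"type": "gp2"},
}

claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data"},
    "spec": {
        "storageClassName": "fast",  # binds this claim to the class above
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# The claim references the class by name, so the volume is provisioned
# when the claim is made, which suits short-lived clusters.
assert claim["spec"]["storageClassName"] == storage_class["metadata"]["name"]
```

Swapping the provisioner (for example, to DigitalOcean block storage) changes only the StorageClass, which is what makes dynamic provisioning convenient when clusters are created and destroyed across providers.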
Another problem was the Kubernetes assumption that if you run the master on AWS, for instance, the nodes will be run there, too. In the Loodse model, the master can be in one cloud, but the nodes can run anywhere.
The company made some changes that it also sent to the upstream project to work around those basic assumptions, Murray said.
Loodse was among the new members of the Cloud Native Computing Foundation announced in February.
The Cloud Native Computing Foundation is a sponsor of The New Stack.