
Kubernetes in the House: Enterprise Options for K8s on-Prem

A review of Kubernetes distributions for running within the corporate firewall.
Mar 13th, 2020 9:14am by Twain Taylor
Feature image by Ravi Shahi from Pixabay.
This post is the first installment of a two-part series on using Kubernetes in-house. Check back next week for the second installment.

Twain Taylor
Twain Taylor began his career at Google, where among other things he provided technical support for the AdWords team. Later, he built branded social media applications and automation scripts that help startups better manage their marketing operations. Today, he sheds light on how DevOps teams can change the way they build and ship applications. When away from his computer, he's playing bass guitar or looking for an excuse to get away on his motorcycle.

When you consider deploying Kubernetes on-premises, the first thought that generally comes to mind is: “Kubernetes on AWS is hard enough, why would you want to deploy on-premises?” Well, there are actually quite a few reasons why organizations look to deploy Kubernetes in-house. The most common is compliance: organizations dealing with sensitive information like patient records or credit card data often face government restrictions that prevent them from using public clouds.

Other organizations want the essential benefits of cloud computing, like scalability, on-demand services, agility and elasticity, without compromising on privacy or depending on a cloud vendor for these services. This approach also helps with future cloud compatibility, since it significantly eases a migration to the cloud should the organization wish to go that route later. Additionally, this setup suits organizations pursuing a hybrid strategy that keeps some data on-site.

Cloud Complexities

To successfully deploy a fully functional Kubernetes cluster and achieve a cloud-like experience on-premises, we need to be able to deal with all the complexities that managed services go to great lengths to hide from us. These include, but are not limited to, deployment automation, software-defined networking (SDN) management, storage management, load balancing, security, and authentication. With on-premises deployments, it’s also important to remember that you are responsible for “everything” and there’s no cloud elasticity to fall back on.
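To make that concrete, here is a minimal kubeadm configuration sketch. The hostname, version, and subnets are placeholder values; note how much is left to you even after this file exists, since kubeadm only bootstraps the control plane:

```yaml
# Sketch of a kubeadm ClusterConfiguration for an on-prem cluster.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.4
# You must stand up and operate this API load balancer yourself
# (e.g. HAProxy plus keepalived); no cloud LB will appear for you.
controlPlaneEndpoint: "k8s-api.corp.internal:6443"
networking:
  # podSubnet must match the CNI plugin (Calico, Flannel, etc.)
  # that you install and manage separately.
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
```

Running `kubeadm init --config` against such a file gets you a bare control plane; the SDN, persistent storage classes, ingress load balancing, and authentication integration called out above are all separate pieces you layer on top.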

The challenge with Kubernetes is that it is difficult to set up and configure to begin with, and only grows more complex as you progress. This is probably why, with the exception of global IT giants like Google, Netflix, and Facebook, or organizations with very large IT departments, most organizations prefer to seek out “enterprise-ready” Kubernetes solutions: building and maintaining an internal cloud native architecture is a lot of work and requires a skilled, dedicated team.

K8s in the House

Though the Kubernetes landscape is pretty vast, the available solutions for running Kubernetes on-prem fall more or less into four basic categories. The first is PaaS solutions that have been modified or rewritten to work with Kubernetes, and hence take a rather opinionated approach to managing the Kubernetes layer, usually with a predetermined application lifecycle management toolkit. The second is Kubernetes distributions that can be deployed on-premises; these focus on the Kubernetes layer itself rather than the application lifecycle.

The third is a class of tools that aren’t exactly PaaS and are better described as Kubernetes “aggregators,” since they extend Kubernetes by providing various services on top of it. These services range from more complex operations like management and monitoring to simpler functions like building an observability layer on top of your cluster. Finally, there are cloud-hosted Kubernetes solutions that are now available on-premises, which we will take a closer look at.

The PaaS Approach

While the advantage of this approach is that many people are already familiar with these platforms, which helps them sidestep the steep learning curve associated with Kubernetes, the downside is that the platforms are not Kubernetes-native. Most of them predate Kubernetes’ rise to dominance and have since had to be reworked to integrate with it.

Red Hat OpenShift is quite a popular choice and a good example of a PaaS reworked to integrate with Kubernetes. Originally built around a Heroku-style model for deploying containers, it was reworked to accommodate Docker, and then Kubernetes; Red Hat now calls it a hybrid cloud, enterprise Kubernetes platform. For hybrid cloud deployments in particular, OpenShift is especially tempting, since it offers multiple options for networking, load balancing, and service mesh, including support for Istio.

VMware Enterprise PKS is the next big contender in this category, and it goes without saying that it is probably your first choice if you’re already on VMware. PKS is built atop Kubo, Cloud Foundry’s container runtime, and essentially runs Kubernetes on BOSH. Unlike OpenShift, which takes a more balanced approach between the cloud and the on-prem data center, PKS is geared more toward the latter. In addition to HA data center hosting and on-premises virtualization, PKS offers a number of tools and integrations between its public cloud offerings and its on-premises hybrid platform.

The ‘Kube-Native’ Approach

Native Kubernetes distributions are probably the closest you can get to a vanilla Kubernetes implementation on-premises, which is why this approach is highly recommended. Not only do these platforms allow you to deploy Kubernetes across multiple environments, they also offer a singular, “cloud-like” control plane to manage your clusters. The added advantage is that none of these platforms has had to integrate or “merge” with Kubernetes, so they give up nothing when it comes to implementing Kubernetes on-premises.

Rancher 2.0 is one of those options that fits into a couple of categories: it was originally a container orchestration and management tool and has since been completely refactored around Kubernetes. While it is a good example of a dev-centric Kubernetes platform, it also has a significant focus on application management. Unlike OpenShift, which has a paid version, and PKS, which is really more of a proprietary product, Rancher is 100% open source with a completely vendor-agnostic approach. Rancher also ships its own operating system, RancherOS, which runs all system services inside containers.
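For a sense of what a Rancher-style on-prem deployment looks like, below is a hypothetical `cluster.yml` for RKE (Rancher Kubernetes Engine), the CNCF-certified distribution underneath Rancher 2.0. The node addresses and SSH user are placeholders:

```yaml
# Hypothetical RKE cluster.yml: three on-prem nodes, provisioned over SSH.
nodes:
  - address: 10.10.0.11
    user: rancher            # SSH user with Docker access on each node
    role: [controlplane, etcd]
  - address: 10.10.0.12
    user: rancher
    role: [worker]
  - address: 10.10.0.13
    user: rancher
    role: [worker]
network:
  plugin: canal              # RKE's default CNI plugin
```

Running `rke up` against this file stands up the cluster, after which it can be managed from the Rancher control plane alongside clusters in other environments.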

Kublr is another interesting Kubernetes distribution that focuses solely on the Kubernetes layer. Kublr is certified by the Cloud Native Computing Foundation (which manages the Kubernetes code base), and in addition to supporting virtually any environment, it comes with a host of enterprise features like centralized multicluster management, centralized logging and monitoring, and even self-healing nodes. Additionally, Kublr supports air-gapped deployments, RBAC integration, and IdM/AAA systems as standard features. It also ships with infrastructure provisioning and management capabilities that work across environments and in conjunction with cloud provider tools like Azure Resource Manager (ARM) templates or AWS CloudFormation.

Canonical’s aim is to be to Kubernetes what its Ubuntu distribution is to Linux, and it seems to be on the right path. If you’re looking for no-frills, no-nonsense, vanilla upstream Kubernetes that’s tried and tested across clouds, on-premises data centers, bare metal, and VMs, the Canonical Distribution of Kubernetes (CDK) is a pretty compelling option. Even more so since it’s deployed on Ubuntu Linux, which a lot of people are familiar with. While CDK supports any cloud or on-premises deployment, it’s also available in a miniature version called MicroK8s, which can be installed on a notebook.
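On an Ubuntu notebook, getting a single-node cluster running with MicroK8s looks roughly like this (a sketch; add-on names and flags have varied between releases):

```
# Install the MicroK8s snap and wait for the node to come up.
sudo snap install microk8s --classic
sudo microk8s status --wait-ready

# Enable a couple of common add-ons: cluster DNS and a hostpath storage class.
sudo microk8s enable dns storage

# MicroK8s bundles its own kubectl.
sudo microk8s kubectl get nodes
```

That low barrier to entry makes MicroK8s handy for local development against the same upstream Kubernetes that CDK runs in the data center.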

In conclusion, while many organizations no doubt see safety in adopting the PaaS approach to deploying Kubernetes on-premises, it involves significantly more moving parts than a native Kubernetes approach. More moving parts increase not only the security risk but also the general complexity of the environment. A pure Kubernetes approach is also more flexible, allowing teams to use the tools of their choice while maintaining better compatibility with future advancements.

In the next post, we’re going to look at some best practices for deploying Kubernetes on-premises, as well as some of the latest cloud-hosted solutions that are now available on-premises.

The author of this post has done consulting work with Rancher and Kublr.

TNS owner Insight Partners is an investor in: Docker.