In the first part of this series, we looked at a few common reasons why organizations deploy Kubernetes on-premises, along with some popular platforms that facilitate such deployments. In this post, we're going to look at a few best practices for deploying Kubernetes on-premises, as well as the arrival of on-premises offerings from the public clouds, like Google Kubernetes Engine On-Prem (GKE On-Prem).
In addition to the major factors mentioned previously that drive organizations to consider deploying K8s on-prem, namely compliance, cloud capabilities, and future compatibility, a couple more are worth noting: organizations that want to use Kubernetes but don't want to pay the considerable cost of hosting it on a public cloud, and organizations deploying hybrid solutions.
Picking the Right Platform
Regardless of your reasons, make no mistake: deploying K8s on-prem is an "all hands on deck" undertaking in terms of management, and the first step is selecting the right "deck" for your deployment. The ability to deploy across multiple environments from a single control plane is a key capability to look for in a Kubernetes platform. Managing a few clusters through a few different control planes might seem easy at first, but it quickly becomes unsustainable as you scale up.
Number two on your checklist is the ability not only to manage and provision infrastructure, but also to integrate well with other on-premises components like networking, storage, monitoring, and load balancers. Remember, there's no public cloud here, so your applications depend entirely on your infrastructure and how well you manage it. Automating this layer is highly recommended, as it makes for quicker, more reliable deployments and enables self-service. The good news is that most on-prem infrastructure solutions now provide a level of automation comparable to their public cloud counterparts.
Other important factors to consider include operational simplicity, quality of vendor support, involvement in and support for open source, degree of support for stateful applications, scalability, stability, and licensing costs, if any.
DevSecOps from the Start
Rather than setting up storage, networking, and monitoring first and coming back to security later, best practice dictates building security in from the get-go. As soon as you've picked your platform of choice, step two is to start thinking about security and governance. Integrating an image-scanning process that checks applications, especially open-source components, libraries, and frameworks, during both the build and run phases is highly recommended.
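As a rough illustration of build-phase scanning, here is a sketch of a CI stage (shown here for GitLab CI with the open-source Trivy scanner; the stage name, image reference, and severity threshold are assumptions for the example, not prescriptions from this article):

```yaml
# Hypothetical GitLab CI stage that scans a freshly built image with Trivy.
# The image reference uses GitLab's built-in CI variables as placeholders.
scan-image:
  stage: test
  image:
    name: aquasec/trivy:latest
    entrypoint: [""]
  script:
    # Fail the pipeline if HIGH or CRITICAL vulnerabilities are found in
    # OS packages or application dependencies inside the image.
    - trivy image --exit-code 1 --severity HIGH,CRITICAL "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

A similar gate can be wired into whichever CI system you run on-premises; the point is that images are scanned before they ever reach a cluster.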
Using older, more vulnerable versions of software is one of the leading causes of concern with regard to container security. Implementing version control is a great way around this obstacle, and though many of the solutions out there are cloud-hosted, there are a few on-premises options as well, including a couple that are open source. Using the Center for Internet Security (CIS) benchmarks for Kubernetes is another best practice that helps establish secure configuration baselines. Additionally, sensitive data like TLS keys and database credentials should be encrypted and stored centrally, using Kubernetes Secrets or a third-party secrets-management service like HashiCorp Vault.
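For reference, a minimal Kubernetes Secret manifest looks like the following (the name and values here are placeholders; `stringData` lets you supply plain text, which the API server base64-encodes on your behalf):

```yaml
# Minimal sketch of a Kubernetes Secret holding a database credential.
# Do not commit real credentials like this to version control.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: app_user   # placeholder value
  password: changeme   # placeholder value
```

With a tool like Vault, you would instead inject such values at runtime rather than storing them in manifests at all.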
Storage to Suit
Best practice with regard to storage for Kubernetes on-premises involves, first and foremost, choosing a storage solution that supports on-premises deployments and is compatible with a microservices architecture. It's also wise to avoid proprietary solutions and instead lean toward vendors closely integrated with Kubernetes. Additionally, it's critical to select a storage solution that is hardware-agnostic yet meets all the requirements of on-premises container storage and supports standard interfaces like the Container Storage Interface (CSI).
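In practice, a CSI-backed storage solution surfaces in the cluster as a StorageClass. The sketch below uses a made-up provisioner name and a vendor-specific parameter purely for illustration; the actual values come from whichever storage vendor you pick:

```yaml
# Hypothetical StorageClass backed by a CSI driver; the provisioner name
# and parameters depend entirely on the storage vendor.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-onprem
provisioner: csi.example-vendor.com   # placeholder CSI driver name
parameters:
  replication: "3"                    # assumed vendor-specific parameter
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

`WaitForFirstConsumer` delays volume provisioning until a pod is scheduled, which helps keep volumes topology-aware in multi-rack on-prem clusters.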
For high availability on-premises, best practice dictates running redundant instances of all major control plane components, such as etcd, the scheduler, and the API server. For etcd in particular, it's advisable to maintain a dedicated cluster of at least five nodes to ensure both high availability and recoverability. Lastly, ensure your storage solution has a distributed architecture and aligns well with container-native services.
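To make the dedicated-etcd pattern concrete, here is a sketch of a kubeadm ClusterConfiguration pointing the control plane at an external five-member etcd cluster (all hostnames, the VIP, and certificate paths are placeholders):

```yaml
# Sketch of a kubeadm ClusterConfiguration using a dedicated external
# etcd cluster; five members tolerate the loss of two nodes.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "k8s-api.internal.example.com:6443"  # load-balanced API server address
etcd:
  external:
    endpoints:
      - https://etcd-1.internal.example.com:2379
      - https://etcd-2.internal.example.com:2379
      - https://etcd-3.internal.example.com:2379
      - https://etcd-4.internal.example.com:2379
      - https://etcd-5.internal.example.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```

Keeping etcd on its own nodes isolates the cluster's state store from API server load, which matters when there's no cloud provider absorbing spikes for you.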
Networking and Monitoring on-Premises
Deploying Kubernetes on-premises demands agility and portability, and traditional networking just isn't going to cut it. As you scale up, you can't have IT manually creating networks for each and every environment. Additionally, as with everything else, it's important to ensure your networking solution integrates well with Kubernetes and supports Container Network Interface (CNI) plugins and overlay networks like Flannel. Deploying a service mesh like Istio, which runs on-premises as well as in the cloud, is critical to managing service-to-service communication once you start scaling.
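To show how little configuration an overlay network actually needs, here is the network-definition fragment of Flannel's ConfigMap (the pod CIDR shown is Flannel's common default; it must match the cluster CIDR you configured at install time):

```yaml
# Fragment of the kube-flannel ConfigMap defining the overlay network.
# "Network" must agree with the cluster's pod CIDR.
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }
```

Because the overlay is declared once and applied cluster-wide, no one has to hand-provision per-environment networks as new nodes join.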
The traditional method of monitoring metrics at the host level needs to be replaced with modern, container-native solutions that offer visibility into containers at the application level. Best practice is to favor tools that integrate natively with Kubernetes, feature automated service discovery, and provide real-time recommendations, ideally with the help of machine learning or AI-driven analytics.
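Automated service discovery is exactly what Prometheus's native Kubernetes integration provides. The sketch below discovers pods via the Kubernetes API and scrapes only those that opt in through a `prometheus.io/scrape` annotation (a widespread convention, not a Prometheus default):

```yaml
# Sketch of a Prometheus scrape job using built-in Kubernetes service
# discovery; no static target lists to maintain as pods come and go.
scrape_configs:
  - job_name: "kubernetes-pods"
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods annotated with prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

The same discovery mechanism works identically on-premises and in the cloud, which is precisely the portability this section argues for.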
Public Clouds on-Premises
Hybrid clouds and on-premises deployments are generating quite a lot of buzz around the enterprise of late, and it's pretty obvious the public cloud giants won't be left out. This is why you can now use AWS Outposts to run AWS services on-premises, including Amazon EKS, AWS's managed Kubernetes service; this setup is recommended for applications that require extremely low latency to on-prem systems. Similarly, Anthos from Google lets you run GKE on-premises, while Azure Stack lets you use the Azure Kubernetes Service (AKS). While these are good options if you have deep pockets and don't want to get your hands dirty with Kubernetes, vendor lock-in is the obvious downside.
In conclusion, infrastructure abstraction is the bottom line and the big reason organizations look to deploy Kubernetes on-prem. While PaaS and public cloud solutions offer a lot of perceived safety and familiarity, it takes a Kubernetes-native solution to harness the full capabilities of Kubernetes on-premises, whether that's Kublr, with its focus on Day 2 operations, or Canonical, which delivers upstream Kubernetes across platforms and environments.
The author of this post has done consulting work with Rancher and Kublr.
Amazon Web Services is a sponsor of The New Stack.