5 Things to Consider When Building a Kubernetes Platform
Modern application development teams require fully managed, self-service platforms. The recent shift toward Kubernetes has seen many teams scramble to build platforms on top of the orchestration tool. Historically, the need for internal development platforms was met by large engineering teams building bespoke tools to satisfy very specific technical, process and people needs; Devpod from Uber and Sunrise from Zalando are examples. This approach is worth emulating, but it may be out of reach for many engineering teams: building a platform comes at a heavy cost and is not a pragmatic expectation for every software team.
A chief technology officer’s responsibility is to balance technical considerations with business goals. Your responsibility to deliver competitive advantage for your product is rooted in engineering work, but it is usually judged as a function of business outcomes. A paved path to production, however valuable, does not always translate into immediately apparent business benefits.
What follows is a list of priorities for the CTO office, compiled from discussions with several experienced technical leaders as well as people new to the role. Some assumptions: containers are the de facto application delivery mechanism; Kubernetes is under serious consideration or already in use; and cloud native technology makes up most of the pipeline.
- Fluid app delivery
An application platform must provide development teams with high velocity. High velocity derives from two factors: fast application delivery and short development cycles. Short development cycles come from reducing developers’ cognitive load and from native support for modern practices such as CI/CD. Application platforms must support build workflows that begin with source code, and they must be able to deploy applications repeatably to any remote staging instance. Bonus points if the platform can incorporate a team’s existing workflows, such as triggering tests and updating remote container registries.
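As a sketch, a source-to-deployment workflow of this kind could be wired together with the Cloud Native Buildpacks `pack` CLI and `kubectl`; the image name, registry and manifest path below are hypothetical:

```shell
# Build an OCI image directly from source code (no Dockerfile needed).
pack build registry.example.com/team/myapp:1.0.0 --path .

# Update the remote container registry (hypothetical registry URL).
docker push registry.example.com/team/myapp:1.0.0

# Deploy repeatably to any staging cluster from a declarative manifest.
kubectl apply -f deploy/myapp.yaml

# Gate the pipeline on a healthy rollout before triggering further tests.
kubectl rollout status deployment/myapp --timeout=120s
```

The same manifest can be applied to any remote staging instance, which is what makes the deployment repeatable.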
- Polyglot support
Consistency is the hallmark of an application platform. The platform must support repeatable, reproducible builds on demand. What elevates the platform experience is extending that homogeneous experience across all languages and frameworks. Preserving the same experience irrespective of the language a team uses supports today’s need to author services in any programming language. If the platform relies on a native build process to achieve this, the ability to extend and customize that process is crucial to the platform’s success. This matters most when software engineering teams have niche requirements and must follow complex or highly specific steps to produce their container images.
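To illustrate, a buildpacks-based platform keeps the build command identical across languages while still letting teams with niche requirements customize the process; the service names and the custom buildpack path below are hypothetical:

```shell
# The same command regardless of language: the builder detects the toolchain.
pack build team/go-service   --path ./go-service
pack build team/node-service --path ./node-service

# Teams with specific build steps can extend the process with a custom buildpack.
pack build team/legacy-service --path ./legacy-service \
  --buildpack ./buildpacks/company-certs
```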
Modern applications, with few exceptions, are data-driven. To deliver the promised efficiency for developers, platforms must broker the connections between applications and the data services they consume. Containers complicate this slightly, and orchestrating them on Kubernetes even more so. Relieving this burden through a service mesh or service broker strategy is a common way to address the issue, and it makes a platform far more compelling.
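One way such brokering can be expressed is the Service Binding for Kubernetes specification, which projects a data service’s connection details into a workload declaratively; a minimal sketch, in which the workload and service names and the Postgres operator API group are hypothetical:

```yaml
apiVersion: servicebinding.io/v1beta1
kind: ServiceBinding
metadata:
  name: myapp-orders-db
spec:
  workload:             # the application consuming the data service
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  service:              # the data service being consumed (hypothetical operator)
    apiVersion: postgres.example.org/v1
    kind: PostgresCluster
    name: orders-db
```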
- Baked-in security
Containerized environments are secured very differently from traditional workloads. Because container workloads are ephemeral and the architecture they run on is shared (kernel, permissions, network behavior, etc.), traditional methods of securing workloads do not apply. Platforms built around containers and container orchestrators must account for these differences.
A fundamental best practice is to use compiled binaries that include all required dependencies, with a build process that sheds every component the application does not need to function. The security a platform provides should also extend beyond building lean images: the platform must help keep container registries secure. Scanning images periodically is an important value addition, and signing all images is fast becoming a best practice. Setting up a zero-trust architecture among the platform components that orchestrate deployments goes a long way toward improving the security posture of workloads.
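A multi-stage Dockerfile is one common way to realize the “compiled binary, shed everything else” practice described above; a sketch in which the module path is hypothetical and Go stands in for any language that can produce static binaries:

```dockerfile
# Build stage: compile a static binary that includes all required dependencies.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: start from an empty image and shed everything the app does not need.
FROM scratch
COPY --from=build /app /app
USER 65534
ENTRYPOINT ["/app"]
```

The resulting image can then be scanned periodically (for example with Trivy) and signed (for example with cosign) before it reaches the registry.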
- Adjustable abstractions
The tremendous success of Kubernetes, together with its operational complexity, has created an urgent need to abstract it. The high barrier to entry, coupled with an exhaustive interface, calls for an abstraction that facilitates the adoption of Kubernetes across the organization. To reiterate: not all teams can “work more, so that others can do less.” The ability to abstract Kubernetes primitives to improve the developer experience lies at the core of the platform’s purpose.
Crafting strongly opinionated platforms can be counterproductive. Developer experience is a fragile mix of technical needs and tribal nuance among software engineering teams. For these teams, the ability to tailor a platform to present needs, and to evolve it alongside future workflows, is critical to continued use of the tool. A platform that offers paved paths but stays agile enough to accommodate such needs has a higher likelihood of success.
Open source platforms score high in this regard, especially those with modular architectures in which one component can be swapped out for another the team sees fit.
- Extrinsic factors
Several important factors outside the realm of technical capability play a critical role in a platform’s success. Foremost among these is the strength of the community around the tool. Every successful developer tool enjoys a devoted following among developer audiences, and many of the tool’s positives derive from this kind of cognitive surplus: adopters can expect support and guidance from practitioners, along with large volumes of knowledge disseminated in a decentralized fashion.
Commercial aspects such as licensing and accountability sometimes play into the equation, especially in verticals requiring strict compliance. In these cases, being attached to commercial entities plays a role in the successful adoption of these platforms.
Examples of Platforms
A good starting point is one of the many solutions already available on the market that suits the needs of your organization. Some examples of tools that provide an abstraction to improve the developer experience over Kubernetes are:
- Red Hat OpenShift – a unified platform to build, modernize and deploy applications at scale.
- Weave GitOps – a free and open source continuous delivery product to run apps in any Kubernetes cluster.
- Gimlet – a GitOps-based developer platform built on de facto standard tools.
- Epinio – installs into any Kubernetes cluster to bring your application from source code to deployment.
- Cloud Foundry Korifi – the Cloud Foundry experience for cloud native workloads.
- D2IQ – makes it easier to build and run Kubernetes at scale.
- KubeFirst – a fully automated open source application delivery and infrastructure management GitOps platform.
- Qovery – a platform to easily duplicate your infrastructure and create production-like environments in AWS.
- Acorn – a simple application deployment framework for Kubernetes.
Conclusion: Self Evaluation
Before you embark on choosing a platform to pave your path to production on Kubernetes, create a checklist of the factors that matter most to you and your organization. Based on the priorities above, a sample checklist might look like this:
- Fluid app delivery: Can developers go from source code to a running deployment in a repeatable way?
- Polyglot support: Does the build experience stay consistent across every language and framework your teams use?
- Baked-in security: Are lean images, registry scanning, image signing and zero-trust defaults part of the platform?
- Adjustable abstractions: Can the platform’s abstractions be tailored to your teams’ workflows as they evolve?
- Extrinsic factors: Is the community strong, and do licensing and support terms fit your compliance needs?
Your evaluations from such an exercise will hopefully make it easier to find a platform optimized to meet the needs of your engineering teams.