Developer Portals Can Abstract Away Kubernetes Complexity

Developers and Kubernetes pros don’t speak the same language. Kubernetes is about clusters, nodes, control planes, pods, versions and namespaces. When developers say “deployments,” they don’t mean a Kubernetes object that manages the desired state of a set of replicas of a pod. But Kubernetes DevOps pros might think they mean just that. A DevOps engineer might interpret “deployment” as “running pods,” while a developer will mean a deployment run in a CI pipeline. Can we avoid this Tower of Babel dynamic?
It’s true that developers must have a basic understanding of Kubernetes concepts such as pods, services, deployments and replica sets. They should probably be familiar with the Kubernetes API and be able to use command-line tools such as kubectl to interact with the cluster. But there’s a limit to that. You can’t be an expert at everything, and there’s a price to pay in terms of cognitive load.
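To ground the vocabulary, here is a minimal Kubernetes Deployment manifest; the service name and image are made up for illustration. It declares a desired number of pod replicas running a container image, which is the object a DevOps engineer pictures when they hear "deployment":

```yaml
# Minimal illustrative Deployment; name and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-service
spec:
  replicas: 3                     # desired state: three pod replicas
  selector:
    matchLabels:
      app: payments-service
  template:
    metadata:
      labels:
        app: payments-service
    spec:
      containers:
        - name: payments-service
          image: example.registry.io/payments-service:1.4.2
```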
How can we help developers sort through tons of Kubernetes data to make sense of their application versions, states, replica counts and load?
Not Keeping It Simple: Common K8s Visibility Solutions
Do Kubernetes-native CD solutions such as Argo CD or Flux CD, or tools like Lens or Rancher, give developers the Kubernetes visibility they need?
It isn’t that simple. In most cases, a ton of Kubernetes metadata is dumped into these tools’ views, flooding the developer with unnecessary information. In addition, they usually present data about a single cluster and require extra work to show multicluster data and to maintain those views afterward.
Here’s an Argo CD example:
Although this graph provides great visibility, it can be intimidating and difficult to understand at a glance. Of all the different nodes in the graph, which ones represent the actual application code I’m running? How can I differentiate between my code and the additional infrastructure provided by K8s, which I as a developer have no control over? What is a good indicator that something is wrong with one of my microservices?
This view can’t be edited or filtered, making it harder for a developer to make sense of the graph and answer the questions listed above effectively.
Another option is to use a Kubernetes-specific tool such as Lens, K9s or Rancher. These tools provide a more streamlined and user-friendly interface for managing Kubernetes clusters, and they offer an improved experience over the default kubectl CLI. However, they still require some knowledge of and experience with Kubernetes to be used effectively, making them too complex for developers who are encountering K8s for the first time.

Lens example
This shouldn’t be a surprise. These tools were built for DevOps: they were designed around the DevOps-to-infrastructure relationship, not the DevOps-to-developer dialogue, let alone developer ownership.
Using Internal Developer Portals to Overcome K8s Complexity
You’ve probably heard about platform engineering, internal developer portals or both.
Platform engineering implements reusable, self-service capabilities on top of automated infrastructure operations. It optimizes the developer experience and drives developer productivity. It also shifts DevOps away from ticket ops and toward building a better platform for application delivery.
At the core of the platform engineering approach is an internal developer portal. The internal developer portal is where developers go to consume self-service actions built by the platform team, through a product-like interface.
Within the developer portal, and central to the value it brings to the organization, lies the software catalog. This is where K8s data can be ingested, abstracted and visualized for developers.
Think of showing K8s data in the software catalog as “whitelisting” the data that is needed for developers while still allowing much more K8s data to be kept inside for other types of users. This additional data is valuable since DevOps also needs the software catalog.
The developer portal can contain any and all data you send to it, which might be too much if it isn’t properly abstracted, curated and displayed for the developers consuming it. Fortunately, a high-quality developer portal gives you exactly the tools to build abstractions that fit developers, given their roles, experience and organization.
But wait! How does the K8s data get into the software catalog? This is where we need a data model. Software catalogs begin with schema definitions for catalog assets. In Backstage these are called “kinds” (the core ones include component, API, resource, system, domain and group), and in Port they are called blueprints.
Regardless of the name, this basic building block represents assets such as microservices, environments, clusters, packages, etc. These definitions matter because they let you create a data model that’s as opinionated as you are, and that matches the way the engineering organization really works. Once these schemas are defined, data gets populated into them, creating entities. In this case, we would be mapping and populating Kubernetes data.
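As a concrete illustration, this is roughly what a Backstage entity of kind component looks like in a catalog-info.yaml file; the service name, owner and system here are hypothetical:

```yaml
# catalog-info.yaml — illustrative Backstage Component entity
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service
  description: Handles payment processing
spec:
  type: service
  lifecycle: production
  owner: team-payments
  system: checkout
```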
Conventional wisdom says to stream all Kubernetes data into a given microservice entity. However, it is better to stream the data into entities whose blueprints represent every logical unit or component in the K8s cluster, because that is what makes the data intelligible, and not every unit is a microservice. For instance, for a running cluster you can use a cluster entity, correlate it with all of the available namespace entities, which are neatly shown in a table, and see which services are deployed in each namespace.
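A data model along those lines could look something like the sketch below. The blueprint names, properties and relation syntax are purely illustrative rather than any specific portal’s schema; the point is that cluster, namespace and running service each get their own entity type, linked by relations:

```yaml
# Conceptual data model sketch (illustrative names, not a product schema)
blueprints:
  - identifier: cluster
    properties: [version, region]
  - identifier: namespace
    relations:
      cluster: single          # each namespace lives in exactly one cluster
  - identifier: runningService
    properties: [image, replicas, readyReplicas, healthStatus]
    relations:
      namespace: single        # each running service is deployed into one namespace
```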
In the example below, we can see how Kubernetes data is inserted into the right entities in the software catalog. Some data is reflected in a microservice, some in an environment and some in a running service entity. Showing the Kubernetes data in context makes it much easier to understand.
Let’s dive deeper into a running service view. It shows select K8s data that is relevant for the developer, but not all of the data. The things that the developer doesn’t care about have already been abstracted away.
The running service entity unifies data from many sources. The marked properties come from Kubernetes. Information such as the comparison between current and desired replicas immediately helps the developer understand whether their service is healthy, whether it can handle the current load and whether it crashes frequently. Fields such as the deployment strategy make it easier to understand service availability when a new version is rolled out.
In addition, combining Kubernetes data with log URLs and other information provided to the running service from other sources paints a complete picture for the developer.
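The replica comparison and the deployment strategy in that view map back to a handful of fields on the underlying Kubernetes Deployment. Here is a trimmed, illustrative slice of what the K8s API returns; the numbers are made up:

```yaml
# Trimmed, illustrative Deployment spec/status as returned by the K8s API
spec:
  replicas: 3                  # desired replicas
  strategy:
    type: RollingUpdate        # how new versions are rolled out
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
status:
  replicas: 3
  readyReplicas: 2             # fewer ready than desired hints at crashes or load issues
  availableReplicas: 2
  unavailableReplicas: 1
```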
How Much Do Developers Need to Know about K8s Anyway?
The right answer to this question is “it depends.” Organizations vary, and there is no one-size-fits-all way to set up the software catalog. Platform engineers should understand the abstraction level that best fits their organization, depending on the level of developer Kubernetes expertise.
User personas vary too. Frontend engineers need different abstractions than infrastructure or backend teams. For example, a frontend engineer might only care about their microservice health status and maybe need a link to logs or S3 buckets containing artifacts, while a backend engineer will want to see the CPU and memory limits, the liveness probe of their instances and network policies.
A well-designed developer portal should allow you to create different abstractions for different types of personas or teams.
Using a Kubernetes Exporter
Let’s see how Port uses its Kubernetes exporter to reflect K8s metadata into a developer portal.
In general, ingesting metadata into the catalog requires data from various sources. Git provider data is used to map monorepos and multirepo setups into microservices and to reflect GitOps operations within the developer portal. The same applies to tools such as Jira, PagerDuty and Snyk, as well as cloud resources and CI/CD tools.
For Kubernetes we want to bring all the data supported by the K8s API to show running services, environments and more. Port provides an open source Kubernetes exporter that allows you to perform extract, transform, load (ETL) on data from K8s into the desired software catalog data model.
The exporter is a Helm chart installed on the cluster. Once it is set up, it continuously syncs changes, so additions, updates and deletions are automatically and accurately reflected in Port.
The Helm chart uses a YAML configuration file to describe the ETL process that loads data into the developer portal. This approach strikes a middle ground between an overly opinionated K8s visualization that might not work for everyone and a too-broad approach that could introduce unneeded complexity into the developer portal.
- Extract: In the K8s exporter’s configuration, you specify which K8s data you want to pull. Every object in the K8s API is supported, including CRDs; in the sketch after this list, we choose replica sets. To avoid data fatigue, you can add a filter pattern written in the jq query language. The query in the sketch, for example, only brings in objects whose namespace does not begin with kube, filtering out internal Kubernetes namespaces.
- Transform: In this part, you can choose what data you want to report to your software catalog and perform jq manipulations to make sure it meets your desired representation in the developer portal.
- Load: This builds the entity body according to the blueprint schema in the software catalog. It’s like the body of a POST request you’d design for your REST API, but for a developer portal.
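Putting the three steps together, an exporter configuration looks roughly like the sketch below. This is a simplified illustration rather than a drop-in file: the workload blueprint and its property names are assumptions, and field names can differ between exporter versions.

```yaml
# Simplified sketch of a K8s exporter config; blueprint and property names are assumptions.
resources:
  - kind: apps/v1/replicasets
    selector:
      # Extract: skip objects in internal namespaces that begin with "kube"
      query: .metadata.namespace | startswith("kube") | not
    port:
      entity:
        mappings:
          # Transform: jq expressions shape the K8s object into catalog properties
          - identifier: .metadata.name
            title: .metadata.name
            # Load: the entity is created under this (assumed) blueprint
            blueprint: '"workload"'
            properties:
              replicas: .spec.replicas
              readyReplicas: .status.readyReplicas
              namespace: .metadata.namespace
```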
Conclusion
The software catalog in the internal developer portal can support developers by showing the right amount of Kubernetes data that works for them, freeing them to be autonomous and productive. Doing this requires thinking about the data model in the software catalog and about what data developers need and want.
Try Port’s free version here.