
Overcoming the Kubernetes Skills Gap in Edge Computing

Tuning and optimizing multicluster workloads is likely outside the expertise of most Kubernetes developers. Automated commercial offerings have evolved to fill the gap.
Apr 26th, 2022 3:00am by Daniel Bartholomew
Featured image via Pixabay.

Daniel Bartholomew

Daniel Bartholomew is co-founder and CTO at Section, an edge-as-a-service platform provider that empowers application engineers to accelerate their path to edge computing. Daniel has spent over 20 years in engineering leadership and technical consulting roles.

Kubernetes is now the most widely used orchestration platform, with nearly one-third (31%) of all backend developers using K8s, according to a recent study by the Cloud Native Computing Foundation and SlashData. Edge developers, in particular, are embracing Kubernetes, with an 11% increase in adoption over the past year, nearly three times the increase in the number of backend developers overall.

That finding should not be surprising, as there’s a natural synergy and correlation between containerization of applications and workload distribution at the edge. In simple terms, the lightweight portability of containers makes them ideally suited to distribution, while their abstraction facilitates deployment to heterogeneous networks of compute infrastructure. Moreover, Kubernetes adds the orchestration needed to best coordinate this sort of distributed multiregion, multicluster, multiprovider topology.
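
As a rough sketch of what that orchestration layer buys you, the example below uses the official Kubernetes Python client to declare a Deployment whose replicas are spread evenly across regions via a topology spread constraint. The image name, labels and replica count are hypothetical; only the well-known topology.kubernetes.io/region label is a Kubernetes standard.

```python
# Sketch: spread an edge workload across regions with a topology spread
# constraint. Assumes kubeconfig access; image and labels are hypothetical.
from kubernetes import client, config

config.load_kube_config()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "edge-api", "labels": {"app": "edge-api"}},
    "spec": {
        "replicas": 6,
        "selector": {"matchLabels": {"app": "edge-api"}},
        "template": {
            "metadata": {"labels": {"app": "edge-api"}},
            "spec": {
                "containers": [{
                    "name": "edge-api",
                    "image": "registry.example.com/edge-api:1.0",
                }],
                # Ask the scheduler to balance replicas across regions.
                "topologySpreadConstraints": [{
                    "maxSkew": 1,
                    "topologyKey": "topology.kubernetes.io/region",
                    "whenUnsatisfiable": "ScheduleAnyway",
                    "labelSelector": {"matchLabels": {"app": "edge-api"}},
                }],
            },
        },
    },
}

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

A single cluster rarely spans regions, of course; in a multicluster edge topology, the same declarative placement pattern is applied per cluster by whatever control plane coordinates them.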

Organizations that have already adopted Kubernetes are thus primed to rapidly adopt modern edge deployments for their application workloads, and even those that are still at the single-cluster stage are in a position to rapidly leapfrog to the distributed edge.

Naturally, these organizations are hungry for developers with Kubernetes experience. The CNCF research cited above confirms the correlation between edge, containers and Kubernetes, noting that developers working on edge computing have the highest usage of both containers (76%) and Kubernetes (63%) of all surveyed segments.

But in this edge context, what does it mean to find and hire those who “know Kubernetes”?

Kubernetes at the Distributed Edge

This is not an idle question, as the vast majority of developers who know Kubernetes are adept at building and pushing containers to Kubernetes clusters using standard DevOps tools and workflows. Some specialization naturally occurs as the leading cloud vendors offer their own unique flavors of Kubernetes (EKS, GKE, AKS). But at a macro level, growth in K8s adoption facilitates this consumption model, with a natural proliferation of platforms offering consistent, familiar Kubernetes patterns.

However, when it comes to the distributed edge, knowing how to manage containers across Kubernetes clusters becomes increasingly complex. What happens when you add hundreds of edge endpoints to the mix, with different microservices being served from different edge locations at different times? How do you decide which edge endpoints your code should be running on at any given time? More important, how do you manage the constant orchestration across these nodes among a heterogeneous makeup of infrastructure from a host of different providers?
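
A concrete first step toward answering those questions is simply seeing what the fleet looks like. The sketch below (Python client; the topology.kubernetes.io labels are standard Kubernetes conventions, everything else is illustrative) counts nodes per region, giving a sense of how many distinct locations the scheduler is actually juggling.

```python
# Sketch: inventory an edge fleet by region using standard topology labels.
# Assumes kubeconfig access; unlabeled nodes are grouped separately.
from collections import Counter
from kubernetes import client, config

config.load_kube_config()
nodes = client.CoreV1Api().list_node().items

regions = Counter(
    node.metadata.labels.get("topology.kubernetes.io/region", "unlabeled")
    for node in nodes
)
for region, count in sorted(regions.items()):
    print(f"{region}: {count} node(s)")
```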

The portion of Kubernetes developers with a deep understanding of the underlying network operations involved in managing a distributed, multicluster topology is comparatively tiny (I would argue only about 5% of the Kubernetes community). Few engineers who “know Kubernetes” can, for instance, have an informed conversation about which of the top networking plugins (Calico, Flannel, Cilium, etc.) to use on a particular Kubernetes cluster and why. These are complex topics that span workload delivery (networking) and workload development (application), and they are typically beyond what most developers are familiar with.
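
A hedged way to even start that conversation is to check which CNI plugin a given cluster is already running. The sketch below (Python client; the daemonset name patterns are common packaging conventions for Calico, Flannel and Cilium, not guarantees, and some installs use other namespaces) looks for the usual suspects in kube-system.

```python
# Sketch: guess a cluster's CNI plugin from daemonset names in kube-system.
# Name patterns are conventions, not guarantees; some installs differ.
from kubernetes import client, config

config.load_kube_config()
daemonsets = client.AppsV1Api().list_namespaced_daemon_set("kube-system").items

KNOWN_CNI = {"calico": "Calico", "flannel": "Flannel", "cilium": "Cilium"}
found = sorted({
    label
    for ds in daemonsets
    for key, label in KNOWN_CNI.items()
    if key in ds.metadata.name.lower()
})
print("CNI plugin(s) detected:", ", ".join(found) or "none recognized")
```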

Specialization in the Developer Community

This isn’t particularly surprising. There is significant specialization happening in the developer community at the moment, where engineers tend to become extremely adept in specific areas. And most of that specialization, starting with the computer science curriculum over the last 10 years, is focused on higher-level languages, tools and applications.

For instance, a large number of engineers are highly skilled at working with JavaScript, React JS and similar languages and libraries to build websites. Similarly, there is a massive amount of machine learning specialization, allowing teams to become adept at pushing models into production. Few are focused on the interactions of applications, hardware and networking.

The cloud, of course, was supposed to make this need to understand the underlying infrastructure magically go away. Developers can simply deploy workloads to a managed K8s environment that abstracts the underlying infrastructure. However, while cloud providers might provide the infrastructure, they are not managing, troubleshooting or optimizing it in any way other than to keep it operational. Beyond that, you’re on your own.

In any cloud cluster, there’s a hierarchy of abstracted layers: physical networking, virtual networking and Kubernetes virtual networking. When you’ve got a tricky problem like packet loss, and you need to work out where in those three layers of abstraction that problem is occurring, the majority of engineers are not appropriately skilled to troubleshoot the issue. Similarly, tuning and optimizing multicluster workloads is likely outside the expertise of most Kubernetes developers.
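
A reasonable first pass at that kind of triage is to rule out what Kubernetes itself can report before descending into the virtual and physical layers. The sketch below (Python client; the restart threshold and the interpretation are assumptions, not a diagnosis) flags nodes reporting NetworkUnavailable and pods restarting repeatedly, which at least tells you whether the problem is visible at the Kubernetes layer at all.

```python
# Sketch: first-pass packet-loss triage at the Kubernetes layer, before
# digging into virtual or physical networking. Threshold is arbitrary.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# 1. Nodes whose network (often the CNI) is reporting trouble.
for node in core.list_node().items:
    for cond in node.status.conditions or []:
        if cond.type == "NetworkUnavailable" and cond.status == "True":
            print(f"node {node.metadata.name}: NetworkUnavailable ({cond.reason})")

# 2. Pods restarting repeatedly, a common symptom of flaky connectivity.
for pod in core.list_pod_for_all_namespaces().items:
    for cs in pod.status.container_statuses or []:
        if cs.restart_count > 5:
            print(f"pod {pod.metadata.namespace}/{pod.metadata.name}: "
                  f"{cs.restart_count} restarts")
```

If nothing shows up here, the loss is likely happening below Kubernetes, in the virtual or physical network, which is exactly where most application teams run out of tooling and expertise.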

Is this a problem? It certainly has been. Companies have struggled to build and retain teams with this specialized skill set. It’s also been a risk for those organizations, as those teams often are composed of just a few individuals, meaning that a critical understanding of the distributed environment can walk out the door at any moment.

Similarly, it has hindered edge adoption: Companies that could take advantage of the increased performance, decreased latency, improved resiliency, better scalability, decreased cost, workload isolation and other benefits of the distributed edge have shied away due to its “complexity.” In fact, Google’s Best Practices for Compute Engine Regions Selection specifically advises companies to stick with a single region due to complexity concerns, even though elsewhere it acknowledges the advantages of a multiregion deployment.

Thriving at the Edge

In many ways, this is compounded by the lack of deep knowledge across the broader Kubernetes community. Knowing how to orchestrate, tune and troubleshoot an edge deployment requires a specialized skill set. From an organizational standpoint, the outcomes of an edge deployment are a competitive advantage; the cost and management burden of building that skill set in-house is not.

However, one can argue that most Kubernetes developers shouldn't have to know this stuff. And on that point, I'd agree. Companies shouldn't have to build bespoke teams and infrastructure for distributed multicluster deployments. That shift away from in-house is exactly what we're beginning to see in the edge market, where custom infrastructure and orchestration skill sets are giving way to automated commercial offerings.

Ultimately, this is a good thing all around. Those engineers with expertise in Kubernetes networking and hardware configuration will find a home with the companies offering distributed-edge deployment platforms, allowing those companies to develop better, deeper, more comprehensive offerings than any bespoke system could provide. The vast majority of Kubernetes developers will continue as consumers of Kubernetes resources but will now be able to “upskill” their capabilities by using the same Kubernetes patterns to control edge deployment and optimization. This will ultimately allow organizations to become consumers of edge services while focusing on their core business, rather than building in-house expertise around distributed network operations.

This is exactly the evolution one would expect and precisely what’s needed to ensure Kubernetes continues to thrive at the edge.

This is one of the many discussions we'll be holding at KubeCon EU 2022, to be held in Valencia, Spain, May 16-20.

 
