Can Kubernetes Solve WebAssembly’s Component Challenges?
The promise of WebAssembly (Wasm) hasn’t yet been fully realized. Wasm is supposed to let you deploy application code once, in the language of your choice, across multiple environments and device types — on any host with a Wasm runtime, regardless of the underlying CPU instruction set.
While Wasm has been shown to work very well in production in the browser and for certain targeted server deployments, a standardized component model that allows developers to “deploy once and run everywhere” has yet to be realized — although that milestone could be achieved as early as next year.
This will happen when a developer can compile code into a Wasm module and deploy it simultaneously across various environments and device types, without having to target each CPU architecture separately.
More specifically, the open source community is hard at work developing WASI, the WebAssembly System Interface — the standard API that gives Wasm modules access to host capabilities and, in many ways, links the modules to the component model. But, again, we’re not there yet.
And then there’s Kubernetes.
Containers and Kubernetes environments are largely ready for Wasm module deployments, and Wasm modules are largely ready for deployment on Kubernetes. Despite initial skepticism and even discussions about the possibility of Wasm one day replacing containers or even Kubernetes, a very good Wasm and Kubernetes fit is emerging.
The Advantages of Using Wasm with Kubernetes
Using Wasm with Kubernetes has some built-in advantages. Wasm binaries’ cold-start times are measured in milliseconds, versus possibly minutes for some virtual machines. And Wasm’s security model is arguably a bit stronger than that of containers and Kubernetes, because Wasm code has no direct access to the Linux kernel.
All code is mediated through the Wasm host runtime, which means you can intercept all the system calls — at least in theory. In other words, Wasm can offer an additional layer of security within the container and Kubernetes cluster.
This advancement is largely due to containerd support for Wasm, as well as Docker’s introduction of beta Wasm support in 2022. That containerd support has since served as a main facilitator of Kubernetes’ ability to host highly distributed deployments and to let users spin applications consisting of Wasm modules up and down at will.
The use of containerd plays an important role as well; containerd shims are processes that sit between containerd and the runtime code, integrating workloads — including ones that aren’t traditional Linux containers — with the container runtime.
“The work done by Microsoft and many others to add Wasm shims (such as the Spin shim) to the containerd project is what unlocks Wasm on both Docker Desktop and many Kubernetes distributions,” Matt Butcher, Fermyon co-founder and CEO, told The New Stack in an online conversation.
Docker Desktop and Microsoft Azure AKS both led the way in exemplifying how this is done, Butcher said. More recently, he noted, Civo has introduced support in its Kubernetes offering, “illustrating that cloud providers large and small are facilitating the shift in favor of WebAssembly.”
Wasm and OpenShift
Other software makers and service providers are jumping on board the Kubernetes train for Wasm. They include Red Hat, which is already adapting OpenShift to accommodate Wasm modules and support Fermyon’s Spin. Red Hat sees Wasm as an interesting approach for cross-platform development and contributes to the related upstream communities.
As of today, Kubernetes provides the orchestration and infrastructure needed to run Wasm-based workloads, which provides an additional level of flexibility to existing Kubernetes investments, Ivan Font, principal software engineer at Red Hat, told The New Stack.
As of now, there is no productization of Wasm within Red Hat’s platforms. But the company says it will continue to collaborate with other vendors and communities to develop its potential in line with what user organizations need.
Red Hat is developing Spin to work on OpenShift, as well as contributing to the development of WASI (the WebAssembly System Interface) and WasmEdge, an extensible Wasm runtime created for cloud native (Kubernetes, of course), edge and decentralized applications. WasmEdge also powers serverless apps, embedded functions, microservices, smart contracts and IoT devices, according to WasmEdge documentation.
As it stands now, Red Hat’s OpenShift defaults toward WasmEdge, because an RPM package for it is already supported on the Fedora Linux distro, alongside the added support Red Hat provides for Wasm.
To run a particular workload as a Wasm-based workload for execution on OpenShift, you would currently need to specify an annotation, indicating what you want to do. This execution is done within a container, but it has unique characteristics.
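As a rough sketch of what such an annotated workload can look like — assuming a crun-based setup of the kind WasmEdge’s documentation describes, with the annotation key taken from those docs and all names and images invented for illustration — it might resemble:

```shell
# Hypothetical sketch: annotate a pod so the runtime treats its image as a
# Wasm module rather than a regular Linux container.
# Annotation key per WasmEdge's crun docs; check your platform's documentation.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo
  annotations:
    module.wasm.image/variant: compat-smart  # ask the runtime to execute this as Wasm
spec:
  containers:
  - name: demo
    image: registry.example.com/wasm-demo:latest  # image contains only the .wasm module
EOF
```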
When the Wasm application is packaged, it’s just a module within an image. This means the Open Container Initiative (OCI) container image doesn’t include any external dependencies or a complete operating-system file system. Consequently, the image sizes are very small, because they contain only your Wasm module. OCI is the standard image format for containers in general, and it applies here as well, Font said.
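To illustrate, a minimal OCI image for a Wasm module can be built from `scratch`, with no base OS layer at all. This is a sketch assuming Docker’s Wasm beta tooling; the file name, registry and platform flag are illustrative and may differ in your setup:

```shell
# Sketch: package a compiled Wasm module into a minimal OCI image.
# The image holds only the module itself -- no OS filesystem, no dependencies.
cat > Dockerfile <<'EOF'
FROM scratch
COPY app.wasm /app.wasm
ENTRYPOINT ["/app.wasm"]
EOF

# Build for the Wasm "platform" (flag per Docker's Wasm beta; adjust as needed).
docker buildx build --platform wasi/wasm -t registry.example.com/wasm-demo:latest .
```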
Fermyon has been working with Liquid Reply, creators of KWasm, as well as Red Hat to bring a level of parity between OpenShift’s Wasm capabilities and that of containerd-based Kubernetes distributions, Butcher said. The collaboration, he said, extends “from the enterprise-scale AKS to the tiny K3s.”
More Tools for Wasm and Kubernetes Ahead
There will be more tools available for developers to build and deploy applications on Kubernetes clusters, Saiyam Pathak, field CTO at Civo and a Cloud Native Computing Foundation Ambassador, told The New Stack.
“If you have a Kubernetes cluster, you can simply configure a node to make it WebAssembly ready,” Pathak said.
The process is straightforward, Pathak said: It involves ensuring everything is configured correctly, including installing a Wasm runtime, adding the runwasi shims, editing the containerd config.toml file and restarting containerd on that particular node. “This is fantastic because you can now use the same tooling and deployment processes that have been in use for the past 10 years to take advantage of the latest WebAssembly technology for your next set of applications,” Pathak said. “Whether you are building an API or extending your app, you can work with WebAssembly within the same infrastructure and Kubernetes cluster, alongside Docker.”
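A minimal sketch of those steps, assuming the runwasi Spin shim and default containerd paths (binary names and paths vary by distribution):

```shell
# Sketch of making one node WebAssembly-ready with a runwasi shim.
# 1. Put the shim binary where containerd can find it.
sudo install containerd-shim-spin-v2 /usr/local/bin/

# 2. Register the runtime in containerd's config.
cat <<'EOF' | sudo tee -a /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v2"
EOF

# 3. Restart containerd on that node so the new runtime is picked up.
sudo systemctl restart containerd

# 4. Expose the runtime to the cluster as a RuntimeClass.
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin
handler: spin
EOF
```

Pods can then opt in by setting `runtimeClassName: wasmtime-spin` in their spec.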
Kasten, which offers a Kubernetes data management platform and disaster-recovery support, has its eyes on the utility of Wasm for its Kasten K10 platform’s technical capabilities and in support of its customers. Wasm is not the comprehensive solution for data mobility support, but it’s something Kasten is looking at for Kubernetes, as Kasten explores how it can use Wasm inside of K10.
“Being a Kubernetes-native app, we’re exploring how we could leverage WebAssembly to streamline and make things faster, more efficient and secure: all of the benefits that you get from Wasm itself,” Michael Cade, global field CTO at Veeam, owner of Kasten, told The New Stack. “But is WebAssembly the answer to everything? Well, no.”
Comparatively, virtual machines are not the answer to everything, either, Cade said. “If I’ve got a physical hardware card, in a physical machine, on one of my most important application servers, I may or may not virtualize that. And if not, I’m never going to be able to containerize it.”
“Where WebAssembly does thrive, especially for Kubernetes, is around the three S’s: speed, security and being supported already by most of the web frontend servers or web modules,” Cade said.
RunWasi: A Catalyst for Progress
The progress made with the open source RunWasi project could be a catalyst for Wasm-on-Kubernetes deployments. RunWasi was created to let containerd manage Wasm workloads, by providing shims that load Wasm modules into a Wasm runtime.
Deployment is handled through containerd shims, with RunWasi providing the necessary code. These shims hand the Wasm module off from containerd to the low-level runtime that executes it.
The following list shows popular Wasm containerd shims, courtesy of Microsoft’s Deis Labs:
- Lunatic, an Erlang-inspired runtime for fast, robust and scalable server-side Wasm applications.
- Spin, a developer tool for building and running serverless Wasm applications.
- Slight, a Wasmtime-based runtime for running Wasm applications that use SpiderLightning (WASI-Cloud-Core) capabilities.
- Wasm Workers Server, a tool to develop and run serverless applications on top of Wasm.
At Docker’s annual user conference in early October, Nigel Poulton, an author and software trainer, showed and described how he used Spin as the Wasm framework to create a Wasm artifact for an app inside a Wasm module, which was then packaged into a Docker container. He also described how he set up a multinode Kubernetes cluster with a control plane node and two workers.
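For readers who want to retrace the first part of that demo, the Spin side roughly follows the standard Spin CLI workflow; the template and app names here are illustrative:

```shell
# Sketch of the Spin workflow: scaffold, build and run a Wasm app locally
# before packaging it into a container image.
spin new http-rust hello-wasm   # scaffold an HTTP app from the Rust template
cd hello-wasm
spin build                      # compile the app to a wasm32-wasi module
spin up                         # serve it locally on the Spin runtime
```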
Crucially, Poulton described how he had the “necessary software to run these Wasm workloads, and it’s all straightforward containerd stuff.”