Can WebAssembly Solve Serverless’s Problems?
Serverless computing continues to grow in demand across a range of use cases, as organizations seek to create and run applications with minimal infrastructure management. This is good news for a startup that wants to offer software applications, services or both without making significant investments in on-premises servers, or having to configure and manage its own infrastructure through a cloud vendor.
Just one step removed from adding an API or service on top of a prebuilt Software as a Service (SaaS) platform, a serverless alternative allows organizations to begin offering their own business service or application with minimal overhead and fewer administrative and maintenance tasks.
These advantages help to account for why the serverless market is expected to quadruple in value from $10 billion in 2021 to over $40 billion in 2026, according to analyst firm Omdia.
In many ways, serverless perpetuates the myth of the next Airbnb or SaaS provider success story. You might picture the lone startup founder who creates a serverless account, logs onto a cloud service from a laptop and begins building a business. (If this sounds too good to be true, that’s because in many ways it is.)
Still, serverless should be better than it is. Besides the security challenges of sharing policy, data protection and network protection with the cloud vendor, serverless’ drawbacks include, but are not limited to, latency and vendor lock-in for many organizations.
Overcoming these disadvantages is where WebAssembly, or Wasm, at least in theory, really shines. Its runtime is designed to run directly on the CPU, offering a more direct way to execute the same application and code that would otherwise be distributed in containers or across different devices and environments (think edge computing).
The problem, however, is that serverless is generally equated with vendor lock-in. Today’s typical use cases mean that serverless requires the support of a third party, which is more often than not a cloud vendor.
The organization must thus be content to entrust its infrastructure, and in many cases its critical apps, not to multiple vendors but to a single third-party cloud provider. For this reason alone, avoiding vendor lock-in is a key Wasm selling point.
“WebAssembly has the potential of fixing the one big flaw of serverless computing today: vendor lock-in,” Torsten Volk, an analyst at Enterprise Management Associates (EMA), told The New Stack. “As organizations adopt numerous clouds, WebAssembly could offer a turnkey way to run on and integrate with all of them, equally well. Wouldn’t that be great?”
Serverless Is the Beginning
“We know Wasm’s heritage in the browser but we also know that the properties that make Wasm work brilliantly in the browser, make it run just as well in the cloud, at the edge, on devices — anywhere, in fact,” said Liam Randall, CEO and co-founder of Cosmonic.
Thanks to Wasm’s runtime efficiency, an organization potentially has carte blanche to create, deploy and manage applications in a serverless environment without having to manage infrastructure. In theory, at least, Wasm should offer better runtime performance and lower latency than applications running in serverless environments on servers managed by a cloud vendor. Rely on a Platform as a Service (PaaS) solution through an API, and the process becomes easier still.
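To make the PaaS route concrete, here is a hypothetical application manifest, loosely modeled on the format used by Wasm-oriented platforms such as Fermyon Spin. The field names and layout are illustrative assumptions rather than a definitive schema; the point is that the developer declares a route and a compiled .wasm binary, and the platform handles everything else.

```toml
# Hypothetical Wasm PaaS manifest (illustrative; loosely modeled on
# Fermyon Spin's spin.toml). The developer declares what to run and
# where to route traffic; the platform provisions everything else.
spin_manifest_version = 2

[application]
name = "hello-wasm"
version = "0.1.0"

# Route incoming HTTP requests to the "hello" component.
[[trigger.http]]
route = "/hello"
component = "hello"

# The component is just a compiled Wasm binary -- no container image,
# no server configuration, no infrastructure to manage.
[component.hello]
source = "target/wasm32-wasi/release/hello.wasm"
```

In a setup like this, deploying is a single push of the manifest and binary to the platform’s API; scaling, networking and TLS stay on the provider’s side of the line.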
“Wasm definitely has the potential to become the ‘next big thing’ in application platforms, as it has a ton of potential in terms of near-instant startup time, super portable runtime that works consistently across operating systems and cloud platforms, and tight security that is completely based on zero trust authentication. It’s all very exciting,” EMA’s Volk said.
Indeed, traditional serverless functions require 200 milliseconds “or more just to start,” said Matt Butcher, co-founder and CEO of Fermyon. “That’s 200 milliseconds gone before your code even begins to execute. With Wasm, we’ve been able to get this startup time to under a millisecond. Add to that an exceptional developer experience and you’ve got a compelling case to move away from Lambda.”
Wasm’s computing structure is designed in such a way that it has “shifted” the potential of the serverless landscape, Butcher said. This is due, he said, to WebAssembly’s nearly instant startup times, small binary sizes, and platform and architectural neutrality; Wasm binaries can be executed with a fraction of the resources required to run today’s serverless infrastructure.
“Contrasted with heavyweight [virtual machines] and middleweight containers, I like to think of Wasm as the lightweight cloud compute platform,” he noted. “Developers package up only the bare essentials: a Wasm binary and perhaps a few supporting files. And the Wasm runtime takes care of the rest.”
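As a sketch of those “bare essentials,” the following is a complete WebAssembly module written in Wasm’s standard text format (WAT). Compiled to binary with a tool such as wat2wasm, it comes to a few dozen bytes, and any standards-compliant runtime, whether in a browser or a server-side engine like wasmtime or wasmer, can instantiate it unchanged:

```wat
;; A complete WebAssembly module in the standard text format (WAT):
;; one exported function, no OS dependencies, no bundled runtime.
(module
  (func $add (param $a i32) (param $b i32) (result i32)
    local.get $a
    local.get $b
    i32.add)
  (export "add" (func $add)))
```

There is no operating system, base image or language runtime packed inside; the host runtime supplies only the capabilities the module is explicitly granted, which is what keeps both the binary and its startup cost so small.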
An immediate benefit of relying on Wasm’s runtime for serverless is lower latency, especially when extending Wasm’s reach not only beyond the browser but away from the cloud, because Wasm can be distributed directly to, and run on, edge devices with relatively low data-transfer and computing overhead.
“Serverless computing is great for really specific use cases. For example, where the biggest priority is a cloud provider managing the infrastructure for the user,” Randall said. “In reality, though, applications can run quicker and more efficiently if we design them expressly for edge or [Internet of Things] use cases. Running applications closer to the user reduces latency and data transmitted over a network, resulting in a better user experience and a lower cost for the developer.”
“Wasm is a new iteration of the ‘write once, run anywhere’ mantra. The same binary can run on Windows, Linux, or macOS. It can run on Intel or Arm architectures, and even on more exotic OSes and hardware profiles,” Butcher said. “This is a key ingredient for success on edge as well as in the cloud: The same application can be moved to the location best suited to the user’s needs.”
In the immediate future, some organizations will likely still opt for tried-and-tested serverless options, despite the computing-performance benefits and lower latency that WebAssembly offers. This is largely because we are still in the early days of Wasm beyond its original browser use case.
“With WebAssembly, you may need to manage your infrastructure, including servers and networking, which can add complexity and cost to your deployment, assuming that support for Wasm in Kubernetes and other orchestrators cannot adopt Wasm-friendly runtimes more quickly,” Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation (CNCF), told The New Stack in an email response.
“While many tools and frameworks are available for WebAssembly (thanks to the collaborative and vibrant ecosystem), they may not be as mature or robust as those for traditional serverless platforms, which can impact development speed and ease of use.”
In the CNCF’s recently released annual survey and report, the foundation stated that “containers are the new normal and WebAssembly is the future.” The report also noted that “with containers now mainstream, in 2022, the uptake of serverless architecture is setting the stage for WebAssembly, which was asked about for the first time in this survey.”
However, while Wasm creates a unified and secure runtime for applications, it “still needs a clear path for an orchestration framework,” Dolezal said. For that, there is runwasi, a project being built within containerd to facilitate running Wasm/WASI workloads managed by containerd, Dolezal said.
Volk, however, urged caution. “Runwasi is a promising and vital project with some good momentum, but we have to remember the warning displayed at the top of its GitHub repo: ‘Alpha quality software, do not use in production,’” he said. “We find a very similar warning in the WAGI repository as well, and Fermyon’s very exciting but still experimental Python SDK was only launched a couple of days ago.
“Once we can confidently run Python applications in Wasm, running on containerd within Kubernetes pods, we will have a truly cloud-independent enterprise-grade serverless application platform.”
At the end of the day, as the CNCF representatives have indicated, “serverless functions and Wasm are the combination we need for this evolutionary cycle of cloud. We’ve now built a complementary suite with VMs, containers, and Wasm, so we probably don’t need a new ‘Kubernetes for Wasm,’” Butcher said.
“We have seen existing orchestrators like HashiCorp Nomad do a remarkable job already. And I am greatly encouraged (and pleasantly surprised) by the increased Wasm momentum within the Kubernetes community, so it may well be that Wasm becomes just a drop-in addition to our existing cloud native ecosystem.”
Learn more about what’s new in Wasm from this episode of The New Stack Makers, recorded at KubeCon + CloudNativeCon North America in October.