Serverless WebAssembly for Browser Developers
WebAssembly (often shortened to Wasm) was built for the web browser. But sometimes a technology grows beyond the intentions of its creators, and Wasm is an excellent example: one place where it shows particular promise is the cloud, where it makes a fantastic platform for running serverless functions.
In this article, we’ll look at the design intent behind Wasm and how it improved the browser. Then we’ll see how Wasm made the leap to a general-purpose technology.
Finally, we’ll examine a particular problem in the cloud (executing serverless functions) and see how Wasm can solve it.
Move Over Applets, Flash and Silverlight!
“WebAssembly… defines a portable, size- and load-time-efficient format and execution model specifically designed to serve as a compilation target for the web.”
This was not the first time something like this had been attempted. A few other languages have been added to (and later removed from) the browser. Java Applets were first. VBScript had a brief life in Microsoft browsers. Silverlight and Adobe Flash also came and went. But this time, the Mozilla folks had done a few things differently:
- Instead of supporting a language, they defined a binary format that existing languages could compile to.
- Instead of going it alone, Mozilla joined forces with Google, Microsoft, Apple and others, and they committed to doing the work under the auspices of the W3C.
- Instead of focusing on augmenting the user interface (as Flash, Silverlight and Java had done), Wasm focused on library usage and sharing code.
The use cases that the Wasm team sought to solve weren’t widgets and streaming players, but porting code from other places to the browser. For example, consider that crufty old C library that has dutifully done some important job for years, but that everyone is afraid to touch for fear of breaking it. Could it be compiled to Wasm and find new life in the browser? Or consider a complex problem that demands highly efficient computing, like graphics processing, where high-performance code might more easily be expressed in a language like Rust. Wasm lets developers write code in whatever language they prefer, and then use it within the confines of the browser.
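To make the porting story concrete, here is a minimal sketch (our own illustration, not from any particular project) of a Rust function exported for the browser. Built with Rust’s standard `wasm32-unknown-unknown` target, the resulting module can be loaded from JavaScript via `WebAssembly.instantiate`; the function name `fib` is hypothetical.

```rust
/// A small "compute-heavy" function exported with a C ABI so a
/// browser's Wasm runtime can look it up by name in the module's
/// export table. (Illustrative example; `fib` is our invention.)
#[no_mangle]
pub extern "C" fn fib(n: u32) -> u64 {
    // Iterative Fibonacci: returns the n-th Fibonacci number,
    // with fib(0) = 0 and fib(1) = 1.
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}

// Build for the browser with:
//   cargo build --release --target wasm32-unknown-unknown
// The same source also compiles natively, which makes it easy to test.
```

The same crate compiles unchanged for native targets, so the logic can be unit-tested outside the browser before shipping the `.wasm` file.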
Wasm is in that moment now: it has been successful in the browser, but people are finding a variety of other uses for it. From compiler toolchains to user-defined functions in a database, Wasm is popping up in some notable spaces:
- SingleStore uses Wasm for user-defined functions inside of their database.
- The Zig language folks recently announced they “annihilated” 80k lines of C++ code by using Wasm to self-host the compiler.
- Docker announced support for running Wasm inside of Docker Desktop, while Microsoft now supports running Wasm inside of Kubernetes clusters.
But there is one use case that I find particularly exciting. WebAssembly seems like an excellent fit for cloud computing. To understand why, let’s start by looking at one of the core technologies of today’s cloud: serverless functions.
Serverless Functions v1
Serverless functions, sometimes called Functions as a Service (FaaS), are intended to provide an easy way to create small cloud services. It’s easiest to understand a serverless function by contrasting it with a server. Web server software listens for HTTP requests on a socket, parses each request and then handles it. Over its process lifetime, a web server may handle hundreds of thousands of separate HTTP requests. The typical HTTP server must also manage SSL connections, system processes, thread pooling and concurrency, and a variety of other lower-level tasks, all in service of answering HTTP requests.
A serverless function is designed to strip away as much of that “server-ness” as possible. Instead, the developer who writes a serverless function should be able to focus on just one thing: responding to an HTTP request. There’s no networking, no SSL configuration, and no request thread pool management; all of that is handled by the platform. A serverless function starts up, answers one request and then shuts down.
This compact design not only reduces the amount of code we have to write, but it also reduces the operational complexity of running our serverless functions. We don’t have to keep our HTTP or SSL libraries up to date, because we don’t manage those things directly. The platform does. Everything from error handling to upgrades should be — and, in fact, is — easier.
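The division of labor described above can be sketched in code. In this minimal illustration (the `Request` and `Response` types and the `handle` function are hypothetical, not any particular platform’s API), the entire serverless function is a pure function from request to response; sockets, SSL and thread pools are the platform’s problem.

```rust
// Hypothetical types standing in for what a serverless platform
// hands to, and expects back from, your function.
struct Request {
    path: String,
    body: Vec<u8>,
}

struct Response {
    status: u16,
    body: Vec<u8>,
}

// The entire "server": one function. The platform accepts the
// connection, terminates SSL, parses the HTTP request, calls
// `handle`, and writes the response back to the client.
fn handle(req: Request) -> Response {
    Response {
        status: 200,
        body: format!("Hello from {}", req.path).into_bytes(),
    }
}
```

Because the function owns none of the surrounding machinery, upgrading the HTTP stack or the SSL library is a platform concern, exactly as the paragraph above describes.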
Given this relentless focus on simplicity, it is no wonder that 4.2 million developers say they have written at least one serverless function. Amazon reports that they execute 10 trillion (yeah, that’s 10 trillion!) serverless functions a month.
As enticing as the programming paradigm is, though, the early iterations of serverless functions suffered from several drawbacks. They were slow to start. The experience of packaging a serverless function and deploying it was cumbersome. Debugging and troubleshooting were difficult. Yet the reason behind these problems is at once easy to understand and surprising.
This brilliant new idea of serverless functions was running on top of the wrong technology stack: lumbering virtual machines, per-language runtimes and package managers. Cloud infrastructure built for a different class of computing was being repurposed for a technology that, in hindsight, it was ill suited to run.
It turns out that the technology we needed to bump serverless from good to great was living in the browser. We just needed to pluck Wasm from there and plant it in the cloud.
What Wasm Does for Serverless
If we are trying to improve the state of serverless functions, a few high-priority things need attention. We need the serverless environment to be blazingly fast and ultra secure, and we want it to hide as many of the “server” details as possible. The second we ask users to pick an operating system or a CPU type, we are forcing them to make server decisions instead of serverless decisions. And when it comes to deploying serverless functions, smaller binaries in well-defined package formats make releases much easier.
It is here that Wasm’s heritage makes it perfect for serverless functions. When we talk about Wasm as “built for the browser,” we are really talking about a few key features that make Wasm a good fit for the browser model:
- Fast startup time. Nobody wants to wait for a page to load.
- Cross-architecture, cross-operating system. Gone are the days when “Internet Explorer is required to view this page.”
- Compact binaries. When we’re moving our code across the internet, we don’t want to be sending big files.
- Secure sandbox. A browser runs untrusted code on a daily basis. We rely on the browser to protect us from both bugs and hackers.
Those four features just so happen to be desired traits for a serverless functions platform. We want zero-latency startup time. We don’t want to know or care about the architecture or operating system on which our function runs. (That’s the joy of serverless, right? We don’t have to care one iota about the server underneath!) We want our binaries to be compact so we can quickly package and upload them. And we want to know if it is safe to run our function in a multitenant cloud.
A serverless functions platform that runs on Wasm would make it easy to build a huge variety of applications, including highly responsive HTTP apps, and then deploy them with a high degree of confidence. This is exactly the use case we at Fermyon had in mind when we built the open source Spin framework.
Many of our go-to technologies started in a niche and grew into general-purpose tools. Wasm is going through such a transition now, as we find new applications beyond the web browser. Serverless functions have enjoyed much success already, but to leap forward, the technology needs faster and more robust underpinnings. Wasm is just such a technology.