4 Factors of a WebAssembly Native World
WebAssembly, abbreviated to Wasm, offers us an incredible opportunity to rethink the way we design and operate compute that goes beyond cloud and serverless. With WebAssembly, we can finally arrive at a pure “jobs to be done” approach to computing. This was the promise of serverless that was never realized.
When first introduced, serverless promised to run functions and logic while abstracting away all complexity. Unfortunately, serverless quickly departed from that pure vision. While it was possible to use a serverless service like AWS Lambda, developers and platform teams seeking DIY serverless ran into the tough realities of deploying, tuning and maintaining the complicated stack of supporting infrastructure required to run at scale.
Serverless was supposed to deliver on all the promises we want from WebAssembly, but it hasn’t worked out that way. Many cloud and serverless providers built their stacks on proprietary shims, limiting portability and forcing users to write functions that only ran on one platform. Multicloud serverless became a challenge, undercutting one of the core value propositions of the architectural convention.
In theory, serverless was supposed to be highly event-driven and therefore cost-effective, invoked only when needed. In practice, unpredictable scaling and challenging resource planning often translated into ballooning serverless bills. Cloud providers billed customers for slow cold starts, though cold starts have recently become faster and cheaper. More critically, the complexities of managing serverless logic have forced DevOps teams to create entirely new sets of tooling and to actively manage something that was supposed to be “shift left” simplicity.
Much of the server-side WebAssembly community considers WebAssembly to be a serverless redo (see “A Reckoning for Serverless” by Fermyon): a chance to rebuild serverless from the ground up. But the pull of the same forces that corrupted serverless remains strong. What is required to ensure that WebAssembly doesn’t end up as serverless redux? Here are four key principles to keep it on track.
Environmentally Agnostic
Being environmentally agnostic has been the long-held promise of containers and other abstraction layers. WebAssembly can actually get this right because the runtime operates at a low level. And because it was originally designed to run in a browser (a construct that can run nearly anywhere, on nearly anything), WebAssembly was designed to assume nothing about its host environment. This gives it a leg up on previous attempts at 100% portable technologies, none of which truly lived up to the promise.
The critical part of sticking to this principle is designing strong conformance standards so that the core WebAssembly runtime engine and capabilities remain the same and similarly addressable across hardware, cloud, content delivery networks, edge compute, IoT and on user devices.
Abstractions That Developers Can Love
WebAssembly does not have any networking, storage or data opinions, and that’s a really good thing. Look no further than POSIX for reasons to keep WebAssembly purely compute. Originally created as a way to make operating systems more interoperable, POSIX evolved into a highly opinionated abstraction layer that passed judgments on security, scaling, API compatibility and more.
Underlying POSIX is the concept of a filesystem, which has become a major barrier to redesigning computing architectures for the distributed era. It keeps us locked inside the conceptual framework of files and directories.
The proposed WebAssembly Component Model promises to provide a set of abstractions that are more appropriate for modern applications and available on a strictly opt-in basis. Files will still be there when needed, but cloud native storage solutions, such as key-value and simple objects, will be provided through interfaces implemented by the runtime or another Wasm module.
Plugins as First-Class Citizens
This is a clear corollary to the first principle. In all other compute and runtime paradigms, strong opinions and built-in support for networking, data and other elements mean plugins (aka extensions) are rightly perceived as security risks. For this reason, they are rarely granted most-privileged status. This means that developers seeking to add functionality via plugins often face performance and scaling blocks due to baked-in assumptions and mechanisms designed to protect the core runtime and compute functionalities from bad actors.
Because WebAssembly is naive and a blank slate, extensions and plugins can be the primary mechanism for all sorts of functionality, while still operating, by default, from a least-privilege and highly secure starting point. By pairing first-class status for plugins with this sandboxed-by-default posture, WebAssembly is more likely to encourage development of robust plugin ecosystems that give the community the widest selection of integrations and capabilities.
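One way to picture that least-privilege starting point (a hypothetical host design, not an existing WebAssembly API): each plugin runs behind an explicit grant list, so it has full power over the capabilities it was given and no reach beyond them.

```python
# Hypothetical capability-gated plugin host: a plugin can do everything
# its grants allow, and nothing else. All names here are illustrative.

class PluginHost:
    def __init__(self):
        self._grants: dict[str, set[str]] = {}

    def register(self, plugin: str, capabilities: set[str]) -> None:
        # Deny by default: a plugin holds only what it was granted.
        self._grants[plugin] = set(capabilities)

    def invoke(self, plugin: str, capability: str) -> str:
        if capability not in self._grants.get(plugin, set()):
            raise PermissionError(f"{plugin} was never granted {capability}")
        return f"{plugin} used {capability}"

host = PluginHost()
host.register("image-resizer", {"memory", "cpu"})
host.invoke("image-resizer", "memory")     # allowed: explicitly granted
# host.invoke("image-resizer", "network")  # would raise PermissionError
```

The point of the sketch is that first-class status and security are not in tension: the plugin is a full participant within its grants, and the host never has to defend a privileged core from it.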
Keep the Data Raw
Due to its origins in the old CPU-centric world, compute has continued to focus on executing on files via filesystems. Serverless attempted to break free of this limitation. But it remained tethered to the data-handling conventions of the higher-level languages in which a function was written.
WebAssembly does not carry the same burdens as serverless because it treats data as a raw object to be handled in the runtime, with rules defined in the higher-level source language and compiled down into the WebAssembly module itself. A key benefit of this convention is the one-for-one in/out byte stream design, which maps naturally to modern distributed computing designs like microservices.
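That bytes-in/bytes-out convention can be illustrated with an ordinary function (a sketch in plain Python, not Wasm itself): the host only ever moves raw bytes, and all interpretation happens in guest logic compiled from the higher-level language.

```python
# Sketch of the one-for-one in/out byte-stream convention: the "host"
# sees only raw bytes; meaning is imposed entirely by the guest's logic.

def uppercase_guest(payload: bytes) -> bytes:
    # Guest-side rules (here: treat the bytes as UTF-8 text and uppercase
    # it) are opaque to the host that invokes this function.
    return payload.decode("utf-8").upper().encode("utf-8")

def run(guest, payload: bytes) -> bytes:
    # The host hands bytes in and gets bytes out -- nothing else.
    assert isinstance(payload, bytes)
    out = guest(payload)
    assert isinstance(out, bytes)
    return out

print(run(uppercase_guest, b"hello"))  # b'HELLO'
```

Because the boundary is just bytes, the host needs no opinion about files, encodings or schemas, which is exactly the neutrality the principle calls for.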
The component model makes it easier still to keep data as raw objects not modified by any internal stand-alone logic in the runtime. Like LEGO bricks, whose standard studs let any two pieces snap together, WebAssembly components bundle compiled WebAssembly code with the interface (functions, data structures, etc.) that allows them to interact with other components.
Components are infinitely composable and highly customizable. Together, components and extensions can deliver any required data functionality and handling in an easy-to-manage mechanism. This is why it’s important to make sure that WebAssembly is focused on processing raw data with rules defined in plugins or extensions and interfaces defined through component structures, rather than containing opinions in the runtime about how to interact with data.
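The snap-together composition described above might look like this in miniature (purely illustrative — real components are wired together by a Wasm toolchain, not Python): any two stages that speak the same bytes-in/bytes-out interface can be chained, and the chain itself speaks that same interface.

```python
from functools import reduce

# Illustrative composition of "components" sharing one interface:
# bytes in, bytes out. A pipeline of such stages is itself a stage.

def reverse_stage(data: bytes) -> bytes:
    return data[::-1]

def shout_stage(data: bytes) -> bytes:
    return data.upper()

def compose(*stages):
    # Fold the input through each stage in order; the result is a new
    # component with exactly the same bytes-in/bytes-out shape.
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

pipeline = compose(reverse_stage, shout_stage)
print(pipeline(b"wasm"))  # b'MSAW'
```

Since composed pipelines share the interface of their parts, data-handling opinions can live in the stages rather than in the runtime, which is the article’s point.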
As an aside, in this scenario many of the old constructs around data that add cruft and security flaws can fall away, leaving a faster data-handling approach that is secure by default.
Conclusion: Let WebAssembly Do the Job to Be Done
WebAssembly offers us a rare opportunity to rethink the way we build and operate applications to better reflect the modern imperative of distributed systems. The move toward containers, microservices and serverless was the first stage of a shift to “jobs to be done” computing. But those paradigms retained one foot in the old world and remained tethered to the filesystem, strict data handling and fat-kernel view of the universe.
Serverless came the closest to breaking free, but the gravity of the old way of building systems pulled it back into the dark shadow of lock-in with shims blocking portability and complexity blocking agility.
WebAssembly is young, fresh, and naive — thankfully. It’s also powerful in the clarity of its concept and design. Do a simple thing very well. Run compute in a sandbox. Everything else is added on by defining the job. Adhering to and enforcing these four factors will help ensure WebAssembly can break free of past constraints to redefine how we build technology in the near and distant future.