Software Development

Mozilla Extends WebAssembly Beyond the Browser with WASI

Apr 2nd, 2019 3:00am

Mozilla’s WebAssembly started as an experiment, almost a question: is it possible to run arbitrary code in a browser without breaking the browser sandbox? A side benefit would be faster web applications that would outperform current web technologies, allowing developers to bring existing desktop applications to the web.

It was an experiment that arrived at the right time. Microsoft was deprecating its ActiveX plug-in model, as was Google with its NaCl tools. Old web development models were rapidly disappearing, washed away by a tide of security breaches and a lack of cross-platform compatibility.

Since its initial launch, WebAssembly has been adopted by all the major browsers, with support from Mozilla, Google, Microsoft, and Apple, who’ve all contributed code.

Best thought of as a definition of a virtual machine, WebAssembly works with the browser’s JavaScript engine to run code at speeds that compare well with native code. Unlike JavaScript, which is delivered as source and compiled just-in-time, WebAssembly takes code written in familiar languages like C and C#, converts it to an assembly-like bytecode, and then compiles that to binary. WebAssembly executables are compiled before being delivered to browsers, making them a compact and efficient way of adding complex functionality to web applications.

There’s a lot to like around WebAssembly, and it’s already starting to influence thinking around popular web frameworks. Microsoft is experimenting with it as the foundation of Blazor, an in-browser extension of its ASP.NET Core application platform.

Other companies have been looking at taking it outside the browser altogether. With the increasing use of JavaScript-based tools like Node.js in serverless and other cloud architectures, it’s a move that makes sense – if only to bring familiar development environments to new ways of working.

Experiments with WebAssembly outside the browser are all very well, but if it’s going to be a tool that supports cross-platform as well as cross-browser development, it needs to have new standards built around it. Mozilla recently announced the start of such an effort, with the first release of WASI: the WebAssembly System Interface.


Where WebAssembly works as an implementation of a virtual processor, WASI goes a step further and offers developers an entire conceptual operating system. With a virtual processor there’s only one target architecture, and the JavaScript engine handles translation between it and ARM, Intel, Power, or whatever hardware you have. WASI does the same for the operating system, offering WebAssembly programs its own low-level implementations of common OS functions, which are then translated into native OS calls by the host JavaScript engine. Target WASI in your code, and you can produce applications that run identically on macOS, Windows, UNIX, and more, even on mobile operating systems.

A systems interface like WASI is a fundamental operating system concept. It’s how our applications use system calls, for example reading and writing files in a protected manner. The OS is protected from memory errors, and applications can guarantee that a read or a write can’t be corrupted by another application, or that the results of one call won’t be delivered to another application making the same call. You can perhaps consider WASI as the boundary between system level code running on the host platform and applications running in user mode.

It’s also important to note that you won’t be writing code that accesses the WASI interfaces directly. Instead, these will be what’s implemented in the WebAssembly equivalents of the standard libraries we use in most common languages. That way we’ll know that if we’re running a C application in WebAssembly through WASI, a printf call will write to the console, whether the host is Windows or UNIX. WASI defines the interfaces for WebAssembly compilers, and the underlying JavaScript engine handles the actual system calls to whatever OS it’s running on. You don’t need to build against a separate standard library for each target OS; you compile once.

Using WASI as part of an ahead-of-time compiler-and-runtime combination will reduce the overhead associated with current JavaScript engines. Code can be delivered to servers as needed, compiled by a WASI-aware WebAssembly compiler, and then run on a JavaScript runtime that implements the WASI system interfaces. When code is called, there’s no need to invoke JIT compilers: the WebAssembly binaries will be loaded and ready to run.

There are already three implementations of WASI: Mozilla’s own, a polyfill that lets anyone experiment with WASI in a browser, and, perhaps the most interesting from a developer standpoint, one from edge delivery network Fastly. Its Lucet WebAssembly compiler is now also a runtime, with an open source release on GitHub. Currently used in Terrarium, Fastly’s experimental edge service, it’s seen as a fast alternative to JavaScript running on Google’s V8 engine.

In a blog post, Pat Hickey, a senior software engineer at Fastly, describes Lucet as able to “instantiate WebAssembly modules in under 50 microseconds, with just a few kilobytes of memory overhead. By comparison, Chromium’s V8 engine takes about 5 milliseconds, and tens of megabytes of memory overhead, to instantiate JavaScript or WebAssembly programs.”

A service like Lucet is an important tool for edge compute. Developers can write code in their language of choice, compile to WebAssembly, and then run at near-native speeds without having to know anything about the architecture of the underlying edge service. It also allows service providers to use heterogeneous hardware, rolling out compute servers appropriate to each location and load. For example, ARM servers could be deployed where power is an issue, with Intel or AMD for more heavy-duty workloads. There’s also the opportunity for a service provider to experiment with alternative platforms, like RISC-V or Power, without disrupting workloads.

It’s easy to see a use for WASI at the edge of the network, handling offload compute for cloud services or speeding up message routing for IoT or for serverless services like Cloudflare’s Workers. There’s a lot to like in WASI, and open source implementations like Lucet should speed uptake considerably. By adding a new layer of abstraction to the virtual machine, it avoids the trap of supporting native system interfaces in portable code, handling them in the JavaScript runtime where they belong. It’ll be interesting to see what’s built on it over the next couple of years — and, perhaps more interestingly, who builds it.

Feature image by Peter H from Pixabay.
