Performance Measured: How Good Is Your WebAssembly?

WebAssembly adoption is exploding. Almost every week, it seems, at least one startup, SaaS vendor or established software platform provider either begins to offer Wasm tools or introduces Wasm options into its portfolio. But how do all of the different offerings compare performance-wise?
The good news is that, given Wasm’s runtime simplicity, runtime performance at least can be compared directly among the different WebAssembly offerings. Such a direct comparison is certainly much easier to make than benchmarking distributed applications that run on or with Kubernetes, containers and microservices.
This means that whether a Wasm application is running in a browser, on an edge device or on a server, the computing optimization that Wasm offers in each instance is end-to-end. Its runtime environment is a tunnel of sorts (obviously good for security) and is not affected by the environments in which it runs, since it executes directly at the machine level on the CPU.
Wasm had also been around for a while before the World Wide Web Consortium (W3C) named it a web standard in 2019, making it the fourth web standard alongside HTML, CSS and JavaScript. But while web browser applications have represented Wasm’s central and historical use case, the point, again, is that it is designed to run anywhere on a properly configured CPU.
In the case of a PaaS or SaaS service in which Wasm is used to optimize computing performance — whether it is running in a browser or not — the computing optimization that Wasm offers in runtime can be measured directly between the different options.
At least one application that can be used to benchmark the different runtimes, compilers and JITs of the different versions of Wasm is increasingly being adopted: libsodium. While anecdotal, this writer has contacted at least 10 firms that have used it or know of it.
Libsodium is a library for encryption, decryption, signatures, password hashing and other security-related uses. Its maintainers describe it in the documentation as a portable, cross-compilable, installable and packageable fork of NaCl, with an extended API to improve usability.
Since its introduction, the libsodium benchmark has been widely used to pick the best runtimes, cryptography engineer Frank Denis said. Libsodium includes 70 tests covering a large number of optimizations that code generators can implement, Denis noted. None of these tests perform any kind of I/O (disk or network), so they measure the real efficiency of compilers and runtimes in a platform-independent way. “Runtimes would rank the same on a local laptop and on a server in the cloud,” Denis said.
Indeed, libsodium is worthwhile for testing some Wasm applications, Fermyon Technologies CEO and co-founder Matt Butcher told The New Stack. “Any good benchmark tool has three desirable characteristics: It must be repeatable, fair (or unbiased toward a particular runtime), and reflective of production usage,” Butcher said. “Libsodium is an excellent candidate for benchmarking. Not only is cryptography itself a proper use case, but the algorithms used in cryptography will suss out the true compute characteristics of a runtime.”
Libsodium is also worthwhile for testing some Wasm environments because it includes “benchmarking tasks with a wide range of different requirement profiles, some probing for raw CPU or memory performance, while others check for more nuanced performance profiles,” Torsten Volk, an analyst for Enterprise Management Associates (EMA), told The New Stack. “The current results show the suite’s ability to reveal significant differences in performance between the various runtimes, both for compiled languages and for interpreted ones,” Volk said. “Comparing this to the performance of apps that run directly on the operating system, without WASM in the middle, provides us with a good idea of the potential for future optimization of these runtimes.”
True Specs
In a blog post, Denis described how different Wasm runtimes were benchmarked in tests he completed. They included:
- Iwasm, which is part of the WAMR (“WebAssembly micro runtime”) package — pre-compiled files downloaded from their repository.
- Wasm2c, included in the Zig source code for bootstrapping the compiler.
- Wasmer 3.0, installed using the command shown on their website. The three backends have been individually tested.
- Wasmtime 4.0, compiled from source.
- Node 18.7.0 installed via the Ubuntu package.
- Bun 0.3.0, installed via the command shown on their website.
- Wazero from git rev 796fca4689be6, compiled from source.
Which one came out on top in the runtime tests? Iwasm, which is part of WebAssembly Micro Runtime (WAMR), according to Denis’ results. The iwasm VM core is used to run Wasm applications. It supports an interpreter mode, an ahead-of-time compilation (AOT) mode and two just-in-time compilation (JIT) modes, LLVM JIT and Fast JIT, according to the project’s documentation.
This does not mean that iwasm wins accolades for simplicity of use. “Compared to other options, [iwasm] is intimidating,” Denis wrote. “It feels like a kitchen sink, including disparate components.” These include IDE integration, an application framework, library remote management and an SDK “that makes it appear as a complicated solution to simple problems. The documentation is also a little bit messy and overwhelming,” Denis wrote.
Runtime Isn’t Everything
Other benchmarks exist to gauge the differences in performance among Wasm alternatives. Alternatives that Denis cited include:
- sightglass, used for Wasmtime and Cranelift.
- PSPDFKit’s benchmark, which targets WebAssembly in web browsers.
- WasmEdge’s benchmark suite.
However, runtime performance is not an essential metric for all WebAssembly applications. Other test alternatives exist for different Wasm runtimes that focus on very specific tasks, such as calculating the Fibonacci sequence, sorting data arrays or summing up integers, Volk noted. More comprehensive benchmarks analyze entire use cases, such as video processing, PDF editing or even deep learning-based object recognition, Volk said.
“Wasm comes with the potential of delivering near-instant startup and scalability and can therefore be used for the cost-effective provisioning and scaling of network bandwidth and functional capabilities,” Volk said. “Evaluating this rapid startup capability based on specific use case requirements can show the direct impact of a Wasm runtime on the end-user experience.”
Some Wasm applications are used in networking to improve latency. Runtime performance is important, of course, but it is the latency performance that counts in this case, Sehyo Chang, chief technology officer at InfinyOn, said. This is because, Chang said, latency plays a crucial role in determining the overall user experience in any application. “A slow response time can greatly impact user engagement and lead to dissatisfaction, potentially resulting in lost sales opportunities,” Chang said.
During a recent KubeCon + CloudNativeCon conference, Chang gave a talk about using Wasm to replace Kafka for lower-latency data streaming. Streaming technology based on Java, like Kafka, can experience high latency due to garbage collection and the JVM, Chang said. However, using WebAssembly allows for stream processing without these penalties, resulting in a significant reduction in latency while also providing more flexibility and security, Chang said.