Is a Rust Rewrite Really Worth It?
The question, “Is a Rust rewrite really worth it?” is not just a tongue twister; it’s a dilemma that keeps engineers awake at night. Will the potential performance gains justify the effort of wrestling with a fast-growing language that is extremely well loved but not (yet) all that widely used?
Of course, there’s no simple answer to how well a Rust rewrite will ultimately pay off for your project and your team. But there are plenty of captivating stories about why teams considered Rust rewrites, the paths they took and how it all worked out for them.
P99 CONF, a free virtual conference for engineers obsessed with all things performance, is turning into the premier venue for sharing these stories. The conference is dedicated to exploring how top engineers are solving their toughest high-performance, low-latency challenges. Not surprisingly, Rust always features prominently on the agenda. Last year’s most tweeted-about sessions included Bryan Cantrill’s whirlwind keynote on “Rust and the Future of Low-Latency Systems” and Glauber Costa’s deep dive into “Rust Is Safe. But Is It Fast?” which looked at async Rust development pitfalls that could affect your ability to hit that sweet, low p99.
In terms of Rust rewrite dilemmas, Brian Martin of Twitter was the star of last year’s P99 CONF — and we’re expecting Mark Gritter’s tale of avoiding a Rust rewrite from Go to generate an equal amount of interest and discussion this year. Here’s a quick look at both.
‘Whoops! I Rewrote It in Rust’ — Brian Martin, Software Engineer at Twitter
The scalability and efficiency of Twitter’s services depend heavily on high-quality caching.
Twitter developed Pelikan as a caching system when Memcached and Redis didn’t fully meet its needs. The No. 1 priority for Pelikan was “best-in-class efficiency and predictability through latency-oriented design and lean implementation.” This was initially achieved with a C implementation. However, two subsequent projects introduced Rust into the framework with rather impressive development speed.
When they decided to add TLS support to Pelikan, Twitter software engineer Brian Martin suspected it could be done faster and more efficiently in Rust than in C. But to gain approval, the Rust implementation had to match (or beat) the performance of the C implementation.
Initial prototypes with the existing Rust-based Twemcache (the Twitter Memcached) didn’t look promising from a performance perspective: they yielded 25% to 50% higher P999 latency as well as 10% to 15% lower throughput. Even when Martin doubled down on optimizing the Rust prototype’s performance, he saw minimal impact. After yet more frustrating performance test results, he considered several different implementation approaches. Fortunately, just as he was weighing potential compromises, he came across a new storage design that made it easier to port the entire storage library over to Rust.
Martin went all in on Rust at that point with a simplified single-threaded application and all memory allocations managed in Rust. The result? The 100% Rust implementation not only delivered performance equal to — or exceeding — both the C implementation and memcached, it also improved the overall design and enabled coding with confidence, thanks to “awesome language features and tools,” which Martin then dived into.
Watch Brian’s full session here:
‘Taming Go’s Memory Usage and Avoiding a Rust Rewrite’ — Mark Gritter, Founding Engineer at Akita Software
Mark Gritter is currently on startup No. 4, having previously worked on streaming video at Kealia; VM-aware flash data storage at Tintri, where he was a co-founder; observability on the HashiCorp Vault team; and now API observability at Akita Software.
We’re eagerly anticipating hearing about his journey into — and out of — a Go trough of despair and learning why he decided to take the contrarian path of avoiding a Rust rewrite. You can catch his session and chat with him when P99 CONF 2022 goes live, October 19-20. Until then, here’s Gritter’s preview of what’s in store.
“Last summer, my team and I faced a question many young startups face: Should we rewrite our system in Rust?
“At the time of the decision, we were primarily writing in Go. I was working on an agent that passively watches network traffic, parses API calls and sends obfuscated summaries back to our service for analysis. As users were starting to run more traffic through us, memory usage by the agent grew to an unacceptably high level, impacting performance.
“This led me to spend 25 days in despair and immerse myself in the details of Go’s memory management, our technology stack and the profiling tools available — trying to get our memory footprint back under control. Go’s fully automatic memory management makes this no easy feat.
“Spoiler: I emerged victorious and our team still uses Go. In this talk, I’ll talk about key steps and lessons learned from my project. I intend this talk to be helpful for people curious about reducing their memory footprint in Go, or anybody wondering about the trade-offs of switching to or from Go.”
40+ Sessions at P99 CONF — Free and Virtual
P99 CONF is not just a Rust conference. WebAssembly, databases, event streaming, Go, Java, Linux kernel, OpenTelemetry, eBPF, Kubernetes, chaos engineering, SLIs/SLOs — they’re all on the agenda.
On October 19-20, thousands of engineers will come together online to:
- Discover expert strategies from engineers at Uber, Lyft, Square, Google, Red Hat, Oracle, Redis, Microsoft and more.
- Explore performance strategies from multiple perspectives — programming languages, CPUs, frontends, backends, frameworks and observability.
- Learn from luminaries like Bryan Cantrill, Avi Kivity, Charity Majors, Liz Rice, Gil Tene, Ron Pressler, Armin Ronacher, Alex Hidalgo, Malte Ubl and Steven Rostedt.
Don’t miss this opportunity to join your peers for two days of concentrated learning, skill sharpening and community collaboration. Register for free.