RISC-V: The Next Revolution in the Open Hardware Movement

RISC-V is an open standard instruction-set architecture for computer chips. RISC stands for “reduced instruction set computer.” Lately, this project has attracted a lot of attention. Why is there so much momentum around RISC-V? After all, it’s just another CPU instruction-set architecture (ISA), right? Well, in fact, it’s bigger than that. The true revolution lies in the open hardware movement, and RISC-V is its current spearhead.
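To make “instruction set” concrete: an ISA is the contract between compilers and silicon. As a minimal sketch, here’s a trivial C function along with, in a comment, the RV64 assembly a typical RISC-V toolchain emits for it at -O2; your compiler’s exact output may differ.

```c
#include <stdio.h>

/* A trivial function: the ISA defines which machine instructions
 * the compiler may translate this into. */
long add(long a, long b) {
    return a + b;
}

int main(void) {
    printf("%ld\n", add(2, 3));
    return 0;
}

/* Compiled for RISC-V (e.g. riscv64-linux-gnu-gcc -O2 -S add.c),
 * add() typically becomes just two RV64I instructions:
 *
 *   add:
 *       add  a0, a0, a1   # arguments arrive in a0/a1; result goes in a0
 *       ret               # pseudo-instruction for: jalr x0, 0(ra)
 *
 * That's "reduced instruction set" in practice: a small set of
 * simple, fixed-length instructions that anyone can implement. */
```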
For obvious reasons, the software world is a bit ahead of its hardware counterpart: Software innovation doesn’t come with the physical requirements, manufacturing costs or big capital investments that building real things does. That’s also why software might be the prophet of hardware’s future, and why it’s crucial to remember what happened with the Free and Open Source movement.
Opening Software — the First Revolution

There’s no need to prove it nowadays: Open source software is everywhere and generating tons of value. The cloud itself is a direct product of that: AWS couldn’t exist as we know it without the Xen Project, and the same goes for countless other actors, from huge corporations to small companies. Should we even talk about the Linux kernel and its consequences? Or the $34 billion Red Hat acquisition by IBM?
But let’s go back to a time before it was obvious. Free and Open Source software isn’t new. In fact, it was the default in the 1950s and 1960s. Academics and corporations worked in collaboration, usually through public-domain software. To be fair, academia already worked like this with scientific data in general (sharing and collaborating), so it was logical to do the same with software. At the time, software wasn’t yet a commodity. However, more restrictions came in the ’70s, and things got worse in the early ’80s: software had become a commodity, and with that, a real and lucrative business.
That’s when free software really appeared, organized initially through small online sharing communities and then officially through the free software movement in 1983, with the GNU Project. The revolution had started. To succeed, however, you needed more than compilers and user-space tools: you needed a kernel. Linux brought exactly that in 1991.
That was just the start. Open source could have been derailed in the early 2000s, when Microsoft dominated the desktop with Windows and Internet Explorer. Instead, the huge push of the internet drove innovation and created new business cases, standards, better collaboration tools and shared libraries. Even huge corporations relied on open source and contributed back, which helped the communities grow. All of this and more made open source the default choice for many deployments.
Opening Hardware — the Second Revolution
Free/open source hardware is more recent than its software counterpart, having begun in the late ’90s. Obviously, free hardware means free/open hardware designs, with the same collaborative spirit.
As with software, it started small, with companies working on electronics, small boards (e.g., Arduino and Adafruit) or 3D printing. It feels a lot like when GNU existed without a kernel, doesn’t it?
That’s exactly where RISC-V enters: providing a very capable and open CPU architecture to power the whole stack with open components all around (software AND hardware). And it shouldn’t be a surprise to learn that RISC-V initially came from academia, more precisely from the University of California, Berkeley: another similarity with open source software’s roots.
Timing is Key
Let’s be realistic: A new open ISA is not the only thing needed to make the open hardware revolution happen. It’s an enabler and a solution, but a solution to what? Success or failure is dictated by the real requirements in the field, the same way open source software answered various problems, like IT automation, hyperscalability, public or private cloud infrastructure and much more.
Remember the Arm integrator failure in data centers in 2011? Being right too early is being wrong. So does the RISC-V open source ISA make sense today? Could it be the next revolution? And why isn’t RISC-V just a new ISA?
In fact, a big shift in IT architecture is already underway; the era when x86 was the unchallenged king is over. One manifestation of that is the data processing unit (DPU). Because x86 CPUs struggle with today’s volume of increasingly specialized data processing, we need more parallel-capable processors. A good example is your modern network card, which offloads more and more features from your main processor. The same is already happening with artificial intelligence training and inference on GPUs, and with storage via dedicated accelerators: another job for a DPU.
Not convinced? Well, big players are, like Nvidia, for example. Its Mellanox acquisition is clearly DPU-related, and the data center is Nvidia’s fastest-growing segment. This also explains why acquiring Arm is a critical choice for Nvidia’s future: most current DPUs are Arm SoC-based, like the BlueField-2 DPU.
Note that we aren’t talking about entirely removing x86 from the data center. It’s mainly about offloading the operations where x86 can’t really shine, such as parallel workloads. But in the long run, if x86 can’t keep up at all, that might be on the menu too. And the data center is only part of the bigger picture. RISC-V’s scope goes far beyond it: edge computing, the embedded market and much more. These are other places where RISC-V can mature and ramp up technologically while becoming more and more central in your data center.
In the end, RISC-V’s timing is well aligned with the big shift visible in the data center. You might argue that Arm is better suited to achieve all of this, and in fact it already partly has. But Arm’s ISA isn’t open, unlike RISC-V’s. You could indeed work with Arm to build your own solution, like AWS did with its Graviton2. So what’s the big difference?
Unleashing Innovation
You could always build your own proprietary software and be better than your competitors, but the world has changed. Now almost everyone is standing on the shoulders of giants. When you need an operating system kernel for a new project, you can use Linux directly. There’s no need to recreate a kernel from scratch, and you can modify it for your own purposes (or write your own drivers). You can also count on a broadly tested product, because you are one of millions of users doing the same.
That’s exactly what an open source CPU architecture could provide. No need to design things from scratch; you can innovate on top of existing work and focus on what really matters to you: the value you are adding.
At the end of the day, it means lowering the barriers to innovation (no license fees, but shared designs instead). Obviously, not everyone is able to design an entire CPU from scratch, and that’s the point: You can bring only what you need, or simply enjoy new capabilities provided by the community, exactly the same way you do with open source software, from the kernel to languages. (See the Rust community as a modern example.)
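To make that modularity concrete, here’s a minimal sketch in C. RISC-V compilers such as GCC and Clang define preprocessor macros (following the RISC-V C API conventions) that advertise which base ISA and extensions the code is being built for; the exact macro set depends on your toolchain version, so treat this as an illustration rather than a definitive reference.

```c
#include <stdio.h>

int main(void) {
#if defined(__riscv)
    /* __riscv_xlen is 32 or 64, depending on the base ISA. */
    printf("RISC-V target, XLEN = %d\n", (int)__riscv_xlen);
#if defined(__riscv_mul)
    puts("M extension: integer multiply/divide");
#endif
#if defined(__riscv_atomic)
    puts("A extension: atomic operations");
#endif
#if defined(__riscv_compressed)
    puts("C extension: compressed 16-bit instructions");
#endif
#if defined(__riscv_vector)
    puts("V extension: vector processing");
#endif
#else
    puts("Not building for a RISC-V target.");
#endif
    return 0;
}
```

A chip vendor picks exactly the extensions its market needs (a tiny RV32 microcontroller versus a full RV64GC application core), and software can adapt at build time: that’s the “bring only what you need” idea in practice.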
Hardware and Software Reunited
Another trend has emerged in the past few years: more possibilities for people working at the interface between hardware and software. This is critical for multiple reasons, like security and performance. (Ask Bryan Cantrill about this; he has been advocating for it for years.) You need some control over the hardware to work efficiently at this interface. It’s not impossible in the x86 world, mainly because you can build everything around it, like dedicated cards that offload the main CPU and run your own optimized software (as mentioned before: DPUs, Nitro and such).
But to me the next step is even bigger: having only open source hardware and software in your whole infrastructure, from the main CPU to every co-processor, all the way up the software stack. That means you can adapt software to your hardware and vice versa, delivering solutions across the whole stack faster, with better power efficiency and better security too. The potential is limited only by your imagination and your capacity to execute, not by licenses or black-box designs.
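As a small taste of working at that hardware/software interface, here’s a hedged sketch in C that reads RISC-V’s cycle-counter CSR through inline assembly. It only builds for RISC-V targets, and whether user-mode code may read this counter depends on the platform (privileged software grants access through the mcounteren/scounteren CSRs), so on some systems it will trap: an illustration, not portable production code.

```c
#include <stdint.h>
#include <stdio.h>

/* Read the 'cycle' CSR via the rdcycle pseudo-instruction.
 * Access from lower privilege levels must be enabled by
 * machine/supervisor mode (mcounteren/scounteren), so this
 * can trap on platforms that don't allow it. */
static inline uint64_t read_cycle(void) {
    uint64_t c;
    __asm__ volatile("rdcycle %0" : "=r"(c));
    return c;
}

int main(void) {
    uint64_t start = read_cycle();
    /* ... work you want to measure ... */
    uint64_t end = read_cycle();
    printf("elapsed cycles: %llu\n", (unsigned long long)(end - start));
    return 0;
}
```

This is the kind of low-level control the open stack makes routine: the CSR layout is in the public specification, so nothing here depends on a vendor’s black box.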
Big Challenges = Big Opportunities
Obviously, there are challenges on that revolutionary path. As with free software, it won’t start with everyone instantly doing everything in the open. You can use your own RISC-V design to do whatever you like, even keep proprietary intellectual property on top of it. Truly open hardware will become the standard, as it did in software, but the road isn’t always straight.
Also, fragmentation could be a concern, and it must be taken seriously. Open doesn’t mean anarchy. The right way is to rely on standards and consensus, building things together through RISC-V International, the foundation in charge of the whole RISC-V project. Regardless of what we think about x86, its success came from having platforms where the same code could work out of the box. Arm is trying to build certifications and standards, but it’s really hard to avoid fragmentation. For example, as a virtualization platform, it’s relatively easy to get something that works and runs on almost all x86 machines. But on Arm? It’s not just “porting” to Arm; it all depends on your target. Not all machines are equal, and even getting Xen running on a Raspberry Pi 4 was hard.
Being a Part of the Next IT Revolution
Here at Vates, we decided to participate at our own level. We develop a virtualization platform called XCP-ng, along with its management/backup dashboard, Xen Orchestra.
So we decided to start porting Xen to RISC-V, but also to be part of RISC-V itself. We are proud to announce that we are joining RISC-V International as a Strategic Member, which allows us to participate in the design of virtualization instructions and in shaping RISC-V’s future.
Vates is also now on the official Xen list of RISC-V contributors, giving us the ability to work on both sides: RISC-V and Xen, hardware and software. This unique position and skill set lays an important foundation for the future of data-center infrastructure and beyond.
If you’re wondering who is already part of this adventure, we are not the first: Current members include Samsung, Qualcomm, Huawei, Western Digital, Alibaba Cloud, IBM, Nokia and many more. What are you waiting for?