Google’s Quantum Computer Can Exponentially Suppress Errors

Whether it’s tackling the climate crisis, building an unhackable Internet or developing novel machine learning algorithms, the world is facing increasingly complex problems, all of which require greater and greater computational power.
Powerful quantum computers could potentially fulfill that need for more computing capacity, but current state-of-the-art quantum computers are still relatively error-prone, due to the inherent sensitivity of quantum bits, or qubits, to outside environmental perturbations like temperature fluctuations or errant electromagnetic fields.
Nevertheless, that hasn’t stopped tech companies like IBM, Amazon, Honeywell, and ColdQuanta from building quantum computers or offering quantum computing services.
Tech giant Google is yet another company that has joined the ongoing race to commercialize quantum computing on a larger scale, having recently built a 54-qubit machine called Sycamore.
It was the Sycamore quantum processor that allowed Google to claim that it had achieved a breakthrough known as quantum supremacy, meaning that it was able to solve in mere minutes an extremely difficult computational problem, one that would have taken the world’s fastest “classical” supercomputer thousands of years to figure out.
Now, Google researchers have demonstrated that it’s also possible to exponentially suppress errors on their quantum machine. It’s a significant finding that will help researchers develop more fault-tolerant quantum computers capable of automatically detecting and correcting errors.
Current state-of-the-art quantum computers typically have an error rate of around 10⁻³ (i.e., roughly one error in every thousand operations). For quantum computers to reach their full potential, however, that error rate would need to drop to around 10⁻¹⁵, the level widely considered necessary for most practical applications.
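For a rough sense of why that gap matters, consider the chance of a long computation finishing without a single error. The sketch below is purely illustrative; the operation count is an assumption, not a figure from Google’s work:

```python
# Rough, illustrative numbers only (the operation count is an assumption,
# not a figure from the paper): the probability that a long computation
# finishes without a single error, at today's error rates versus the
# rates widely cited as needed for practical applications.
n_ops = 1_000_000_000  # assumed: one billion quantum operations

for error_rate in (1e-3, 1e-15):
    p_error_free = (1 - error_rate) ** n_ops
    print(f"error rate {error_rate:.0e}: chance of an error-free run ~ {p_error_free:.6f}")

# At 1e-3 the chance is effectively zero; at 1e-15 it is about 0.999999.
```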
Exponential Error Correction
In a paper published in Nature, the Google team outlines how it developed new techniques for performing quantum error correction (QEC). Much like how classical computers might add a parity or “check” bit to a string of binary code as a simple form of error detection, quantum computers need some way to protect fragile quantum information from errors, or from the quantum noise that arises when qubits are inadvertently disturbed by the environment outside the machine.
That’s where quantum error correction comes in, but it’s more complicated on a quantum machine. Data isn’t encoded in definite binary 1s and 0s as it is on a classical computer; instead, qubits can exist in superpositions of both states, and measuring them directly to check for errors would destroy the very information being protected, making error correction in a quantum computer much trickier.
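For comparison, here is what the classical “check bit” idea looks like in practice. This is a minimal illustrative sketch, not anything from Google’s codebase:

```python
# A minimal sketch of the classical "check bit" idea the article compares
# against: append a parity bit so that a single flipped bit can be detected.
def add_parity_bit(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """Return True if no single-bit error is detected."""
    return sum(bits_with_parity) % 2 == 0

word = add_parity_bit([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
assert parity_ok(word)

word[2] ^= 1                          # simulate a single bit flip in transit
assert not parity_ok(word)            # the flipped bit is detected (though not located)
```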
To tackle this problem, the Google researchers used what are known as stabilizer codes. Like the error-checking parity bit in conventional computing systems, stabilizer codes help to compensate for the high error rates of their quantum cousins.
In this case, the Google researchers used two kinds of stabilizer codes: one called a “repetition code” and another called a “surface code.” These codes distribute quantum information across many qubits and designate additional qubits to track parity and correct errors if they occur.
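To get a feel for how a repetition code spreads information across many qubits and exposes errors through parity checks, here is a deliberately classical, simplified simulation; the code size and error probability are assumptions for illustration only:

```python
import random

# A classical toy simulation of the bit-flip repetition code idea
# (illustrative only; real stabilizer codes act on quantum states, and the
# error probability and code size below are assumptions, not the paper's).
def encode(logical_bit, n_data=5):
    """Spread one logical bit across n_data data bits."""
    return [logical_bit] * n_data

def apply_noise(data, p_flip=0.05):
    """Independently flip each data bit with probability p_flip."""
    return [bit ^ (random.random() < p_flip) for bit in data]

def measure_parities(data):
    """Parity of each neighbouring pair -- the role the measure qubits play."""
    return [data[i] ^ data[i + 1] for i in range(len(data) - 1)]

def decode(data):
    """Majority vote recovers the most likely logical bit."""
    return int(sum(data) > len(data) / 2)

data = apply_noise(encode(1))
print("parities:", measure_parities(data))  # nonzero entries flag error locations
print("decoded :", decode(data))            # usually still 1
```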
Because quantum bits are susceptible to decoherence, error correction requires many additional physical qubits. Grouped together and operated as a single entity, these qubits form what is known as a “logical qubit,” which provides stability and fault tolerance to the overall system.
“Many quantum error-correction architectures are built on stabilizer codes, where logical qubits are encoded in the joint state of multiple physical qubits, which we refer to as ‘data qubits,’” explained the team.
“Additional physical qubits known as ‘measure qubits’ are interlaced with the data qubits and are used to periodically measure the parity of chosen data qubit combinations. These projective stabilizer measurements turn undesired perturbations of the data qubit states into discrete errors, which we track by looking for changes in parity. The stream of parity values can then be decoded to determine the most likely physical errors that occurred.”
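In other words, each measure qubit reports a parity value every round, and an error reveals itself as a change between consecutive rounds. A toy sketch of that bookkeeping (with a made-up parity stream) might look like this:

```python
# A small sketch of the detection-event idea described in the quote: a
# measure qubit reports a parity value every round, and an error shows up
# as a *change* in that value between consecutive rounds. (The parity
# stream below is made up for illustration.)
def detection_events(parity_stream):
    """XOR consecutive rounds; a 1 marks a detection event."""
    return [parity_stream[i] ^ parity_stream[i + 1]
            for i in range(len(parity_stream) - 1)]

# One measure qubit over eight rounds; the parity flips after round 3,
# suggesting an error on a neighbouring data qubit around that time.
stream = [0, 0, 0, 1, 1, 1, 1, 1]
print(detection_events(stream))  # -> [0, 0, 1, 0, 0, 0, 0]
```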

[Figure: A) Error-detection event graph. B) Ordering of the “measure qubits” in the repetition code. C) Measured correlations between detection events. D) Top: an observed high-energy event during a run of the repetition code; bottom: zoom-in on the event, showing a rapid rise and exponential decay of device-wide errors. This data is removed when computing logical error probabilities.]
To conduct the experiment, the qubits were arranged in a one-dimensional chain, so that each qubit had at most two neighbors. To implement the repetition code, the qubits along the chain alternated between serving as data qubits and as measure qubits that checked for errors in their neighbors.
Interestingly, the team found that increasing the size of the logical qubit, from 5 up to 21 physical qubits, exponentially reduced the logical error rate: with a 21-qubit cluster, the error rate was about 100 times lower than with a logical qubit made up of only five physical qubits.
“Errors on the encoded logical qubit state can be exponentially suppressed as the number of physical qubits grows, provided that the physical error rates are below a certain threshold and stable over the course of a computation,” said the team.
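One way to picture that scaling: if each step up in code distance suppresses the logical error by a roughly constant factor (often written as Λ), the improvement compounds exponentially with the size of the code. The numbers below are assumptions chosen to illustrate the shape of the effect, not values fitted in the paper:

```python
# An illustrative sketch of the exponential scaling the team describes.
# Assumed form: the logical error per round falls roughly as
# 1 / LAMBDA ** ((d + 1) / 2), where d is the code distance; a distance-d
# repetition code uses d data qubits and d - 1 measure qubits, i.e. 2d - 1
# physical qubits in total. LAMBDA = 3.2 is an assumed value for
# illustration, not the paper's fitted number.
LAMBDA = 3.2

def relative_logical_error(d):
    return 1 / LAMBDA ** ((d + 1) / 2)

for n_physical, d in [(5, 3), (13, 7), (21, 11)]:
    print(f"{n_physical:2d} physical qubits (distance {d}): "
          f"relative logical error ~ {relative_logical_error(d):.2e}")

# Ratio between the smallest and largest code in the experiment:
print(round(relative_logical_error(3) / relative_logical_error(11)))  # roughly 100
```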
Because different types of quantum errors can occur (space-like, time-like and spacetime-like errors), the team also implemented a second kind of stabilizer code, known as a “surface code,” to check for them. This code arranges data qubits and measure qubits in a two-dimensional checkerboard configuration, which lets it monitor for more kinds of errors than the one-dimensional repetition code can.
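Purely to visualize that layout, here is a toy sketch of such a checkerboard arrangement; the grid size and labels are arbitrary:

```python
# A toy sketch of the checkerboard layout the surface code uses: data
# qubits (D) interleaved with measure qubits (M) on a 2D grid. The grid
# size and labels are arbitrary, purely to visualise the arrangement.
SIZE = 5  # assumed grid size for illustration

for row in range(SIZE):
    print(" ".join("D" if (row + col) % 2 == 0 else "M" for col in range(SIZE)))

# D M D M D
# M D M D M
# D M D M D
# ...each measure qubit (M) repeatedly checks the parity of its
# neighbouring data qubits (D), along both rows and columns.
```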
Even more importantly, the team found that their method kept the rate of logical error suppression stable over 50 rounds of error correction. This is the team’s key finding, as it could point the way toward more fault-tolerant, large-scale quantum computers in the future.
Nevertheless, the team notes that there are limitations to their approach: most notably, for a future quantum computer to perform practical computations, qubit error rates would need to be reduced by a factor of at least 10, while logical qubits would have to grow to about 1,000 qubits each, an important threshold for quantum computing in the future.
Currently, however, they are not able to test that proposal, as even Sycamore’s state-of-the-art quantum processor has only 54 qubits, though the company says it is aiming to build a commercial-grade 1,000,000-qubit processor by 2029.