New Protocol Allows ‘Noisy’ Quantum Computers to Auto-Assess Their Accuracy

In the last few years, the promise of quantum computing has been gradually infiltrating our collective consciousness. Experts envision that these faster, far more powerful computers will accomplish tasks beyond the reach of conventional, or so-called “classical,” computers, such as modeling complex systems or building unhackable networks.
The mighty potential of quantum computers lies in the fact that their computations leverage the ability of subatomic particles to exist in more than one state simultaneously, a property known as superposition, as opposed to the more limited, binary computations performed by classical computers. This means that quantum computers can, in principle, perform certain complex calculations far faster and more efficiently than conventional machines. But while the notion of quantum computing has been around for more than three decades, there are still hurdles to overcome before it can be more widely implemented, such as developing methods to ensure that the answers quantum computers give are actually correct.
Quantum Credibility
A team of researchers from the University of Warwick, England, recently developed such a technique, which allows a quantum computer to test itself, without having to rely on less efficient classical supercomputers to cross-check results, or having to significantly increase the number of quantum bits, or “qubits,” required. The new protocol therefore demands much less computational “overhead,” making it far more practical for larger-scale applications.
“If quantum computers are to be used in solving hard computational problems, such as designing new chemicals, medicines, and materials, we must have a way of deciding if their outputs are credible or not,” explained Animesh Datta, an assistant professor in theoretical physics at the University of Warwick and one of the co-authors of the paper, recently published in the New Journal of Physics.
“Without this assurance, we can never use quantum computers for anything crucial. Our work shows that this assurance can be obtained. In other words, it gives a way of deciding whether quantum computers are credible enough for crucial applications.”
One of the main aspects of the team’s novel accreditation protocol is that it has the quantum computer run several simple calculations, whose answers are already known, alongside the main computation. By checking how accurately the machine handles these easier calculations, users can gauge how close its answer to the overall computation is likely to be to the correct one. The approach is inspired by a technique in which software developers insert small mathematical functions, with known answers, into large programs: if the program evaluates most of these functions correctly, users can be reasonably confident that the rest of it is functioning properly as well.
In addition, the protocol quantifies what is known as “noise”: the variations in output produced by factors such as temperature fluctuations, which can affect a quantum computer’s sensitive hardware. The verification protocol generates two percentages, one estimating how close the quantum computer is to the correct result, and one indicating how confident the user can be in that estimate.
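To make the idea concrete, here is a minimal, purely illustrative Python sketch, not the paper’s actual construction: a target circuit is hidden at a random position among “trap” circuits with classically known outputs, and the trap pass rate is turned into the two numbers the protocol reports, an error bound and a confidence level. The `run_circuit` interface and the Hoeffding-style confidence bound are assumptions made for illustration.

```python
import math
import random

def accredit(run_circuit, target, traps, confidence=0.95):
    """Toy accreditation run (an illustration, not the paper's exact protocol).

    run_circuit(circuit) is an assumed interface to the quantum device;
    each trap is a (circuit, known_output) pair whose ideal result is
    classically predictable, e.g. a Clifford circuit.
    """
    # Hide the target at a random position among the traps, so that noise
    # cannot selectively spare the easy, checkable runs.
    jobs = list(traps)
    jobs.insert(random.randrange(len(jobs) + 1), (target, None))

    failures = 0
    target_output = None
    for circuit, known in jobs:
        output = run_circuit(circuit)
        if known is None:
            target_output = output   # the answer we actually want
        elif output != known:
            failures += 1            # a trap caught a hardware error

    n = len(traps)
    # Hoeffding-style one-sided bound (a stand-in for the paper's analysis):
    # with probability `confidence`, the true error rate lies below the bound.
    margin = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2 * n))
    error_bound = min(1.0, failures / n + margin)
    return target_output, error_bound, confidence
```

A user would accept the target output only if the error bound is acceptably small; the team’s actual protocol derives its guarantee from the structure of the trap circuits themselves rather than from a generic concentration inequality.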
“The challenge lies in choosing these simple calculations,” said Datta. “They must be chosen so as to test all parts of the quantum computer and catch most of its defects (often called ‘noise’). If not, we may get correct outputs for the simple calculations, but be incorrect for the hard calculations. The simple calculations we pick are called Clifford circuits. They have been known for about 20 years.”
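Clifford circuits make good traps because, by the Gottesman–Knill theorem, their ideal outputs can be computed efficiently on a classical machine. The toy sketch below (plain NumPy, small enough to use explicit state vectors rather than the stabilizer formalism an efficient simulator would use) builds a two-qubit Clifford circuit, a Hadamard followed by a CNOT, and prints the exact output distribution a noiseless device should reproduce.

```python
import numpy as np

# Standard Clifford gates: Hadamard and CNOT.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
I = np.eye(2)

# Start in |00>, apply H to qubit 0, then CNOT(0 -> 1): a Bell state.
state = np.zeros(4); state[0] = 1.0
state = CNOT @ np.kron(H, I) @ state

# The ideal measurement statistics are known exactly: 50% |00>, 50% |11>.
# A trap passes only if the hardware reproduces them, within sampling error.
for bits, amp in zip(["00", "01", "10", "11"], state):
    print(f"P({bits}) = {abs(amp)**2:.2f}")
```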
Verifying the Quantum Cloud
Beyond their accreditation protocol, the team’s paper also proposes what they call a “mesothetic verification protocol,” which will likely be useful in the future when quantum computers are more widespread.
“The mesothetic verification protocol is an extension of our accreditation techniques to a potential client-server quantum network application,” explained paper co-author Theodoros Kapourniotis, a research fellow in the University of Warwick’s Quantum Information Science Group. “There, the client periodically receives the qubits of the server and applies some special operations that effectively test the honesty of the server in providing answers. We don’t think that this exchange of quantum information is practical with current technology, but it will definitely be relevant when we eventually enter the era of quantum networks and the ‘quantum cloud.’”
The team is now working to further develop their accreditation protocol so that it incorporates quantum error correction, a kind of fault tolerance that would protect quantum information against noise, defective quantum gates, and faulty measurements. This is especially important as larger industry players like IBM and Google compete to establish what is known as quantum supremacy, in which a quantum device solves a problem that no classical computer can handle in any practical amount of time (i.e. it would take much too long). In such a contest, being able to certify a quantum device’s accuracy would be critical.
“With the current rate of investment in quantum technologies, and the development of schemes like ours to accredit them, we are optimistic that quantum computers will soon be in a position to solve very useful problems,” said Kapourniotis. “The original aspiration of quantum computing, which still remains, is the ability to simulate systems that rely on quantum mechanics themselves. This understanding of complex physical systems at the basic molecular level is the hurdle we have to overcome, to solve pressing problems for the future of the planet, such as solar energy harvesting.”
Read more in the team’s paper.
Image: Lars Plougmann (CC BY-SA 2.0)