TORONTO — To the layperson, it has always seemed obvious that quantum computing is better. But IBM research scientists have now proven mathematically that quantum computing is faster than a classical computer for certain problems.

The critical word, however, is “certain.” In a telephone interview with EE Times, Bob Sutor, vice president of IBM Q Ecosystem and Strategy, said this mathematical proof demonstrates concretely the difference between certain types of computations that can be done with a quantum computer versus a classical computer.

“And it says that quantum computers will be faster — that the behavior or what you're trying to do on a quantum computer is different enough because of the innate properties of quantum mechanics and quantum computing that it will have this type of significant advantage.”

Sutor said this proof is an extremely important milestone, as it will be a foundation for building the rest of the formal structure of quantum computing — how quantum computers are coded and built, and what choices are made around algorithms and how they are applied. It will also provide guidance on when quantum computing might be the better option and when classical computing will still be sufficient, he said.

The mathematical proof is outlined in the Science article “Quantum advantage with shallow circuits” by Drs. Sergey Bravyi of IBM Research, David Gosset of the University of Waterloo's Institute for Quantum Computing, and Robert König of the Institute for Advanced Study and Zentrum Mathematik, Technische Universität München.

IBM scientists have proven mathematically that there are certain problems requiring only a fixed circuit depth when done on a quantum computer, no matter how the number of inputs increases, whereas on a classical computer the same problems need the circuit depth to grow larger as inputs increase.

To understand the significance of the proof, it helps to know that the basic computational unit in quantum computing is the quantum bit, or qubit, which, unlike a classical bit that is always 0 or 1, can take on many additional values. The potential computational power doubles every time a qubit is added through entanglement, and the qubits, together with the operations applied to them, are called a circuit.
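That doubling can be made concrete with a short sketch. This is illustrative code only, not anything from IBM's research: describing the state of n qubits on a classical machine takes 2^n complex amplitudes, so the state space doubles with each qubit added.

```python
# Illustrative sketch: the joint state of n qubits is described by
# 2**n complex amplitudes, so the state space doubles per added qubit.

def state_vector_size(num_qubits: int) -> int:
    """Number of complex amplitudes needed to describe num_qubits qubits."""
    return 2 ** num_qubits

for n in (1, 2, 10, 50):
    print(f"{n} qubits -> {state_vector_size(n)} amplitudes")
```

At 50 qubits — the size of IBM's largest prototype mentioned below — the state vector already has over a quadrillion amplitudes, which is why simulating such circuits classically becomes impractical.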

Qubits aren't perfect: they have small error rates, and they only exist for a certain length of time before they become chaotic. This window is known as the coherence time, and it means there are only so many operations that can be done before the time limit is reached. The number of operations applied to a qubit is its depth, and the overall depth of a quantum circuit is determined by the longest of these per-qubit sequences. The math proves that certain problems need only a fixed circuit depth when done on a quantum computer, no matter how the number of inputs increases, whereas a classical computer requires the circuit depth to grow larger as inputs increase for the same problem.
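The depth idea can be sketched in a few lines. This is a simplified illustration under an assumed representation (a circuit as a list of gates, each gate a tuple of the qubit indices it touches), not IBM's tooling: the busiest qubit's gate count gives a lower bound on the circuit's depth, and that is the quantity that must fit within the coherence-time budget.

```python
# Illustrative sketch, assumed representation: a circuit is a list of
# gates, each gate given as a tuple of the qubit indices it acts on.
from collections import Counter

def per_qubit_gate_counts(gates):
    """Count how many gates are applied to each qubit."""
    counts = Counter()
    for qubits in gates:
        for q in qubits:
            counts[q] += 1
    return counts

def depth_lower_bound(gates):
    """The busiest qubit's gate count bounds the circuit depth from below."""
    counts = per_qubit_gate_counts(gates)
    return max(counts.values(), default=0)

# Example: a gate on qubit 0, a two-qubit gate on 0 and 1, a gate on qubit 1.
example = [(0,), (0, 1), (1,)]
print(depth_lower_bound(example))  # 2
```

A "shallow" (constant-depth) circuit in the paper's sense is one where this quantity stays fixed even as the number of input qubits grows.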

This limited depth means IBM scientists are most interested in what can be done with short-depth circuits, because they are practical for implementing quantum algorithms and for demonstrating that quantum computing has an advantage over a classical approach. The mathematical proof shows that fault-tolerant quantum computers will do some things better than classical computers can — not all. “It's a little subtle I think sometimes,” said Sutor. “But that's a very important distinction.”

The proof is a first foundational step, and it's important to manage expectations — we're in the very early days of quantum computing, he said. “In practical terms, we have 50 qubits. That's our largest prototype right now.”

And one major myth Sutor wants to dispel is the “quantum crypto apocalypse” that would break encryption on the web, which would in fact require on the order of 100 million qubits. “There's several miracles between here and there,” Sutor said.

—Gary Hilson is a general contributing editor with a focus on memory and flash technologies for EE Times