IBM Research hopes to achieve "quantum advantage" over traditional computing by 2023.
IBM used its annual Quantum Summit this week to unveil the 127-qubit Eagle processor, a design it hopes will lay the groundwork for practical quantum computing.
As outlined in its hardware roadmap, IBM aims to reach “quantum advantage,” the point at which quantum computers are cheaper, faster, or more accurate than classical computers at a relevant task, by 2023.
Quantum computers employ qubits to represent information in quantum form. Unlike the bits of a traditional computer, which are either “1” or “0,” a qubit can represent both values simultaneously. In theory, quantum computers can perform certain calculations much faster and more efficiently than digital computers. IBM and other companies, such as Google and Microsoft, are working to scale toward that quantum tipping point.
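As a rough illustration (plain Python, not IBM’s tooling), a single qubit can be modeled as a pair of complex amplitudes; applying a Hadamard gate to the “0” state produces an equal superposition, and measurement probabilities come from the squared magnitudes of the amplitudes:

```python
import math

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measuring yields 0 or 1 with those probabilities.
ket_zero = (1.0 + 0j, 0.0 + 0j)

def hadamard(state):
    """Apply the Hadamard gate, which maps |0> to (|0> + |1>) / sqrt(2)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

superposed = hadamard(ket_zero)
probs = [abs(amp) ** 2 for amp in superposed]
print(probs)  # both outcomes equally likely, ~0.5 each
```

This is only the single-qubit picture; the power of the hardware described here comes from entangling many such qubits.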
In an interview, Bob Sutor, IBM’s chief quantum exponent, said breaking the 100-qubit barrier underscores the necessary scaling of quantum architectures. “We borrowed techniques from traditional CMOS semiconductor manufacturing so we could access the many qubits in the middle of the chip as well as the ones on the outside edges,” added Sutor. The 127-qubit design was achieved using multiple hexagons that enabled several extra qubits on the outside. The design also paves the way for future fault-tolerant systems.
IBM expects to unveil its 433-qubit Osprey processor next year. Condor, a 1,121-qubit processor due in 2023, would enable tasks like error correction.
“Everyone always talks about the number of qubits, which is definitely important for running more complex calculations,” noted Darío Gil, director of IBM Research. “But qubit count scale is just one facet of the way we measure a quantum processor’s performance.”
Gil said Eagle would allow IBM researchers to focus on three quantum performance metrics: scale, quality and speed. “For improved quality, Eagle uses the latest advances in qubit fabrication, control electronics and software that will help us maximize its quantum volume.”
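Quantum volume, the quality metric Gil refers to, is defined by IBM as 2^n, where n is the size of the largest “square” (n-qubit, depth-n) random model circuit the machine can run reliably. A toy calculation:

```python
def quantum_volume(largest_square_size: int) -> int:
    """Quantum volume is 2^n, where n is the width (= depth) of the
    largest square random model circuit the machine passes reliably."""
    return 2 ** largest_square_size

# A machine that reliably runs 6-qubit, depth-6 model circuits:
print(quantum_volume(6))  # 64
```

Note that the benchmark rewards coherence and gate fidelity, not just qubit count: adding noisy qubits that cannot survive a deeper circuit does not raise quantum volume.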
For increased speed, Eagle will “integrate with classical computing workflows,” Gil added. “Using Qiskit Runtime and other improvements to maximize the number of quantum circuits it can run per second, it’s important to continue to develop the software systems to match the hardware advances.”
IBM plans to develop new circuit libraries tailored to applications such as finance, machine learning and chemistry.
Eagle is also promoted as foundational for future quantum computing advances. For example, each extra qubit doubles the amount of space complexity, that is, the amount of memory space available to run quantum algorithms. “The mathematics of quantum mechanics is very clearly multi-dimensional,” Gil said. “This abstraction in mathematics is how nature operates. It sounds like science fiction, but it’s not, it’s just science.”
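The doubling Gil describes is easy to make concrete: simulating n qubits exactly on a classical machine requires storing 2^n complex amplitudes, so each added qubit doubles the memory needed. A quick back-of-the-envelope sketch (assuming double-precision complex numbers at 16 bytes each):

```python
# A full statevector of n qubits holds 2**n complex amplitudes, so every
# added qubit doubles the memory required to simulate it classically.
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number

def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

print(statevector_bytes(30) / 2**30)  # 30 qubits: 16 GiB
print(statevector_bytes(31) / 2**30)  # one more qubit doubles it: 32 GiB
```

By this measure, exactly simulating Eagle’s 127 qubits would require far more memory than exists in any classical computer, which is the sense in which such processors move beyond classical simulation.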
For decades, scientists have hypothesized about computers based on the same mathematics and physics used to model the behavior of subatomic particles. Because quantum mechanics describes the world at that scale, quantum computers could outperform traditional computers at simulating quantum systems in various domains. Among the challenges to building quantum computers is noise: even the tiniest external disturbance can cause qubits to decohere.
“To get good performance, we need to balance having many qubits, with improved coherence time and gate fidelity quality, and running fast enough to make quantum computing practical,” Sutor said. “If any of these three are poor or too small, then the quantum computing done with the system will not be useful.”
Alongside the 433-qubit Osprey processor, IBM said it “will continue to develop the Qiskit open source software development platform with the community, and we will be working with customers and partners to get us closer to useful and breakthrough industry applications of quantum computing.”
As quantum computing scales, the focus is shifting to real-world applications. If quantum scaling is measured by the number of qubits and quality by quantum volume, then quantum processing speed is a measure of the useful work those qubits can perform in a reasonable time. IBM defines this metric as the number of “quantum circuit layers” that can be processed per second. Similar to floating-point operations per second in classical computing, improving QPU speed is critical to practical quantum computing.
Useful quantum computing requires running as many circuits as possible, with some applications requiring more than 1 billion. At the lowest level, QPU speed is driven by the underlying architecture. That’s among the reasons IBM chose superconducting qubits, enabling easier coupling of qubits to the resonators and processors. The result is faster gates, resets and readout fundamentals.
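A simplified throughput calculation (plain Python; IBM’s actual CLOPS benchmark is more involved) shows why circuit execution speed matters as much as qubit count when workloads demand a billion circuits:

```python
def layers_per_second(circuits: int, layers_per_circuit: int, elapsed_s: float) -> float:
    """Simplified speed metric: total circuit layers executed per second."""
    return circuits * layers_per_circuit / elapsed_s

def seconds_to_run(total_circuits: int, circuits_per_second: float) -> float:
    """Wall-clock time to finish a workload at a given execution rate."""
    return total_circuits / circuits_per_second

# At 1,000 circuits per second, a billion-circuit workload takes
# a million seconds, or roughly 11.6 days:
print(seconds_to_run(1_000_000_000, 1_000) / 86_400)
```

The circuit rate and layer counts above are illustrative, not published Eagle figures; the point is that an order-of-magnitude speedup in circuit execution translates directly into days saved per application run.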
“Though we call it quantum computing,” Sutor noted, “it is really an integration of new quantum processors and control units with classical technology. Since we already understand the classical parts, that gives us a tremendous advantage.”
Eagle incorporates nearly twice as many qubits as the 65-qubit Hummingbird processor. According to IBM, techniques developed in previous quantum processors had to be combined and improved in order to develop a processor architecture that utilizes advanced 3D packaging techniques. That approach can serve as the foundation for the 1,000-plus-qubit Condor processor.
Eagle is based on the earlier Falcon processor’s “heavy hexagon” qubit architecture, in which qubits are connected to two or three neighbors as if situated on the edges and corners of tessellated hexagons. Those connections decreased potential errors caused by interactions between neighboring qubits. Eagle also incorporates readout multiplexing, reducing the amount of electronics and wiring required within the dilution refrigerator.
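The heavy-hexagon idea can be sketched as a coupling map, a list of which qubit pairs can interact directly. The 12-qubit edge list below is a hypothetical single unit cell, not Eagle’s actual layout; it shows the defining property that no qubit couples to more than three neighbors, which limits crosstalk:

```python
from collections import defaultdict

# Hypothetical coupling map: one "heavy hexagon" unit cell, with a qubit
# on each corner (0-5) and one on each edge midpoint (6-11).
edges = [(0, 6), (6, 1), (1, 7), (7, 2), (2, 8), (8, 3),
         (3, 9), (9, 4), (4, 10), (10, 5), (5, 11), (11, 0)]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Within this single cell every qubit has exactly two coupled neighbors;
# in the full tessellated lattice, corner qubits reach degree three.
print(sorted(degree.values()))
```

Sparse connectivity trades some routing convenience for fewer unwanted qubit-qubit interactions, consistent with the error-reduction rationale described above.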
According to IBM, 3D integration allows specific microwave circuit components and wiring to be placed on different physical levels. While qubit packaging remains among the biggest challenges in designing future quantum computers, multi-level wiring and other components would help increase qubit counts in future QPUs.
Indeed, IBM’s latest QPU resembles a pair of multi-layer chips: the Josephson junction-based superconducting qubits reside on one chip, which is attached to a separate interposer chip through bump bonds. The interposer chip provides key connections to the qubits via standard CMOS packaging techniques, including substrate vias and a buried wiring layer, a unique use of that technique.
Also introduced this week was the IBM Quantum System Two, a prototype quantum system aimed at data centers. IBM touts the platform’s modular design, allowing hardware scaling as needed. System Two includes cryogenic components and higher-density wiring, along with new scalable qubit control circuits. IBM and Bluefors are collaborating on a redesigned cryogenic platform with a larger footprint, capable of linking quantum processors via new interconnects or dedicated cooling zones.
Gil said System Two represents another step toward using quantum hardware as a complement to traditional processing of data center workloads, laying the groundwork for the ecosystem needed for wider adoption of quantum computing.
This article was originally published on EE Times.
Maurizio Di Paolo Emilio holds a Ph.D. in Physics and is a telecommunication engineer and journalist. He has worked on various international projects in the field of gravitational wave research. He collaborates with research institutions to design data acquisition and control systems for space applications. He is the author of several books published by Springer, as well as numerous scientific and technical publications on electronics design.