Quantum computers are gaining traction across fields from logistics to finance. But as most users know, they remain research experiments, limited by imperfections in hardware. Today’s machines tend to suffer hardware failures—errors in the underlying quantum information carriers called qubits—in times much shorter than a second. Compare that with the approximately one billion years of continuous operation before a transistor in a conventional computer fails, and it becomes obvious that we have a long way to go.
Companies building quantum computers, such as IBM and Google, have highlighted that their roadmaps include using “quantum error correction” to achieve what is known as fault-tolerant quantum computing as they scale to machines with 1,000 or more qubits.
Here we’ll help you understand what quantum error correction is and what’s required to bring it to reality.
What is quantum error correction?
Quantum error correction is an algorithm designed to identify and fix errors in quantum computers. It can draw from validated mathematical approaches used to engineer special “radiation-hardened” classical microprocessors deployed in space or other extreme environments where errors are much more likely to occur.
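To build intuition for how redundancy catches errors, here is a deliberately classical toy sketch of a three-bit repetition code with majority-vote decoding. (This is a simplification: real quantum codes detect errors through indirect “syndrome” measurements rather than by reading the data directly, and must handle phase errors too, but the majority-vote logic is the core idea.)

```python
import random

def encode(bit):
    # Repetition code: copy the logical bit onto three physical carriers.
    return [bit, bit, bit]

def apply_noise(codeword, p):
    # Flip each physical bit independently with probability p.
    return [b ^ (random.random() < p) for b in codeword]

def correct(codeword):
    # Majority vote recovers the logical bit unless 2 or more bits flipped.
    return int(sum(codeword) >= 2)

random.seed(0)
p = 0.05
trials = 100_000
failures = sum(correct(apply_noise(encode(0), p)) != 0 for _ in range(trials))
print(f"physical error rate: {p}")
print(f"logical error rate:  {failures / trials:.4f}")  # roughly 3p^2, well below p
```

Because two flips must coincide to fool the vote, the logical error rate scales as p squared, which is the basic mechanism every error-correcting code exploits.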
Quantum error correction is real and has seen many partial demonstrations in laboratories around the world—initial steps that make it clear it’s a viable approach. 2021 may just be the year when it is convincingly demonstrated to give a net benefit in real quantum-computing hardware.
Unfortunately, with corporate roadmaps and a complex scientific literature highlighting a growing number of relevant experiments, an emerging narrative falsely paints quantum error correction as a future panacea for the ills of quantum computing.
In combination with the theory of fault-tolerant quantum computing, quantum error correction suggests that engineers can, in principle, build an arbitrarily large quantum computer that would be capable of arbitrarily long computations if operated correctly. This would be a stunningly powerful achievement. The prospect that it can be realized underpins the entire field of quantum computer science: Replace all quantum computing hardware with “logical” qubits running quantum error correction, and even the most complex algorithms come into reach. For instance, Shor’s algorithm could be deployed to render Bitcoin insecure with just a few thousand error-corrected logical qubits. On its face, that doesn’t seem far from the 1,000+ qubit machines promised by 2023. (Spoiler alert: this is the wrong way to interpret these numbers.)
What challenges are posed by quantum error correction?
The challenge comes when we look at the implementation of quantum error correction in practice. The algorithm by which quantum error correction is performed itself consumes resources—more qubits and many operations.
Returning to the industry’s promised 1,000-qubit machines: so many resources might be required that those 1,000 physical qubits yield only, say, five useful logical qubits.
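A back-of-the-envelope sketch shows why, using a commonly quoted surface-code rule of thumb of roughly 2d^2 - 1 physical qubits per logical qubit at code distance d. The exact constants depend on the code and the hardware, so the numbers below are illustrative assumptions, not a measured overhead:

```python
def physical_per_logical(d):
    # Surface-code rule of thumb: d^2 data qubits plus d^2 - 1 ancilla
    # qubits used for error detection, per logical qubit.
    return 2 * d**2 - 1

total_qubits = 1000
for d in (5, 7, 9):
    per_logical = physical_per_logical(d)
    print(f"distance {d}: {per_logical} physical qubits per logical -> "
          f"{total_qubits // per_logical} logical qubits from {total_qubits}")
```

Larger code distances suppress errors more strongly but eat qubits quadratically, which is how a 1,000-qubit machine can shrink to a handful of usable logical qubits.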
Even worse, the amount of extra work needed to apply quantum error correction currently introduces more error than it corrects. Quantum error correction research has made great strides since the earliest efforts in the late 1990s, introducing mathematical tricks that reduce the associated overheads or make it easier to operate on logical qubits without disturbing the encoded information. And the gains have been enormous, bringing the break-even point, where it’s actually better to perform quantum error correction than not, at least 1,000 times closer than original predictions. Still, the most advanced experimental demonstrations show it’s at least 10 times better to do nothing than to apply quantum error correction in most cases.
This is why a major public-sector research program run by the U.S. intelligence community has spent the last four years seeking to finally cross the break-even point in experimental hardware for just one logical qubit. We may well unambiguously achieve this goal in 2021—but that’s the beginning of the journey, not the end.
Crossing the break-even point and achieving useful, functioning quantum error correction doesn’t mean we suddenly enter an era with no hardware errors—it just means we’ll have fewer. Quantum error correction only totally suppresses errors if we dedicate infinite resources to the process, an obviously untenable proposition. Moreover, even forgetting those theoretical limits, quantum error correction is imperfect and relies on many assumptions about the properties of the errors it’s tasked with correcting. Small deviations from these mathematical models (which happen in real labs) can further reduce quantum error correction’s effectiveness.
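These points can be made concrete with the standard heuristic scaling for code-based error suppression, in which the logical error rate behaves like A * (p / p_th) ** ((d + 1) / 2): below a threshold physical error rate p_th, increasing the code distance d suppresses errors, but never to zero at any finite d, and above threshold adding more correction machinery actually makes things worse. The constants A and p_th below are illustrative assumptions, not measured values:

```python
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    # Heuristic scaling: suppression improves with code distance d only
    # while the physical error rate p sits below the threshold p_th.
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (0.02, 0.005):
    rates = [logical_error_rate(p, d) for d in (3, 5, 7)]
    trend = "errors grow with d" if rates[-1] > rates[0] else "errors shrink with d"
    print(f"p={p}: " + ", ".join(f"{r:.2e}" for r in rates) + f" ({trend})")
```

The break-even point discussed above is exactly the crossing between these two regimes, and the suppression only reaches zero in the limit of infinite code distance, i.e. infinite resources.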
Instead of thinking of quantum error correction as a single medicine capable of curing everything that goes wrong in a quantum computer, we should instead consider it an important part of a drug cocktail.
Combining quantum error correction with low-level quantum control is key
As mathematically special as quantum error correction is within abstract quantum computing, in practice it’s really just a form of what’s known as feedback stabilization. Feedback is the same well-studied technique used to regulate your speed while driving with cruise control or to keep walking robots from tipping over. This realization opens new opportunities to attack the problem of error in quantum computing holistically and may ultimately help us move closer to what we actually want: real quantum computers with far fewer errors.
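The cruise-control analogy can be made literal in a few lines: measure the deviation from a setpoint, apply a correction proportional to it, and repeat. This is a minimal sketch of proportional feedback with made-up gain and drag values; in quantum error correction, syndrome measurements play the role of the speedometer and corrective quantum operations play the role of the throttle:

```python
def cruise_control(speed, target, gain=0.5, drag=0.02, steps=20):
    # Proportional feedback: each step measures the error and pushes
    # back against it, while a drag term acts as a persistent disturbance.
    history = []
    for _ in range(steps):
        error = target - speed
        speed += gain * error - drag * speed
        history.append(speed)
    return history

trace = cruise_control(speed=0.0, target=100.0)
print(f"after 20 steps: {trace[-1]:.1f} (target 100.0)")
```

Note that the loop settles slightly below the target because the disturbance never vanishes, a simple analogue of the point above: feedback reduces error, it does not eliminate it.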
Fortunately, there are signs that within the research community a view to practicality is emerging. For instance, there is greater emphasis on approximate approaches to quantum error correction that help deal with the most nefarious errors in a particular system, at the expense of being a bit less effective for others.
The combination of hardware-level, open-loop quantum control with feedback-based quantum error correction may also be particularly effective. Quantum control permits a form of “error virtualization” in which the overall error properties of the hardware are transformed before the quantum error correction encoding is applied. The resulting benefits include reduced overall error rates, better error uniformity between devices, better hardware stability against slow variations, and greater compatibility of the error statistics with the assumptions of quantum error correction.
Each of these benefits can reduce the resource overheads needed to implement quantum error correction efficiently. Such a holistic view of the problem of error in quantum computing—from quantum control at the hardware level through to algorithmic quantum error correction encoding—can improve net quantum computational performance with fixed hardware resources.
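As a toy illustration of what open-loop control can do before any encoding is applied, consider a spin echo: for a qubit with an unknown but static frequency offset, a single flip halfway through the evolution reverses the sign of subsequent phase accumulation and cancels the error exactly. This is a cartoon model with an assumed quasi-static noise distribution; real pulse sequences and real noise are far richer:

```python
import random

def free_phase(detuning, t):
    # Uncontrolled evolution: phase error grows linearly with time.
    return detuning * t

def echo_phase(detuning, t):
    # A flip at t/2 reverses later phase accumulation, so a static
    # detuning cancels exactly: +d*t/2 then -d*t/2.
    return detuning * (t / 2) - detuning * (t / 2)

random.seed(1)
detunings = [random.gauss(0.0, 1.0) for _ in range(1000)]  # quasi-static noise
free = sum(abs(free_phase(d, 1.0)) for d in detunings) / len(detunings)
echo = sum(abs(echo_phase(d, 1.0)) for d in detunings) / len(detunings)
print(f"mean |phase error| without control: {free:.3f}")
print(f"mean |phase error| with echo:       {echo:.3f}")
```

Errors removed this cheaply at the hardware level never have to be paid for with the far more expensive qubit overheads of full quantum error correction.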
Q-CTRL’s specialization in quantum control, from novel forms of machine-learning-assisted feedback stabilization through to open-loop control, positions us perfectly to deliver on this vision.
What is the future of quantum error correction?
None of this discussion means that quantum error correction is somehow unimportant for quantum computing. And there will always remain a central role for exploratory research into the mathematics of quantum error correction, because you never know what a clever colleague might discover. Still, a drive to practical outcomes might even lead us to totally abandon the abstract notion of fault-tolerant quantum computing and replace it with something more like fault-tolerant-enough quantum computing. That might be just what the doctor ordered.
This article was originally published as “Quantum Computer Error Correction Is Getting Practical” in IEEE Spectrum.