Making Quantum Error Correction practical
Last year we saw a wide range of major announcements highlighting demonstrations of Quantum Error Correction (QEC), the key ingredient in what is known as fault-tolerant quantum computing. Most notably, an experiment from Harvard triggered speculation that we’ve entered the regime of logically encoded quantum computing.
Here we set out to help you understand what this means and to provide real insights into where we go from here - no PhD required.
Quantum computers suffer from an Achilles’ heel that is simply not present in conventional computers: error-prone hardware. These errors arise in the underlying quantum information carriers, called qubits, on timescales much shorter than a second. Hardware failure is almost unheard of in conventional “classical” computing, where a processor can run continuously for roughly a billion years before a single transistor fails.
This enormous gap between the stability of classical computers and the faulty, error-prone quantum computers we have today has driven a huge amount of research investment and attention.
Here at Q-CTRL we specialize in stabilizing quantum hardware against errors so users can dramatically expand what’s possible on a real device. For instance, with 127 qubits we’ve shown that using our infrastructure software you can push IBM hardware right to its intrinsic limits, enabling record-setting performance across a wide range of benchmarks.
But the hardware still has limits! What do we do to break through those?
Understanding Quantum Error Correction
QEC is an algorithm designed to identify and fix errors in quantum computers. In combination with the theory of fault-tolerant quantum computing, QEC suggests that engineers can in principle build an arbitrarily large quantum computer that, if operated correctly, would be capable of arbitrarily long computations (so long as some basic conditions are met - more on that later).
This would be a stunningly powerful achievement. The prospect that it can be realized underpins the entire field of quantum computer science: Replace all quantum computing hardware with “logical” qubits running QEC, and even the most complex and valuable algorithms come into reach. For instance, Shor's algorithm could be deployed to crack codes or hack Bitcoin with just a few thousand error-corrected logical qubits. On its face, that doesn't seem far from the 1,000+ qubit machines promised by 2023. (Spoiler alert: this is the wrong way to interpret these numbers).
In a cartoon depiction, QEC involves “smearing” the information in one physical qubit over many hardware devices to form what’s known as a logical qubit. One logical qubit carries the same information as a single physical qubit, but now, if QEC is run in the right way, any failures of the constituent hardware units can be identified and fixed while preserving the stored quantum information! While it sounds futuristic, it actually draws from validated mathematical approaches used to engineer special “radiation hardened” classical microprocessors deployed in space or other extreme environments where hardware errors are much more likely to occur.
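To make the cartoon a bit more concrete, here is a minimal sketch in Python. It classically simulates the textbook three-qubit bit-flip repetition code (real QEC acts on quantum states and must also handle phase errors, so this illustrates the structure rather than being a faithful quantum simulation): parity checks locate a single faulty carrier without ever reading out the encoded value.

```python
import random

# Toy model of the three-qubit bit-flip repetition code (classical simulation).
# One logical bit is "smeared" over three physical carriers; parity checks
# (the "syndrome") locate a single error without reading the encoded value.

def encode(logical_bit):
    return [logical_bit, logical_bit, logical_bit]

def apply_noise(physical, p_error):
    # Each physical carrier flips independently with probability p_error.
    return [b ^ (random.random() < p_error) for b in physical]

def measure_syndrome(physical):
    # Pairwise parities, analogous to the Z1Z2 and Z2Z3 stabilizer checks
    # in the quantum version of this code.
    return (physical[0] ^ physical[1], physical[1] ^ physical[2])

def correct(physical, syndrome):
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome)
    if flip is not None:
        physical[flip] ^= 1
    return physical

def decode(physical):
    return int(sum(physical) >= 2)  # majority vote

# Compare the raw physical error rate with the encoded ("logical") error rate.
p, trials, failures_raw, failures_enc = 0.05, 100_000, 0, 0
for _ in range(trials):
    failures_raw += (random.random() < p)            # unencoded bit
    noisy = apply_noise(encode(0), p)                # encoded bit
    corrected = correct(noisy, measure_syndrome(noisy))
    failures_enc += (decode(corrected) != 0)

print(f"physical error rate ~ {failures_raw / trials:.4f}")
print(f"logical  error rate ~ {failures_enc / trials:.4f}")  # roughly 3*p**2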
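```

Running this, the encoded failure rate comes out around 3p², well below the raw physical error rate as long as p is small enough - a foreshadowing of the trade-off discussed below.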
QEC is real and has seen many partial demonstrations in laboratories around the world—initial steps that make it clear it's a viable approach. And recently, a beautiful experiment showed that many logically encoded qubits could be run and made to interact in order to execute a full algorithm on logical qubits - an astounding demonstration!
Quantum Error Correction tends to introduce more error than correction
So, is this the end of the story? Do we just rinse and repeat, making more logical qubits in order to build arbitrarily large machines that are resilient against errors?
Not quite. There’s one major problem: right now QEC does not, in general, make quantum computers perform better. In fact, aside from a handful of beautiful demonstrations, it has almost always made things worse in the experiments published to date.
If we’re now able to run QEC, why aren’t things just getting better and better? The QEC procedure itself consumes resources: more qubits and many additional operations. All of that extra work tends to introduce more error than it corrects.
The point at which the improvement balances the newly added errors is generically called the error threshold (a measure of the bare hardware performance). Clearly we want our hardware to operate below (better than) the error threshold before running QEC. Many hardware teams are getting close to this break-even point - so will that finally enable arbitrarily large quantum computations?
Even crossing this break-even point and achieving functioning QEC doesn't mean we suddenly enter an era with no hardware errors—it just means QEC actually gives some net benefit! There’s a huge difference between “some net benefit” and enough error reduction to run a large and complex algorithm with high likelihood of success.
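To put some illustrative numbers on the threshold idea, here is a rough sketch using the widely quoted surface-code scaling law, p_L ≈ A × (p/p_th)^((d+1)/2), where p is the physical error rate and d is the code distance. The constants (A ≈ 0.1, p_th ≈ 1%) are assumptions chosen for illustration, not measured values for any particular device.

```python
# Toy illustration of the error threshold, assuming the textbook
# surface-code scaling law  p_L ~ A * (p / p_th)**((d + 1) / 2),
# with illustrative constants A = 0.1 and p_th = 1e-2.
A, p_th = 0.1, 1e-2

def logical_error_rate(p_physical, distance):
    return A * (p_physical / p_th) ** ((distance + 1) / 2)

for p in (1.5e-2, 5e-3, 1e-3):   # above, just below, well below threshold
    rates = [f"d={d}: {logical_error_rate(p, d):.1e}" for d in (3, 5, 7)]
    print(f"p = {p:.1e}  ->  " + ", ".join(rates))

# Above threshold (p = 1.5e-2) a bigger code makes the logical error rate
# *worse*; below threshold, each increase in distance suppresses it further.
```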
Making Quantum Error Correction practically useful
In order to make QEC practically useful we actually have to first reduce errors well below the error threshold. It turns out that the power of QEC grows as the hardware performance gets better - a truly virtuous circle!
So we need to ask: how far below the error threshold must we be for QEC to deliver practical benefits in the applications we care about?
According to some calculations, running a large, high-value algorithm like Shor’s algorithm for codebreaking with hardware that is only just below the threshold may require a ratio of >100,000:1 – over 100,000 physical qubits to encode just one logical qubit. Returning to the idea of running Shor’s algorithm with a few thousand logical qubits, that suddenly implies a few hundred million physical devices! Compare that number to the recent pronouncements of 127-qubit utility-scale machines from IBM and you see just how painful the notion is.
But pushing the hardware well below the error threshold can reduce this overhead penalty massively! In short, if we follow the math it looks like we can reduce this number from >100,000:1 → ~50:1 by making the hardware errors more than 1,000x lower than the threshold. You can learn more about this detailed example from my recent Keynote at IEEE Quantum Week.
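For the curious, the back-of-the-envelope arithmetic behind ratios like these looks roughly as follows. This sketch reuses the illustrative scaling law from above, assumes roughly 2d² physical qubits per surface-code logical qubit, and picks a target logical error rate of 1e-10 purely for illustration; with those assumed constants the numbers land in the same ballpark as the figures quoted here.

```python
# Rough sketch of the physical-to-logical overhead, using the same
# illustrative scaling law as above: p_L ~ A * (p/p_th)**((d+1)/2),
# roughly 2*d**2 physical qubits per surface-code logical qubit, and
# an assumed target logical error rate for a long algorithm.
A, p_th, target_pL = 0.1, 1e-2, 1e-10

def overhead(p_physical):
    # Smallest odd code distance d that reaches the target logical error rate.
    d = 3
    while A * (p_physical / p_th) ** ((d + 1) / 2) > target_pL:
        d += 2
    return d, 2 * d * d

for p in (9e-3, 1e-3, 1e-5):   # barely below, 10x below, 1,000x below threshold
    d, n_phys = overhead(p)
    print(f"p = {p:.0e}: distance d = {d}, ~{n_phys} physical qubits per logical qubit")

# Barely below threshold the ratio blows out past 100,000:1; a thousand-fold
# improvement in hardware error brings it down to the order of 50:1.
```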
The notion of simply throwing QEC at an error-prone processor and hoping it will deliver improvements is a bit like trying to cool a room by blasting the air conditioning while leaving all of the doors and windows open. You might feel the cool air, but it’s exceptionally inefficient and takes far too long to actually lower the temperature in the room.
Combining error suppression with QEC closes all of the windows and doors, and adds insulation to the room first. Then running the air conditioning for a short while quickly gets you to a comfortable temperature!
Success stories from the community
Better than this analogy is the evidence we already have in hand. Three separate manuscripts from Nord Quantique, the University of Sydney, and Q-CTRL published in 2023 show how the combination of our error suppression tools with QEC encoding makes QEC perform better – whether in the quality of logical encoding or in QEC’s ability to actually identify and correct errors.
In Nord Quantique’s work, the inclusion of our technology for hardware optimization meant the difference between achieving only break-even performance with QEC and actually achieving a net enhancement in logical qubit lifetime!
Even teams working independently are implicitly using error suppression in their most advanced QEC demos because they know it’s essential to push the hardware to its limits as a first step. Google integrated the core technique of dynamical decoupling into the execution of their impressive QEC experiments with large codes. And any time an experimentalist uses “DRAG” or a similar pulse-shaping technique at the gate level, they’re really combining error suppression with QEC!
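For readers curious what dynamical decoupling actually does, here is a toy numerical model in plain Python/NumPy (not tied to any vendor’s pulse-level API). A qubit idling in superposition accumulates an unwanted phase from a random, slowly varying frequency offset; a simple echo sequence, a refocusing pulse halfway through the idle period, makes the two halves of that accumulation cancel.

```python
import numpy as np

# Toy model of dynamical decoupling: a qubit idles in superposition while
# a random, quasi-static frequency offset (one value per run) makes it
# accumulate an unwanted phase. An echo, i.e. a pi pulse at the midpoint,
# reverses the sign of the second half of the accumulation so the two
# halves cancel. All parameters are illustrative.
rng = np.random.default_rng(0)
n_runs, idle_time, offset_std = 10_000, 1.0, 2.0    # arbitrary units

offsets = rng.normal(0.0, offset_std, n_runs)        # one offset per run

# Free evolution: phase = offset * idle_time
phase_free = offsets * idle_time

# Echo (the simplest dynamical decoupling sequence): the pi pulse at
# idle_time/2 flips the sign of the second half of the accumulation.
phase_echo = offsets * idle_time / 2 - offsets * idle_time / 2

# Average coherence <cos(phase)>: 1.0 means no dephasing at all.
print(f"coherence without decoupling: {np.mean(np.cos(phase_free)):.3f}")
print(f"coherence with echo:          {np.mean(np.cos(phase_echo)):.3f}")
```

In real hardware the offset also drifts within the idle window, so the cancellation is only partial and multi-pulse sequences do better, but the principle is the same.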
Combining error suppression with QEC appears to be the only viable path forward for three reasons:
- We need the hardware to perform at its intrinsic limits in order to get far below the error threshold, and no matter how good the hardware may be, imperfections in all of the classical signals sent to the hardware can always cause errors that need cleaning up.
- We know that the action of error suppression improves the compatibility between the typical errors experienced in hardware and the mathematical assumptions behind QEC.
- Alternative techniques like error mitigation – which have shown promise in the NISQ era by post-processing imperfect results – appear largely incompatible with the “single-shot” approach to real-time QEC.
Moving forward to an error-corrected quantum future
We’re thrilled to be working with the hardware manufacturers and the QEC encoding teams to deliver a future in which QEC is both beneficial and practically relevant. And because of the issues identified above, there’s work to be done by the entire community to make this real.
Achieving practical QEC ultimately requires four areas to progress in parallel:
- Intrinsic hardware performance
- Error suppression in hardware operation
- QEC encoding and protocols
- Repetitive execution in real time
Most of the media attention has been on the third point, but without the others, error correction will remain a subject of scientific research rather than a source of practical utility. At Q-CTRL, where we aim to make quantum technology useful, we’re working to ensure everything needed for practical QEC is delivered!
Get in touch to learn more about how our professional quantum EDA tools for error suppression or fully integrated performance management can help you accelerate the path to real and useful Quantum Error Correction!