Building a quantum implementation roadmap with the arrival of Quantum Error Correction

A simple guide to help IT leaders in enterprise, government, and research understand this technology and build a foundation for the future.

Excitement about the era of Quantum Error Correction (QEC) is reaching a fever pitch. This topic has been under development for many years by academics and government agencies, as QEC is a foundational concept in quantum computing.

More recently, industry roadmaps have not only openly embraced QEC, but hardware teams have also started to show convincing demonstrations that it can really be implemented to address the fundamental roadblock for quantum computing - hardware noise and error. This rapid progress has upended notions that the sector could be stagnating in the so-called NISQ era and reset expectations among observers.

As you look ahead and plan for the adoption of quantum computing in your sector, how do these dynamics impact you?

If you’re seeking a foundational understanding of when and how QEC will matter to you, we have prepared a simple overview for IT leaders in enterprise, government, and research to help them build their quantum implementation roadmaps over the next decade. We’ve focused on adding the nuance that’s normally missing in order to equip you with a real competitive advantage in how you plan for the future.

Understanding Quantum Error Correction

Quantum computers suffer from an Achilles’ heel that is simply not present in conventional computers: error-prone hardware. These errors arise in the underlying quantum information carriers, called qubits, in times typically much shorter than a second. By contrast, hardware failure is almost unheard of in conventional “classical” computing, where processors can run for approximately one billion years continuously before a transistor fails.

Thinking about how to deal with faulty hardware has to be top of mind for any early adopter of quantum computing.

Fortunately, there are well-understood pathways forward. Aside from the performance-management infrastructure software that Q-CTRL recently deployed into IBM Quantum to handle errors right now (leveraging techniques called error suppression), there’s also a long-term play which has been gaining a lot of attention of late - Quantum Error Correction (QEC).

QEC is an algorithm designed to identify and fix errors in quantum computers. In combination with the theory of fault-tolerant quantum computing, the tenets of QEC suggest that engineers can in principle build an arbitrarily large quantum computer that, if operated correctly, would be capable of arbitrarily long computations. It’s a foundational concept in quantum information science that goes all the way back to the field’s inception.

In a cartoon depiction, QEC involves “smearing” the information in one physical qubit over many hardware devices, encoding it to form what’s known as a logical qubit. One logical qubit carries the same information as a single physical qubit, but if QEC is run in the right way, any failures of the constituent hardware units can be identified and fixed while preserving the stored quantum information! Note that for technical reasons “encoding” is not as simple as duplicating the data - a logical qubit is really a single unit carrying just one qubit-worth of information.

To process using QEC, “just” replace all qubits in a quantum computer with encoded logical qubits running the iterative error detection and correction algorithm, and even the most complex and valuable algorithms can (in principle) come into reach. A major step in this direction happened recently (building on many previous demonstrations), when a full quantum algorithm was run on several dozen logically encoded qubits!

Of course there’s no free lunch. Adding all of those additional operations and physical devices to execute QEC actually introduces many more opportunities for error. In order for QEC to deliver a real net benefit we need to push the hardware very hard to ensure we surpass the breakeven point.

Taking this approach also means that for a fixed quantum processor size you have fewer effective qubits to work with. In the real world of finite hardware resources you can’t just “replace” physical qubits with logical ones - instead you have to partition the devices you have into logical blocks, where most devices really serve little more than the process of QEC.

In generous estimates, ~90% of the qubits available in a quantum processor are dedicated to error correction rather than straightforward processing (~10 physical qubits per logical qubit)… and in challenging limits, 99.9% of qubits are tied up (~1,000 physical qubits per logical qubit)! If things are working well, then as system sizes grow, the benefits achieved in reducing errors outweigh this overhead penalty.
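
For the quantitatively minded, here is a tiny, purely illustrative Python sketch of the arithmetic behind those percentages: the overhead fraction is simply one minus the reciprocal of the physical-to-logical ratio. The 1,000:1 ratio for the challenging limit is our inference from the 99.9% figure, not a vendor specification.

```python
# Illustrative arithmetic only: the QEC overhead fraction implied by a physical-per-logical ratio.
for physical_per_logical in (10, 1_000):
    overhead = 1 - 1 / physical_per_logical
    print(f"{physical_per_logical:>5} physical qubits per logical qubit -> "
          f"{overhead:.1%} of the processor devoted to QEC")
# Prints 90.0% for a 10:1 ratio and 99.9% for a 1,000:1 ratio.
```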

If you’d like to understand more about the fundamentals of QEC, have a look at our overview of the topic, in which we explain the basics as well as the path forward for QEC to deliver on its full potential.

Where are we now?

In order to chart a path forward, you need to know where you’re starting. Let’s begin with a simple statement: right now QEC does not, in general, make quantum computers perform better. In fact, but for a handful of beautiful demonstrations, it almost always makes things worse.

Progress has been incredible. QEC has seen a raft of impressive demonstrations since 2021 from teams pushing the frontiers of what’s possible. These include validation of most of the underlying concepts (e.g. how to encode, and confirmation of the math showing that more complex encoding leads to better error correction), the ability to repeatedly identify and correct errors, the ability to improve QEC’s performance by combining it with error suppression, and even the ability to deliver net improvements to logical qubit lifetime or to the quality of quantum logic operations performed on logical qubits. The recent Harvard experiments even showed that it’s possible to run whole algorithms on many encoded logical qubits.

As of early 2024 we have still not seen all of the pieces put together with a net benefit delivered. Understandably, most scientific experiments are structured to elucidate particular aspects of the QEC process. None of them execute QEC in the generic way that will ultimately be required - autonomous, iterative, repeated error identification and correction enabling any logical circuit to be run.

We are so close, but there remains work to do before we convincingly make the transition out of the NISQ era. With that status check complete, it’s time to look forward, because the future is very exciting.

A practical roadmap for the arrival of Quantum Error Correction

With recent demonstrations and forward-looking industry roadmaps anticipating major progress in the rollout of QEC in quantum processors, how should you plan for the use of QEC in your quantum-computing implementation roadmap? Is QEC-based processing a need right now, or is it a future concern? And how do you even begin to answer that?

When it comes to thinking about the relevance of QEC, many have rather naively assumed adoption will be binary - available or not, on or off. That simplistic view is not good enough because it misses what actually matters - delivering value to end-users for the problems they care about.

Remember that QEC consumes lots of resources and itself actually introduces a lot of new error. That means turning QEC “on” doesn’t instantly deliver error-free quantum computers - far from it.

In the evolution of QEC, the next most important step is to achieve performance beyond breakeven while executing the entire QEC process. In that regime QEC actually starts helping consistently.

But it may not help very much; errors will still build up and eventually cause failure, perhaps just at a slightly slower rate than doing nothing. More importantly, the reduction in errors achieved with QEC may not be as big as what you can achieve using other techniques that are already in use. That is, QEC may be an inferior approach to choose, even when it helps!

We therefore need to define something new – QEC advantage – the point at which the entire resource-intensive process of QEC delivers net computational capability and performance better than what could be achieved through simpler means like error suppression or error mitigation on bare, unencoded qubits.

With these considerations in mind we can reframe the roadmap for the practical rollout of QEC:

  • NISQ Regime: QEC is not yet broadly available, though key components may be demonstrated. QEC is generally a research domain and resource overheads are impractical without major improvements in baseline hardware performance.
    Impact: Users employ unencoded “bare” physical qubits to run algorithms.
  • QEC Beyond Breakeven: QEC delivers some net benefit when applied, but benefits are not as large as alternate techniques that do not require encoding. Overhead levels in implementing QEC remain high due to QEC inefficiency, especially as hardware size will often be a bottleneck. QEC remains a development area and may appear as a test capability in commercial platforms.
    Impact: Users will continue to preferentially choose to execute algorithms on unencoded bare physical qubits in order to maximize the computational capability of the quantum processors they use.
  • The QEC Advantage Regime: QEC delivers substantial improvement beyond alternate techniques and resource requirements shrink, making its execution preferred for running algorithms. Meanwhile hardware sizes have increased to the point where the resources dedicated to executing QEC leave a sufficiently large number of logical qubits to run meaningful algorithms.
    Impact: This is the era of “logical QC” and users will likely prefer to execute using logically encoded qubits given the choice, subject to hardware-size constraints.
  • The Fault-Tolerant Regime: QEC efficiently delivers very large benefits, and enables large-scale quantum computing. QEC is essential at this stage and broadly used commercially, combined with background error-suppression processes.
    Impact: Users only engage with abstracted encoded logical qubits running QEC.

Using published hardware-vendor roadmaps and historical data on rates of progress, we estimate the arrival of each regime in the evolution of QEC and approximate the relevant regime boundaries based on system capability. Specific dates are subject to change based on novel approaches introduced by vendors and should be taken as indicative only.

The taxonomy above moves beyond previous simplified descriptions of either “NISQ” or “fault-tolerant quantum computing,” which we see here are extremal cases (today we’re right on the border of NISQ and QEC Beyond Breakeven).

What happens in between the simplified extremes is what matters most to end users building their own implementation roadmaps for quantum-computing adoption.

The categories we have introduced bear similarity to designations used previously by GQI in identifying the continuous transition from NISQ to fault-tolerant quantum computing. Taken together, they reflect the recognition that there is unlikely to be an abrupt adoption of error-corrected devices in real settings, and they highlight challenges and risks for both end-users and sector participants in this process.

The quantum community has debated at length the details of the threshold of “quantum advantage” - the point at which a quantum application is preferred for an economically relevant problem because it outperforms the best classical alternative. This is a first attempt to bring a similar level of rigor to discussions about QEC adoption in quantum computing roadmaps.

Measuring quantum processing unit performance to capture value

When we think about capturing and delivering value we clearly need to focus on what matters to end-users: solving the problem that matters to them while achieving a computational benefit. Irrespective of when true quantum computational advantage arrives, users always want to push towards solving the highest-value problems they can with the quantum computers we have.

Industry roadmaps over the next decade will see improvement in hardware size, baseline hardware performance, and the quality of QEC implementation. Many will even advertise the availability of QEC.

In the middle regime - before full large-scale, fault-tolerant machines are achieved - users will be forced to make tradeoffs as they try to extract as much computational value as possible from the limited hardware available to them. Running QEC will not necessarily be the best choice. How should you approach this in practice?

Imagine you have Q logical qubits and each can execute P gates before failure is likely. We say that the logical performance is L=QP. This is a simplified picture of Quantum Volume that harkens back to Nielsen and Chuang, the foundational text in the field.

Overall, for a given problem we want our quantum processor’s L to exceed the requirements of that algorithm. For instance, in some relevant cryptographic problems one may need L>10^12. That could be one thousand logical qubits executing one billion quantum logic gates each; on average, all of that has to complete before an error leads to an algorithmic failure in order to get a useful answer. Fortunately, many great papers report requirements in language just like this for key applications of interest to the community.
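
As a concrete illustration of this bookkeeping, here is a minimal Python sketch of the L=QP arithmetic. The function name and the requirement figure (the hypothetical cryptographic example above: one thousand logical qubits, one billion gates each) are purely illustrative, not taken from any library or vendor specification.

```python
# Minimal sketch of the L = Q x P bookkeeping described above. Figures are illustrative.

def logical_performance(qubits: int, gates_before_failure: int) -> int:
    """L = Q * P: qubits available times the gates each can execute before failure is likely."""
    return qubits * gates_before_failure

# Hypothetical cryptographically relevant requirement from the text: L > 10^12,
# e.g. one thousand logical qubits each running one billion quantum logic gates.
required_L = 10**12

machine_L = logical_performance(qubits=1_000, gates_before_failure=10**9)
print(machine_L >= required_L)  # True only if the processor can meet the algorithm's demand
```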

In the magical middle - the regimes between NISQ and full Fault-Tolerance with limited hardware resources - users will have to decide whether they dedicate all of their qubits to executing a computation, or whether they dedicate 90% of them or more to just performing QEC.

That is, will the effective “L” be bigger with bare qubits and the best alternate techniques, or with encoded qubits and QEC? That, in a nutshell, is asking if you’ve achieved QEC Advantage.

Let’s consider a concrete but hypothetical example:

Imagine you have 200 physical qubits available (approximately the largest QPU sizes available in 2024) and each can run 100 gates before failure. Let’s also imagine we’re beyond breakeven with QEC, where using QEC reduces error rates by a solid 3X (more than has been achieved so far). When you as a user execute an algorithm you’ll have to decide whether to use bare unencoded qubits with error suppression and error mitigation, or add in QEC.

With QEC, those 200 qubits quickly turn into, say, just 20 available logical qubits, accounting for the resource overheads needed for encoding. The size of algorithm you can run is now limited by that number - 20 qubits. So even though P has increased by a factor of three using QEC, we see a modest L(QEC) = 20 X 300 = 6,000.

For simplicity, let’s assume that error suppression and mitigation applied to bare physical qubits deliver the same 3X improvement as QEC. So, without QEC we achieve L(No QEC) = 200 X 300 = 60,000: ten times higher than with QEC because the overhead penalty of QEC is removed!

In this example, despite having QEC available beyond breakeven, it can’t give advantage. The overhead and weak benefits of QEC, given limited hardware resources, are just not worth it!
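
To make the comparison explicit, here is a short Python sketch of the tradeoff just described. The 10:1 encoding overhead, the 3X improvement factors, and the baseline figures are the illustrative numbers from the example above, not measured results, and the function name is our own shorthand.

```python
# Sketch of the QEC-vs-bare-qubit tradeoff from the example above (illustrative numbers only).

def effective_L(physical_qubits: int, base_gates: int, improvement: int,
                physical_per_logical: int = 1) -> int:
    """Effective L = (usable qubits) x (gates before failure, scaled by the improvement factor)."""
    usable_qubits = physical_qubits // physical_per_logical
    return usable_qubits * base_gates * improvement

PHYSICAL_QUBITS = 200  # processor size in the example
BASE_GATES = 100       # gates before failure on bare hardware

l_with_qec = effective_L(PHYSICAL_QUBITS, BASE_GATES, improvement=3, physical_per_logical=10)
l_without_qec = effective_L(PHYSICAL_QUBITS, BASE_GATES, improvement=3)  # suppression + mitigation

print(f"L with QEC:    {l_with_qec:,}")     # 20 x 300 = 6,000
print(f"L without QEC: {l_without_qec:,}")  # 200 x 300 = 60,000
```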

We chose this example because the difference between 20 and 200 qubits in a calculation can really mean the difference between computationally trivial and computationally intractable problems for a classical computer. The tradeoff in deciding whether or not to use QEC in the intermediate regime is real and material.

When will Quantum Error Correction Advantage arrive?

Beyond the QEC Advantage threshold, the size and performance of hardware - and the improvements achieved through the use of QEC - together motivate the preferential use of logically encoded qubits over alternative approaches on bare qubits alone.

Current industry roadmaps make clear that 1,000-qubit machines with the capacity to perform 1,000 quantum logic operations each (L = 1,000 X 1,000 = 10^6) are coming within the next 5-7 years. And improvements in both base hardware performance limits (like T1, for the experts) and error suppression strategies are expanding achievable circuit depths (P) to their limits.

This is a very exciting regime.

Our rough estimates say that the threshold of QEC advantage will be about L~10^6-10^7, and this will likely be achieved after 2030.

Of course that is subject to change in a rapidly evolving sector; for instance, recent work from our customer Alice & Bob shows that it may be possible to trade qubit resources for alternate “bosonic” resources, which can accelerate the timeline to achieving L~10^6-10^7 with logical qubits.

In the meantime it will be better to work with larger numbers of bare unencoded qubits to solve computational problems, while execution on logically encoded qubits will primarily serve as a testbed.

Until then, manufacturers will push the frontier of what’s achievable with QEC, and in the background, Q-CTRL will be working to run hardware at the absolute limits, together bringing the QEC Advantage Threshold even closer! At Q-CTRL, where we aim to make quantum technology useful, we’re working to ensure everything needed for practical QEC is delivered to our hardware customers and the broader community!

Get in touch to learn more about how our professional quantum EDA tools for error suppression or fully integrated performance management can help you accelerate the path to Quantum Error Correction and deliver useful quantum computing for your most important problems!

This article was originally published in the Quantum Computing Report under the title, "What You Need to Know to Build a Quantum Implementation Roadmap with the Arrival of Quantum Error Correction".
