
Choosing the right quantum error reduction strategy: A practical guide to error suppression, error mitigation, and quantum error correction

November 10, 2025
Written by
Tatiana Petkova

Quantum computers hold transformative promise, with finance, AI, chemistry, logistics, and transportation all set to be radically disrupted by the emergence of new quantum-powered solutions. Development is accelerating—leading providers (IBM, IonQ, and Oxford Quantum Circuits) are projecting large-scale devices delivering quantum advantage by the end of this decade.

Yet despite exceptional progress, today’s hardware remains limited by high error rates. Tackling these errors is the central challenge of the field; without effective reduction strategies, even the most advanced machines deliver unreliable results. 

With all of the excitement about recent Quantum Error Correction (QEC) demonstrations, it might seem the problem is solved—QEC is here to save the day and errors are a thing of the past.

Unfortunately, not quite. In practice, you cannot execute a fully error-corrected algorithm on any hardware available now or in the near future; you can only implement pieces of QEC, meaning the utility delivered for your application may be limited. And in most cases, QEC makes the overall processor worse. So how can you attack the problem of managing errors to increase the value of your quantum computing investment today?

Software-based techniques that suppress or mitigate errors are essential to unlocking near-term value and paving the way to quantum advantage. So, which to choose? There is a complex interplay between the mechanics of error-reduction techniques and the characteristics of a specific application, making the selection of the best error-handling strategy a genuinely challenging task.

We previously wrote about the basic differences between the core techniques to address errors: error suppression, error mitigation, and quantum error correction. 

Here we provide a practical guide empowering you to understand how and when to select between these methods when executing your target quantum applications. We also include a comprehensive appendix covering a wide variety of applications so you can simply look up a recommendation for the TL;DR. 

Know your application first

The first step in choosing a strategy comes from understanding the key characteristics of your use case. Not all algorithms have the same structures or characteristics, and their variability against a few key metrics determines which error-reduction strategies will work best for a given use case.

Output type, workload size, and circuit characteristics are the key things to consider when selecting error-reduction strategies. We go through these in detail below.

Output type: full distribution or average only

Quantum tasks generally fall into two output categories: sampling and estimation. We will see below that because the outputs differ so wildly between these categories, some error-management strategies are fully incompatible with certain applications.

Sampling tasks can provide a range of viable outputs (over different measured bitstrings), all of which are valid “answers” to the algorithmic query. The constituents and shape of this distribution contain critical information that must be preserved. By contrast, estimation tasks seek only the average value of some measurable parameter; we contrast the two concretely with a short code sketch after the list below.

We can categorize many well-known algorithms against these types of output:

  • Sampling tasks (bitstring probabilities): generate measurement outcomes in the form of bitstrings whose frequencies approximate an output probability distribution within the quantum state produced by a quantum circuit or process. For example, sampling-based algorithms (like SQD or Quantum Monte Carlo), optimization tasks such as QAOA, and most future quantum algorithms and subroutines (like Grover, QFT, QPE, and eventually QEC).
  • Estimation tasks (expectation values): compute statistical properties of a quantum system—such as an observable’s mean value—based on repeated measurements of that system. For example, tasks common in quantum chemistry and condensed matter physics, variational algorithms, and simulations that rely on precise estimation of observables. 
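To make the distinction concrete, here is a minimal sketch contrasting the two output types on the same prepared state. It assumes Qiskit is installed, and the Bell-state circuit and ZZ observable are purely illustrative stand-ins for a real application circuit and observable.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, SparsePauliOp

# Prepare a simple two-qubit entangled state as a stand-in for the
# output state of some application circuit.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
state = Statevector.from_instruction(qc)

# Sampling task: the full distribution over measured bitstrings is the answer.
# The shape of this histogram carries the information we care about.
counts = state.sample_counts(shots=4000)
print("Sampled distribution:", counts)   # roughly half '00', half '11'

# Estimation task: only a single averaged quantity is the answer,
# e.g. the expectation value of the observable Z⊗Z.
zz = SparsePauliOp("ZZ")
print("Expectation value <ZZ>:", state.expectation_value(zz).real)  # ≈ +1.0
```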

Workload sizes and shot counts

Most quantum applications do not involve a single execution of just one circuit representing the problem of interest. Quantum tasks can vary significantly in scale, from single circuits to workloads involving hundreds or thousands of circuit executions. The number of circuits and the number of shots (samples) per circuit execution directly impact the computational overhead and overall execution time required for an application. Between these, the circuit count dominates execution time unless an extreme number of shots is required.

Workload sizes can roughly be characterized as follows:

  • Light: fewer than 10 circuits, 
  • Intermediate: low 100s of circuits,
  • Heavy: 1,000s of circuits.

These characteristics matter because, as we’ll see below, the overhead included in different error-reduction strategies can vary by orders of magnitude. A light task might handle some additional overhead well, but a heavy workload can be rendered totally impractical with the wrong error-reduction strategy.
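To get a feel for how these categories interact with overhead, here is a back-of-envelope helper in plain Python. The per-circuit time and the overhead multiplier are assumptions chosen purely for illustration, not measurements from any particular device or technique.

```python
def total_runtime_hours(n_circuits, seconds_per_circuit, overhead_multiplier=1.0):
    """Rough quantum execution time for a workload.

    n_circuits:          number of circuit executions in the workload
    seconds_per_circuit: baseline time per circuit, all shots included
    overhead_multiplier: extra repetitions demanded by the error-reduction
                         strategy (close to 1 for suppression, potentially
                         very large for mitigation with exponential sampling cost)
    """
    return n_circuits * seconds_per_circuit * overhead_multiplier / 3600.0

# Illustrative numbers only: 3 seconds per circuit baseline.
for label, n in [("light", 10), ("intermediate", 300), ("heavy", 3000)]:
    base = total_runtime_hours(n, 3)
    mitigated = total_runtime_hours(n, 3, overhead_multiplier=1000)
    print(f"{label:>12}: ~{base:.2f} h baseline, ~{mitigated:,.0f} h with a 1000x overhead")
```

The same multiplier that turns a light workload's seconds into a few hours turns a heavy workload's hours into months, which is why workload size matters so much when picking a strategy.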

Circuit width and depth

Even at the level of individual circuits, quantum applications span a broad range of characteristics. 

As an example, circuits representing the time evolution of molecules and spin models can vary dramatically: simulating 3 time points of a 10-spin model on a line requires 10 qubits and a circuit with a 2-qubit gate depth of only 3, whereas simulating the detailed dynamics of large molecules (like iron clusters) requires a ~100-qubit circuit with a 2-qubit gate depth of many thousands.
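As a hedged sketch of the small end of that range, the snippet below builds a toy first-order Trotter circuit for a nearest-neighbour spin chain using Qiskit. The chain length, number of steps, coupling angle, and even/odd layering are illustrative assumptions, not a prescription for any particular model.

```python
from qiskit import QuantumCircuit

def line_trotter_circuit(n_spins, n_steps, theta=0.1):
    """Toy first-order Trotter circuit for a nearest-neighbour ZZ spin chain.

    Each step applies ZZ interactions on even bonds, then odd bonds, so the
    two-qubit depth grows linearly with the number of simulated time steps.
    """
    qc = QuantumCircuit(n_spins)
    for _ in range(n_steps):
        for i in range(0, n_spins - 1, 2):   # even bonds
            qc.rzz(theta, i, i + 1)
        for i in range(1, n_spins - 1, 2):   # odd bonds
            qc.rzz(theta, i, i + 1)
    return qc

qc = line_trotter_circuit(n_spins=10, n_steps=3)
print(qc.num_qubits, "qubits, depth", qc.depth())
```

Pushing either the system size or the number of time steps up by orders of magnitude is what turns this few-layer circuit into the coherence-limited, thousands-deep circuits described above.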

When required circuit widths are high, preserving available data qubits will be a high priority. For error-reduction strategies that involve physical redundancy, this might prove a steep penalty, leaving insufficient qubit resources for the execution of the core algorithms. 

For very deep circuits, users will typically face limits imposed by incoherent errors (so-called coherence limits). Since not all approaches to error management handle these errors equally, circuit depth may motivate selection of certain techniques over others.

Understand the limits of different error management strategies

Error suppression: A critical first line of defense for dominant errors

Error suppression corresponds to leveraging flexibility in quantum platform programming in order to execute a circuit correctly given anticipated imperfections in the hardware. 

It proactively reduces the impact of noise at both the gate and circuit levels by either avoiding errors (e.g. via circuit routing or choice of gate set) or actively suppressing them via the physics of coherent averaging and dynamical decoupling in gate and circuit design. This technology is deterministic, meaning it provides effective error reduction in a single execution without the need for repeated execution or statistical averaging. 

Error suppression is an effective first line of defense for any application, whatever its output characteristics. It can dramatically suppress coherent errors, but it cannot address incoherent errors exhibiting true randomness, such as so-called T1 processes (i.e., finite qubit lifetimes).
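As a schematic illustration of the dynamical decoupling idea, the snippet below hand-places a pair of refocusing pulses on a qubit that would otherwise sit idle. This is a simplified, assumption-laden sketch: production tooling inserts such sequences (e.g. X-delay-X or XY4) into timed delay windows during scheduling rather than as back-to-back gates.

```python
from qiskit import QuantumCircuit

# Toy two-qubit circuit in which qubit 1 would otherwise sit idle
# while qubit 0 executes several gates before the entangling operation.
qc = QuantumCircuit(2)
qc.h(0)
qc.x(1)        # first refocusing pulse on the otherwise-idle qubit
qc.rz(0.7, 0)
qc.x(1)        # second pulse: the pair composes to the identity on the ideal
               # circuit, but echoes away slowly varying dephasing accumulated
               # during the idle window
qc.h(0)
qc.cx(0, 1)
print(qc)
```

The logical content of the circuit is unchanged; only its resilience to coherent and low-frequency noise during the idle period improves.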

You can read more about the underlying technology and its construction in this technical manuscript.

Error mitigation: Improvement comes at exponential runtime cost

Error mitigation addresses noise in post-processing by averaging out the impact of noise that occurs during circuit execution. Unlike error suppression, it does not prevent errors, but rather reduces the average impact of the noise on the output, as determined by performing many repetitions of the target circuit and post-processing the outputs. 

Commonly used methods include zero-noise extrapolation (ZNE) and probabilistic error cancellation (PEC). Both techniques are popular, and come with complementary tradeoffs: PEC provides a theoretical guarantee on the accuracy of solutions, at the cost of exponential overhead in preliminary device characterization, repeated circuit executions, and classical post-processing. On the other hand, ZNE obviates the need for exponential overhead, but in exchange omits performance guarantees. From here on, as we discuss these methods, we will keep PEC (and its closely related variants) in mind as it has gained considerable attention in recent times. 
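To make the flavour of these methods concrete, here is a minimal zero-noise-extrapolation sketch using only NumPy. The "measured" expectation values at each noise-scale factor are made-up placeholders standing in for results that would come from executing noise-amplified (e.g. gate-folded) versions of the circuit on hardware.

```python
import numpy as np

# Noise-scale factors at which the circuit was (hypothetically) executed,
# e.g. by folding gates so the effective noise is amplified 1x, 3x, 5x.
scale_factors = np.array([1.0, 3.0, 5.0])

# Placeholder expectation values observed at each scale factor.
measured = np.array([0.82, 0.55, 0.37])

# Fit a low-order model in the scale factor and evaluate it at zero noise;
# Richardson, linear, and exponential ZNE variants differ only in this model.
coeffs = np.polyfit(scale_factors, measured, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"ZNE estimate at zero noise: {zero_noise_estimate:.3f}")
```

The quantum cost shows up in the need to run every circuit at several noise scales, each with enough shots, which is exactly the overhead discussed below.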

Error mitigation methods have the benefit that they can compensate for both coherent and incoherent error processes. 

This comes with two very significant limitations. First, these methods are not universally applicable: in particular, they cannot be used when the full output distribution of a quantum circuit must be analyzed, a common requirement in most sampling algorithms, including some of the most important subroutines in the field. Second, the exponential overhead of most error mitigation methods applies a runtime performance penalty that can rapidly blow out to impractically long times.

These significant limits have largely constrained error mitigation routines to be deployed in the context of the simulation of physical and chemical systems, with limited relevance to other applications.

Quantum Error Correction: Powerful in principle, yet resource-intensive

Quantum error correction (QEC) is an algorithm designed to deliver error resilience via the addition of physical redundancy. Spreading quantum information across many qubits through the process of encoding provides a mechanism by which errors can be identified and corrected on a rolling basis as they occur. 

This technique is foundational to quantum information theory. It showed that even though information in quantum systems is continuous, errors can be made discrete when using QEC. This simple observation is fundamental to the notion that it is even possible to build arbitrarily large quantum computers.
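To make the encode-detect-correct cycle tangible, here is a toy classical simulation of the three-qubit bit-flip repetition code in plain Python. It is a deliberately simplified stand-in: it captures only bit-flip errors and majority-vote decoding, not the full machinery of a real quantum code.

```python
import random

def encode(bit):
    """Encode one logical bit redundantly across three 'physical' bits."""
    return [bit, bit, bit]

def apply_bit_flips(codeword, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def syndromes(codeword):
    """Parity checks on (bits 0,1) and (bits 1,2): a nonzero pattern locates a single flip."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def decode(codeword):
    """Majority vote: corrects any single flip, fails on two or more."""
    return int(sum(codeword) >= 2)

p, trials = 0.05, 100_000
noisy = apply_bit_flips(encode(0), p)
print("codeword:", noisy, "syndrome:", syndromes(noisy), "decoded:", decode(noisy))

failures = sum(decode(apply_bit_flips(encode(0), p)) != 0 for _ in range(trials))
print(f"physical flip rate {p}, logical error rate ~ {failures / trials:.4f}")
# For small p the logical rate scales like 3*p**2, i.e. below the physical
# rate -- the basic promise of redundancy-based error correction.
```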

In general, QEC codes can be designed to handle any form of quantum error (even including qubit loss), and any arbitrary algorithm can be crafted to execute on encoded logical qubits rather than bare physical qubits. Many algorithms require so many qubits and operations that only QEC can feasibly deliver the necessary effective error rates.

This universality comes at a tremendous cost which limits utility today and will continue to limit utility for years to come. 

First, QEC doesn’t “end” the existence of errors—it reduces their likelihood. The physical-qubit overhead required to implement QEC and hit a target (reduced) logical error rate depends on the selected code and the starting physical error rates. Unfortunately, in many extrapolations this ratio can be 1000:1 or more. Considering no one has yet operated a 1,000-qubit quantum computer, this can be a significant penalty on today’s machines. For example, the recent Google demonstration of a distance-7 surface code used a full device of 105 physical qubits to realize a single logical qubit. That is, even if everything were working perfectly in their QEC demonstration, they would have only one useful qubit—far too few to run any workload at all.
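The rough shape of that ratio can be illustrated with a standard back-of-envelope scaling model. The formula, prefactor, and threshold value below are common textbook assumptions, not measurements of any specific device, code, or decoder, so treat the output as order-of-magnitude only.

```python
def required_distance(p_physical, p_target, p_threshold=1e-2, prefactor=0.1):
    """Smallest odd code distance d with prefactor*(p/p_th)**((d+1)/2) <= p_target."""
    d = 3
    while prefactor * (p_physical / p_threshold) ** ((d + 1) / 2) > p_target:
        d += 2  # surface-code distances are odd
    return d

def physical_qubits_per_logical(d):
    """Approximate surface-code footprint: d*d data qubits plus d*d - 1 ancillas."""
    return 2 * d * d - 1

# Illustrative: roughly today's gate error rates, algorithm-grade logical target.
p_phys, p_target = 1e-3, 1e-10
d = required_distance(p_phys, p_target)
print(f"distance {d}: ~{physical_qubits_per_logical(d)} physical qubits per logical qubit")
```

Lowering the physical error rate or relaxing the logical target shrinks this ratio substantially, which is one reason suppression and QEC are natural partners (more on that below).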

In addition, fault-tolerant execution (specifically, execution using QEC strategies that mathematically guarantee certain error-scaling properties) often runs thousands to millions of times slower than uncorrected circuits, severely restricting the feasibility of large-scale quantum computations. This comes partly from the error identification and correction process that must run continuously, and partly from the complexity of manipulating encoded logical qubits. Most codes do not permit every logical operation to be implemented transversally, that is, via 1:1 operations between the corresponding physical qubits of two logical qubits. Accordingly, many additional operations are required in actual implementations, constituting a physical and timing overhead in logical-circuit execution.

In almost all demonstrations to date, even when logical-qubit error rates get lower than the bare physical-qubit values using QEC (a very hard threshold to achieve), the overall processor performance decreases due to these various forms of QEC overhead. The device will either be effectively much smaller or much slower to the point of greatly diminished utility.

More importantly, a comprehensive validated toolkit of QEC operations has yet to be fully demonstrated. Current demonstrations serve primarily as proof-of-concept rather than scalable solutions. Practical QEC use remains constrained to small-scale experiments until quantum hardware advances significantly, especially given the numbers of qubits needed to achieve Quantum Advantage. 

Figure 1: Relationship between quantum error suppression, mitigation, and correction including example use cases.

Matching strategies to your quantum workloads

Now that you understand the critical characteristics of both your applications and the potential error-reduction strategies available, it’s time to align these two sets of constraints. To help with this, we’ve made a quick summary that illustrates the key choices from a practical view, and we provide a comprehensive table in the appendix.

Figure 2: Applicability of error suppression, mitigation, and correction across quantum task types, workloads, and example use cases.

To suppress or to mitigate on today’s machines?

First, QEC is largely limited to scientific demonstrations on the machines available today: running algorithms on logically encoded qubits will remain less effective than alternative strategies until machines become much bigger. If your workload truly requires QEC, you’ll have to wait for significantly larger hardware. For a deeper dive on this issue, read more here.

So for workloads accessible on today’s machines you only need to choose whether to implement error suppression or error mitigation. Above we outline which classes of problem are compatible with error mitigation; now we’ll focus our discussion on problem scale when considering estimation tasks.

In execution, error suppression primarily introduces timing overhead during classical pre-processing phases like circuit compilation. This pre-processing typically adds only seconds or minutes to total execution time regardless of workload scale. Once compiled, circuits run natively on hardware, with minimal additional quantum runtime overhead. For workloads involving 100s to 1,000s of circuits, error suppression keeps total quantum execution times essentially unchanged—ranging from seconds or minutes for small tasks to only a few hours for the largest workloads.

For light workloads, error-mitigation overhead is manageable and the technique can yield precise results, especially when incoherent errors are problematic. However, for intermediate to large workloads, the exponential quantum-execution overhead of error mitigation quickly becomes impractical. For workloads involving 100s to 1,000s of circuits (typical for quantum optimization or machine learning), mitigation overhead can extend quantum runtimes from a few hours to weeks or months.

A concrete example:

Assume that we want to estimate the expectation value of a 20-qubit Hamiltonian associated with a molecule or an electronic lattice model, under some state that we prepare with a quantum circuit.

In order to do so, we need to write the Hamiltonian as a weighted sum of 20-qubit Pauli strings. Let's assume that, when doing so, we find the Hamiltonian can be expressed using 100,000 Pauli strings. (That sounds like a lot, but the number of Pauli strings over 20 qubits exceeds a trillion, so this is a sparse Hamiltonian.) Luckily, many of these observables commute, meaning they can be measured in the same basis, so we can divide the Pauli strings into related groups.
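As a hedged sketch of that grouping step, the snippet below uses Qiskit's SparsePauliOp on a randomly generated stand-in Hamiltonian (far smaller and simpler than the 100,000-term example, with made-up coefficients); the point is only that the number of commuting groups, not the number of terms, sets the circuit count.

```python
import numpy as np
from qiskit.quantum_info import SparsePauliOp

rng = np.random.default_rng(seed=7)
n_qubits, n_terms = 20, 500   # a small, 2-local stand-in for the 100,000-term case

# Build random 2-local Pauli strings: identity everywhere except two qubits.
paulis = []
for _ in range(n_terms):
    label = ["I"] * n_qubits
    q1, q2 = rng.choice(n_qubits, size=2, replace=False)
    label[q1], label[q2] = rng.choice(list("XYZ")), rng.choice(list("XYZ"))
    paulis.append("".join(label))
hamiltonian = SparsePauliOp(paulis, rng.normal(size=n_terms))

# Qubit-wise commuting groups: each group shares a measurement basis, so one
# circuit per group suffices -- the group count, not the term count, drives runtime.
groups = hamiltonian.group_commuting(qubit_wise=True)
print(f"{n_terms} Pauli terms -> {len(groups)} measurement groups")
```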

For 100,000 observables it is common to find ~5,000 such groups, and for each group we’ll have to run the circuit of interest and measure using the relevant basis. Overall, we'll need to run 5,000 circuits with a number of shots that is determined by the accuracy we wish to have, let's say 10,000 shots each. Execution of 10,000 shots takes about 3 seconds, and therefore, 5,000 circuits are likely to consume 15,000 seconds, a bit more than 4 hours.

Using error suppression, that 4-hour execution time remains approximately unchanged. But with error mitigation things can look quite different.

Assume that we wish to use the probabilistic error cancellation (PEC) error mitigation method for such a use case, and that each circuit requires 1 hour of execution due to overhead (a conservative estimate, since such circuits tend to be deep). The total execution time then increases to around 5,000 hours, which is over 200 days. Clearly, this is a case of a large workload where the overhead of PEC renders it impractical. Even variants of PEC designed to be more efficient still scale exponentially, so although they might provide 2-3x improvements, the total execution time still runs to many tens of days.
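For reference, here is the arithmetic in this example as a short script; the per-circuit times are the same assumed figures used above, not benchmarks of any device or PEC implementation.

```python
n_groups = 5_000             # commuting measurement groups for the Hamiltonian
seconds_per_circuit = 3      # assumed time to execute 10,000 shots

baseline_hours = n_groups * seconds_per_circuit / 3600
print(f"Suppression only: ~{baseline_hours:.1f} hours")          # a bit over 4 hours

pec_hours_per_circuit = 1    # assumed per-circuit cost once PEC overhead is included
pec_days = n_groups * pec_hours_per_circuit / 24
print(f"With PEC overhead: ~{pec_days:.0f} days")                # over 200 days

for speedup in (2, 3):       # more efficient variants still scale exponentially
    print(f"  {speedup}x faster variant: ~{pec_days / speedup:.0f} days")
```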

Strategic compatibility: Combining methods effectively

Quantum developers can maximize outcomes by strategically combining error-reduction strategies in ways that bring together complementary strengths:

  • Suppression + mitigation: Suppression reduces the baseline impact of noise, thereby making mitigation more effective and less resource-intensive, and ensuring all classes of errors are efficiently reduced. This can be a potent combination in the NISQ era.
  • Suppression + QEC: Lowering physical error rates through suppression can significantly ease QEC’s stringent resource demands. Doing so also therefore enables the maximization of performance on existing hardware, such as by improving the quality of QEC primitives. We see this as a key pathway for the future in large-scale quantum computing, and one already validated on real hardware.
  • Mitigation + QEC: As QEC is a runtime method and mitigation a post-processing method, they are not trivially compatible. Recent research suggests that implementing error mitigation after QEC at the logical level is theoretically possible, though there is not yet a consistent paradigm for characterizing noise models and performing mitigation at the logical level. Worse still, this combination would stack error mitigation's exponential runtime overhead on top of QEC's substantial qubit overhead, potentially erasing even exponential scaling gains from the use of a quantum computer and making it unlikely that this combination will ever be practical for large-scale computing.

Strategically using these methods together (or not) ensures optimal performance on current hardware and prepares for future advancements.
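A minimal sketch of the first combination, suppression-style compilation followed by a mitigation step, is below. It assumes Qiskit; the aggressive transpilation stands in for a broader suppression pipeline (vendor tooling also adds, e.g., dynamical decoupling at scheduling time), and the global folding shown here would feed the same toy ZNE extrapolation sketched earlier.

```python
from qiskit import QuantumCircuit, transpile

# Illustrative application circuit.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# Step 1 (suppression-flavoured): aggressive compilation reduces gate count
# and depth before anything is executed.
compiled = transpile(qc, optimization_level=3)

# Step 2 (mitigation): build a noise-amplified copy by global folding --
# circuit, then its inverse, then the circuit again (~3x effective noise) --
# execute both versions, and extrapolate to zero noise as sketched earlier.
body = compiled.remove_final_measurements(inplace=False)
folded = body.compose(body.inverse()).compose(body)
folded.measure_all()

scaled_circuits = {1: compiled, 3: folded}
print({scale: circ.depth() for scale, circ in scaled_circuits.items()})
```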

Figure 4: Combining error-reduction methods to maximize current hardware performance while preparing for future error corrected quantum computing.

Conclusion: A balanced, strategic path forward

Effectively managing quantum errors necessitates clarity on when and how to apply suppression, mitigation, and correction. Whether performing precise spectroscopy for quantum chemistry, extracting expectation values in variational algorithms, or scaling quantum optimization workloads, tailoring error-reduction strategies significantly impacts practical outcomes. So here are our key takeaway recommendations:

  • Always implement error suppression: Implement suppression universally across all quantum tasks to ensure maximum baseline performance.
  • Selectively apply mitigation, combined with error suppression: When workload size remains manageable and requires the estimation of high-accuracy expectation values, error mitigation can complement error suppression in achieving better estimation with relatively moderate overhead.
  • Prepare for the arrival of practical QEC in the long term: Continue tracking the development and advancement of QEC technologies, maintaining realistic expectations regarding their near-term practical constraints.

As we look ahead, we’re also excited by the prospect of new high-efficiency partial-QEC strategies that can deliver much higher value in the near term. For instance, recent demonstrations showing that combining gate-level optimization with bosonic encoding can push superconducting devices beyond breakeven, or that leveraging QEC primitives like error detection without full encoding can provide net improvements in device performance (including enabling a record, rigorously benchmarked 75-qubit GHZ state), suggest that truly hybrid approaches may be the most promising pathway forward.

By taking a balanced, strategic approach—anchored by universally applicable error suppression, augmented selectively by mitigation, and preparing strategically for quantum error correction—quantum developers can optimize today's quantum hardware while paving the way for future quantum advancements.

Appendix: Detailed use case examples

Each entry below lists the use case and field, a description, the task type and workload size (excluding overhead), and the applicable error-reduction strategies.

Use case: Known state properties
Field: Quantum chemistry (condensed matter physics)
Description: Estimating a small set of correlations in a state we know explicitly, e.g., prepare a GHZ state and calculate all ZZ correlations.
Task type: Estimation
Workload size: Light. The number of circuits equals the number of non-commuting correlations in the set; in the extreme case where all correlations commute (Z basis), a single circuit is needed (1–5 sec assuming 4,000 shots/sec).
Applicable strategies: Error suppression; error mitigation

Use case: Fixed time evolution
Field: Quantum dynamics & simulations
Description: Similar to the first example above, except the state is not known explicitly; instead it is the time evolution of a known initial state under a specific Hamiltonian for a fixed time.
Task type: Estimation
Workload size: Light. Similar to the known-state-properties use case above, except that the circuits are likely to be much deeper, depending on the evolution scheme.
Applicable strategies: Error suppression; error mitigation

Use case: Quantum phase estimation
Field: Quantum chemistry (electronic structure)
Description: A deterministic method to extract spectral properties of a known Hamiltonian with a circuit representation.
Task type: Sampling
Workload size: Workload specific. There are different flavors of QPE, but in its basic form it is a single circuit execution. In a commercial context, such circuits are not near-term.
Applicable strategies: Error suppression

Use case: Sampling-based eigensolvers
Field: Quantum chemistry (electronic structure)
Description: Such methods can be part of classical pipelines for calculating ground-state properties, which can then be used, for example, in catalysis calculations. One such method is SQD: given a model (molecule), data points are collected on the QC and passed to a classical pipeline that extracts the ground state of the model.
Task type: Sampling
Workload size: Medium. Depends on the Hamiltonian (molecule) parameters that need to be sampled and on the method (if any) used to optimize the variational circuit parameters. The number of circuits ranges from low tens to low hundreds (assuming 3–12 sec per circuit, full sampling takes from a few minutes to about an hour). For challenging catalysis problems, such a procedure may require many hundreds to thousands of circuits.
Applicable strategies: Error suppression

Use case: Limited dynamics
Field: Quantum chemistry (dynamics)
Description: Unlike fixed time evolution, here we are interested in the evolution of a property over a short time scale rather than its value at a given time. For this, experiments similar to fixed-time evolution must be repeated over a range of time points. Such problems are common in scientific settings where the crude dynamics of operators can reveal interesting characteristics of models.
Task type: Estimation
Workload size: Medium. The number of circuits depends on the number of properties to extract (non-commuting observables), the number of initial states considered, and the number of time samples (for non-detailed evolution, between 10 and 100 points in time). Overall, the total ranges from 10 to several hundred circuits.
Applicable strategies: Error suppression; error mitigation only when the circuit count is on the low side

Use case: Detailed dynamics & spectroscopy
Field: Quantum chemistry (dynamics)
Description: Commercially relevant quantum simulations require detailed, high-precision sampling, for example extracting precise spectroscopic data (like NMR) or reconstructing fast dynamics (understanding how photoexcitation decays, how chemical bonds break, the dynamics of biological systems out of equilibrium, etc.).
Task type: Estimation
Workload size: Heavy. Accounting for the required dense time resolution and the need to sample different configurations, observables, and initial states, such workloads may include anywhere from several tens of thousands to hundreds of thousands of circuits (10 to 200 hours without any overhead).
Applicable strategies: Error suppression

Use case: Closed-loop eigensolvers
Field: Quantum chemistry (condensed matter physics)
Description: Closed-loop eigensolvers find ground states of models (Hamiltonians) by iteratively optimizing ansatz circuit parameters. These methods trade the extremely heavy classical procedures of sampling-based approaches (all current demonstrations of SQD use classical supercomputers) for back-and-forth communication between a simple classical optimization algorithm and a QC.
Task type: Estimation
Workload size: Workload specific. The number of circuits depends on the number of non-commuting observables to estimate (for commercial problems, low hundreds to tens of thousands) and the number of optimization steps needed (many tens to many thousands). Overall, in commercial settings such workloads may include anywhere from several tens to hundreds of thousands of circuits (10 to 200 hours without any overhead).
Applicable strategies: Error suppression

Use case: Deterministic search algorithms (such as Grover)
Field: Search
Description: Algorithms that search for the solution to a specific encoded mathematical task. While these circuits tend to be deep, the number of circuits to execute is minimal (one to a few).
Task type: Sampling
Workload size: Light
Applicable strategies: Error suppression

Use case: QSVM, Quantum k-means, QPCA, QNN, etc.
Field: Quantum machine learning (QML)
Description: There is a large variety of QML methods. Most rely on training a parametric circuit and then using it as a predictor or classifier. The training process depends on the number of data points used for training and the number of optimizable model parameters; the training step may need thousands to millions of circuits (depending on the size of the data). The prediction step may require only a small number of circuits, depending solely on the number of points to predict. Most QML methods rely on full sampling, but some rely on specific expectation values.
Task type: Typically Sampling, occasionally Estimation
Workload size: Medium at the prediction stage, Heavy at the training stage
Applicable strategies: Error suppression

Use case: Adiabatic evolution, QAOA, quantum-centric optimizers
Field: Optimization
Description: Algorithms that aim to find optimal solutions of binary/integer classical cost functions (possibly with constraints). Some of these methods require an iterative circuit-parameter optimization process, while others require back-and-forth information transfer with classical solvers. Typically the number of circuits ranges from several tens to several hundreds.
Task type: Sampling
Workload size: Medium
Applicable strategies: Error suppression

Use case: Quantum Monte Carlo (QMC) and other sampling algorithms
Field: Finance
Description: These algorithms are typically based on preparing an initial state that encodes a relevant probability distribution, followed by a sampling algorithm such as QMC.
Task type: Sampling
Workload size: Medium to heavy
Applicable strategies: Error suppression