Case study

Enabling data loading for quantum machine learning with Fire Opal

Client
BlueQubit
THE CHALLENGE

New applications in quantum machine learning (QML) are limited by the process of classical data loading, which is highly susceptible to degradation from hardware noise and error.

THE OUTCOME

Using Fire Opal, BlueQubit demonstrated groundbreaking loading of complex “distribution” information onto 20 qubits for a QML application by reducing the effect of noise and error in the loading process. This is an exciting example of how Q-CTRL’s focus on “AI for quantum” can drive direct advances in our partners’ efforts to apply “Quantum for AI.”

IMPACT
8X

Better performance in terms of total variation distance (TV), the metric used to measure deviation from perfect data loading.

Innovating new data-loading methods

The emergence of AI has placed a renewed emphasis on the development of quantum machine learning techniques to support the next generation of advances. An explosion of interest in the nexus between quantum computing and AI has emerged as researchers and users look for techniques to reduce computational burdens and improve training efficiency.

In most quantum machine learning applications, as well as related applications such as Monte Carlo simulation on quantum computers, the first critical step is loading data into the quantum computer. In the case of QML, the data representing the problem must be directly encoded into a quantum state as an input to the quantum operations that follow.

This can be a challenging process for quantum computers, which are not efficient “big data” systems; developing efficient and accurate data-loading routines is essential to accelerating our path to quantum advantage.
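To make the encoding step concrete, here is a minimal sketch of direct amplitude encoding in Qiskit. The data values are placeholders, and exact state preparation of this kind generally costs a number of gates that grows exponentially with qubit count, which is exactly why cheaper approximate loading schemes matter.

```python
# Minimal sketch of direct amplitude encoding in Qiskit (illustrative data).
# 2^n data values are normalized and loaded as the amplitudes of an n-qubit
# state, so that |amplitude_i|^2 is proportional to data value i.
import numpy as np
from qiskit import QuantumCircuit

data = np.random.rand(8)                 # 2^3 placeholder data values
amplitudes = np.sqrt(data / data.sum())  # unit-norm amplitude vector

qc = QuantumCircuit(3)
qc.initialize(amplitudes, [0, 1, 2])     # exact, but O(2^n) gates in general
```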

Right now, the data-loading techniques in play are largely based on variational algorithms, which are amenable to near-term hardware. However, these approaches, while efficient in the amount of quantum hardware required, suffer from convergence challenges and poor scaling of operations as the number of qubits grows.

Seeking a new approach to this crucial process, BlueQubit developed a novel method for data loading that avoids both the “barren plateaus” problem and the exponential scaling of required operations with data size.

Demonstrating the value of this approach requires hardware execution, where noise and error generally mask the benefits of the novel technology in use. Using Q-CTRL’s AI-driven error suppression package, Fire Opal, the team at BlueQubit was able to reduce the impact of hardware error and scale up to loading a complex distribution of information on 20 qubits.

Learning the dataset one step at a time

BlueQubit’s innovative method uses a type of quantum machine learning algorithm called a Quantum Circuit Born Machine (QCBM). The goal of QCBMs and other techniques within the broader generative modeling family is to “learn” the overall probability distribution—the shape of the data—of a given dataset using only a sample of data points.

Generative modeling is useful for efficient data loading because instead of loading every data point individually, you can load only a subset of the data and infer the rest.
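As a rough illustration of the idea, and not BlueQubit’s actual implementation, the sketch below trains a small QCBM on a noiseless simulator: a parameterized circuit is sampled, the measured distribution is compared with a discretized Gaussian target, and a classical optimizer adjusts the circuit parameters. It assumes Qiskit, Qiskit Aer, NumPy, and SciPy; the ansatz, target, and loss are illustrative choices.

```python
# Sketch of a QCBM training loop on a noiseless simulator (illustrative only).
import numpy as np
from scipy.optimize import minimize
from qiskit.circuit.library import TwoLocal
from qiskit_aer.primitives import Sampler  # local (V1) sampler primitive

n_qubits, shots = 4, 4096

# Target: a discretized Gaussian over the 2^n measurement outcomes.
x = np.arange(2**n_qubits)
target = np.exp(-0.5 * ((x - 7.5) / 3.0) ** 2)
target /= target.sum()

# Hardware-efficient ansatz; measuring it samples the model distribution
# p_theta(x) = |<x|psi(theta)>|^2 that the QCBM "learns".
ansatz = TwoLocal(n_qubits, "ry", "cx", reps=2)
circuit = ansatz.measure_all(inplace=False)
sampler = Sampler()

def model_distribution(theta):
    result = sampler.run(circuit, parameter_values=theta, shots=shots).result()
    probs = np.zeros(2**n_qubits)
    for outcome, p in result.quasi_dists[0].items():
        probs[outcome] = p
    return probs

def tv_loss(theta):
    # Total variation distance between model and target (0 = perfect match).
    return 0.5 * np.abs(model_distribution(theta) - target).sum()

theta0 = np.random.uniform(0, 2 * np.pi, ansatz.num_parameters)
result = minimize(tv_loss, theta0, method="COBYLA", options={"maxiter": 200})
print(f"final TV distance: {result.fun:.3f}")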

Expanding on this concept, BlueQubit developed a way to make the learning process even more efficient. By breaking the process of learning the probability distribution into a series of steps, starting with fewer qubits and gradually introducing more, their method avoids some of the typical challenges of data loading with deep circuits, such as the emergence of barren plateaus in the training process. This incremental process developed by BlueQubit is called Hierarchical Learning for Quantum ML.
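In pseudocode terms, the incremental idea might look like the outline below: train on a coarsened version of the target with few qubits, then add qubits and reuse the trained parameters as a warm start for the next stage. This is only a conceptual sketch built on assumed hooks (make_loss, init_params, grow_params), not BlueQubit’s published algorithm.

```python
# Conceptual outline of hierarchical (incremental) training. The three hooks
# are hypothetical stand-ins for circuit construction like the earlier sketch:
#   make_loss(n, stage_target) -> loss over parameters of an n-qubit ansatz
#   init_params(n)             -> random initial parameters for n qubits
#   grow_params(theta, n)      -> warm start for n qubits reusing trained theta
from scipy.optimize import minimize

def coarsen(target, n):
    # Marginalize a length-2^k target (k >= n) down to 2^n coarse bins,
    # giving a low-resolution version of the distribution to learn first.
    return target.reshape(2**n, -1).sum(axis=1)

def hierarchical_train(target, n_start, n_final,
                       make_loss, init_params, grow_params):
    theta = init_params(n_start)
    for n in range(n_start, n_final + 1):
        stage_target = coarsen(target, n) if n < n_final else target
        theta = minimize(make_loss(n, stage_target), theta,
                         method="COBYLA").x
        if n < n_final:
            theta = grow_params(theta, n + 1)  # carry learning to next stage
    return theta
```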

As a simple analogy to illustrate this hierarchical learning method, imagine you have a large jigsaw puzzle to solve. It’s a complicated image, and you’re only allowed to select a set number of pieces to solve at a time. You start with a small section of the puzzle, where you can see there are a few distinct colors that match up.

After solving this portion of the puzzle, you gain partial insights into the pattern in the image. Using this pattern, you can identify additional pieces which match. With each new piece you add, you're building on what you've already learned, making the puzzle a bit bigger and more detailed each time. And as you expand the puzzle, you also track how well it matches the real picture. You constantly compare what you've got so far to the actual picture, making adjustments to get closer to the real thing.

In the quantum domain, the concept holds, but we’re forced to consider an additional complication: hardware noise and error. In our puzzle analogy, noise causes these puzzle pieces—which are extremely fragile and easily weathered—to fit together poorly or fall apart easily (imagine trying to assemble the puzzle on a moving bus!). Because of the noise, it’s hard to hold the pieces together long enough to get a sense of the puzzle’s pattern. Applying noise and error suppression through Fire Opal is like having magic tape to hold the pieces together, so that after each section you can still see the pattern and easily move on to solving the next section of the puzzle.

Revealing the power of Fire Opal on real hardware

BlueQubit’s hierarchical learning is specifically constructed with the constraints of today’s quantum hardware in mind. To ensure that it addresses the typical challenges of data loading, the team tested it using two different distributions, representing different shapes that datasets may take.

Experiments were conducted across three different execution modes on IBM Quantum devices to compare performance (a code sketch of the two hardware modes follows the list):

  1. Ideal Simulator: A classical simulation of a perfect quantum device
  2. Vanilla IBM: Qiskit Runtime’s Sampler primitive with default settings (i.e., resilience_level=1 and optimization_level=1)
  3. Fire Opal Optimized IBM: Automated, AI-driven error suppression from Q-CTRL
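For orientation, the two hardware modes might be set up roughly as below. This is a hedged sketch assuming the (V1) Qiskit Runtime Sampler interface and the fireopal Python client; option names, the fireopal.execute signature, and the credential helper may differ across versions, so consult the respective documentation.

```python
# Illustrative setup of the two hardware execution modes (not verbatim from
# the experiments; circuit, token, and shot count are placeholders).
from qiskit import QuantumCircuit
from qiskit_ibm_runtime import QiskitRuntimeService, Session, Sampler, Options

circuit = QuantumCircuit(2)
circuit.h(0); circuit.cx(0, 1); circuit.measure_all()  # placeholder circuit

service = QiskitRuntimeService()
backend = "ibm_lagos"  # 7-qubit device from Figure 1

# 2) "Vanilla IBM": Sampler with the default settings named above.
options = Options(resilience_level=1, optimization_level=1)
with Session(service=service, backend=backend) as session:
    vanilla = Sampler(session=session, options=options).run(circuit).result()

# 3) "Fire Opal Optimized IBM": the same circuit routed through Q-CTRL's
# error-suppression pipeline (placeholder credentials; see Fire Opal docs).
import fireopal
credentials = fireopal.credentials.make_credentials_for_ibmq(
    token="YOUR_IBM_TOKEN", hub="ibm-q", group="open", project="main"
)
fire_opal = fireopal.execute(
    circuits=[circuit.qasm()],  # circuits passed as OpenQASM strings
    shot_count=4096,
    credentials=credentials,
    backend_name=backend,
)
```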

After loading the data, the resulting distributions are compared using a metric called the total variation distance (TV). In simple terms, the TV distance measures the dissimilarity between two probability distributions: sum the absolute differences between the probabilities of each outcome (each bar in a histogram) and halve the result. Larger values indicate greater dissimilarity, so in the quantitative graphs below (Figure 2), “low is good.”
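As a tiny worked example (with made-up numbers), the TV distance between two four-outcome distributions can be computed directly:

```python
# Worked TV-distance example with made-up numbers; the 1/2 factor keeps
# the score in [0, 1] (0 = identical, 1 = completely disjoint).
import numpy as np

p = np.array([0.40, 0.30, 0.20, 0.10])  # target distribution
q = np.array([0.25, 0.25, 0.25, 0.25])  # measured distribution
tv = 0.5 * np.abs(p - q).sum()          # 0.5 * (0.15+0.05+0.05+0.15) = 0.20
print(tv)
```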

Full technical details on the methodology and experiments used in this approach can be found in BlueQubit’s blog.

The first set of experiments tests loading Gaussian (normal) distributions into a quantum circuit representation.

The experiments make it straightforward to see that Fire Opal is transformational in the ability to accurately represent a data distribution. The histograms below compare the experimentally measured shapes of the distributions for a specific 7-qubit example.

Figure 1. Visualizations of the output distributions of a 7-qubit, 24-CNOT circuit run across the different execution modes. The red line is the underlying true distribution. The quantum device used here is the 7-qubit IBM Lagos.

The ideal distribution is normal and is indicated by the red line, and it’s clear that the Fire Opal optimized distribution (purple data) looks much more similar to the ideal simulator, confirming that the data shape is preserved during the loading process. By contrast, the “vanilla” hardware implementation shows a distribution that appears almost inverted relative to what’s expected. This one simple example reveals how much impact Fire Opal’s error suppression technology can deliver.

Improving results by a factor of 8x

Building on the transformational results above, the BlueQubit team engaged in a detailed experimental study exploring the quantitative efficacy of their method for data loading and the impact of Fire Opal on success.

The first quantitative experiment tested different circuits for the data-loading procedure. As a general rule of thumb, increasing the number of 2-qubit (CNOT) gates makes the circuit more expressive and should improve the quality of data loading, but adding more gates also means introducing more noise.

To test this tradeoff, BlueQubit selected four circuits for the initial experiment, preparing the same distribution with differing numbers of 2-qubit gates: 6, 12, 24, and 66 CNOTs. For the graph below, the team calculated a variant of the TV metric (see the figure caption), which again ranges from 0 to 1, with values close to 0 being “good.”

Figure 2. TV4 values for four circuits run across the three different execution modes. Here TVn is used with n=4: the outcomes corresponding to the 4 most significant qubits are measured, so 2^n = 16 total outcomes contribute to the TV score.
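To make the TVn metric from the caption concrete, here is a sketch of one plausible implementation: marginalize both distributions onto the n most significant qubits before taking the TV distance. The bit-ordering assumption is illustrative.

```python
# Sketch of a TV_n computation: coarse-grain each distribution onto the n
# most significant qubits (2^n bins), then take the TV distance. Assumes the
# most significant qubits form the high-order bits of the outcome index.
import numpy as np

def tv_n(p, q, n):
    p_coarse = p.reshape(2**n, -1).sum(axis=1)  # marginalize low-order bits
    q_coarse = q.reshape(2**n, -1).sum(axis=1)
    return 0.5 * np.abs(p_coarse - q_coarse).sum()
```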

The graph shows that the Fire Opal optimized circuits always resulted in a better data-loading outcome than the Vanilla IBM runs. On the 7-qubit, 24-CNOT circuit, Fire Opal’s improvement reaches a factor of 8x. With Fire Opal suppressing noise and gate errors, it was even possible to reach a regime where adding more CNOT gates actually helps the loading process!

Scaling to groundbreaking levels: data loading for 20 qubits

The next challenge was significantly harder. BlueQubit chose to demonstrate data loading on up to 20 qubits where not only does the size of the circuit scale with more qubits, but the chosen distribution—a bimodal Gaussian mixture distribution (two peaks)—is also more complicated to learn.
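For intuition, a bimodal target of this kind can be built as a discretized two-component Gaussian mixture; the means, widths, and weights below are made-up values, not the ones used in the experiments.

```python
# Illustrative bimodal Gaussian-mixture target over 2^n outcomes
# (parameters are invented for demonstration).
import numpy as np

n_qubits = 16
x = np.arange(2**n_qubits)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Asymmetric mixture: two peaks with different weights and widths.
target = 0.6 * gaussian(x, 0.30 * x.size, 0.04 * x.size) \
       + 0.4 * gaussian(x, 0.70 * x.size, 0.08 * x.size)
target /= target.sum()
```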

In the puzzle analogy above, the pattern to identify has suddenly become much more subtle and confusing because different parts of the puzzle are similar!

Once again you can see in the experimental data that even at larger sizes, Fire Opal is able to accurately reproduce the target distribution. Fire Opal not only captures the distribution’s bimodal nature but also correctly reflects its detailed features: the Gaussian widths and their asymmetries! Meanwhile, the standard hardware implementation struggles and gets the distribution’s asymmetry wrong, a bit like assembling the puzzle upside down.

Figure 3. Visualizations of the output distributions of the same 16-qubit, 38-CNOT circuit run across the different execution modes. The red line is the underlying true distribution. The quantum device used here is the 27-qubit IBM Algiers.

As circuits scale to 20 qubits, quantitative results also show the same pattern as before: Fire Opal improves the performance of larger circuits where noise is more likely to degrade the data loading process. For full experiment details and quantitative data analysis up to 20 qubits, refer to the technical manuscript.

Opening up a realm of possibilities

With the help of Fire Opal, BlueQubit was able to demonstrate that their new hierarchical learning approach genuinely improves data loading while avoiding the typical noise and error challenges that arise with increasing scale. This allowed the capabilities of the novel approach to shine, and together, these techniques deliver significant improvements to a critical component of many quantum algorithms. As BlueQubit conducts further research into methods to make quantum computing useful, Fire Opal will continue to be pivotal in maximizing the potential of quantum devices while keeping noise interference to a minimum.

These tests validate that by reducing circuit depth and suppressing noise, Fire Opal addresses the constraints posed by today’s hardware devices and makes it possible to load complex data successfully for new applications in quantum machine learning!

If you’re exploring quantum machine learning, Fire Opal is the right tool to help you demonstrate the impact of QML on the applications that matter to you.

Get started today to experience how Fire Opal can amplify the results of your quantum research and applications.
