Q-CTRL digest

Using machine learning to automate the optimization of quantum computers

February 26, 2021
Written by
Michael Hush
Yuval Baum

Last month Dr Michael Hush, Chief Scientific Officer at Q-CTRL, and Dr Yuval Baum, Lead Quantum Automation Scientist at Q-CTRL, hosted our automated closed-loop optimization webinar. The webinar covered our groundbreaking new AI-based tools, available in Boulder Opal, that enable quantum computers to self-tune for unparalleled results. Essentially, our team of researchers is leveraging machine learning to accelerate the performance of quantum computing hardware, all without the need for human intervention.

For existing Boulder Opal customers, you can learn more about this exciting new feature and how to implement it through our code-based User Guide: Automate closed-loop hardware optimization of quantum devices.

Hundreds of people registered and attended, and the participants asked some great questions! In this blog post we're happy to share some of the audience Q&A and to answer additional questions that we weren't able to address live.

Audience Q&A

Q: How does automated closed-loop optimization compare to Bayesian optimization techniques?

A: We have a variety of different learners available within the automated closed-loop optimization feature. One is a Gaussian process, which is a traditional Bayesian approach to optimizing multi-variable systems. The other is our machine learning option, which leverages a neural network to build a model linking your system, the available control parameters, and your cost function. We will also soon be adding a reinforcement learning option based on our own successful demonstrations on IBM hardware. You can watch the live answer here.
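To illustrate the Gaussian-process approach in general terms, here is a minimal closed-loop sketch built with scikit-learn. This is not Boulder Opal's API; `measure_cost` is a hypothetical stand-in for a measurement on your device, and the kernel and acquisition rule are chosen for illustration only.

```python
# Generic sketch of Gaussian-process (Bayesian) closed-loop optimization.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def measure_cost(x):
    # Hypothetical cost returned by the device for control value x.
    return float((x - 0.3) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5, 1))          # initial control samples
y = np.array([measure_cost(x[0]) for x in X])   # measured costs

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

for _ in range(15):
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    # Lower confidence bound: explore where the model is uncertain,
    # exploit where the predicted cost is low.
    nxt = candidates[np.argmin(mean - 2.0 * std)]
    X = np.vstack([X, [nxt]])
    y = np.append(y, measure_cost(nxt[0]))

best = X[np.argmin(y), 0]
```

Each iteration refits the surrogate model to all measurements so far and proposes the next control value to try, which is the essence of the closed loop.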

Q: How do you define the cost function in the automated closed-loop optimization feature?

A: You have the flexibility to define the cost function in any way you want; it depends on the specific application you have in mind. Users interested in more exotic problems than those we've covered will most likely need to define their own cost function, and we'd be happy to help.

You can watch the live answer here.
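As a concrete illustration of how simple a user-defined cost can be, suppose the task is preparing a target measurement outcome. The estimated failure probability over repeated shots is one natural choice. The function below is a hypothetical example, not part of any Boulder Opal API.

```python
def infidelity_cost(shot_outcomes, target_outcome="1"):
    """Fraction of shots that missed the target outcome (lower is better)."""
    hits = sum(1 for s in shot_outcomes if s == target_outcome)
    return 1.0 - hits / len(shot_outcomes)
```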

Q: Is your closed-loop optimization technique applicable to optimizing annealing parameters?

A: Broadly speaking, as long as you can define a task and a cost function for that task, the exact problem you're trying to solve is not important; the automated optimization tool can handle it.

You can watch the live answer here.

Q: Which data do you use to feed your machine learning routine for applications in quantum computing?

A: The required data are identification of the parameters that can be controlled by the experiment, the cost function definition, and the cost value you measure. Ideally you would also include some measure of uncertainty associated with that cost, as this informs the machine learner how much to trust the data - how much of that cost evaluation is real and how much arises from random fluctuations. This information allows our machine learning engine to best incorporate the new measurement information it’s receiving in the process of making a better prediction to guide next actions.
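As a rough sketch of what "cost plus uncertainty" can mean in practice: when the cost is estimated from repeated projective measurements, the shot-noise contribution can be quantified with a binomial standard error. This is a generic illustration, not Boulder Opal's interface.

```python
import math

def cost_with_uncertainty(shot_outcomes, target="1"):
    # Cost is the estimated failure probability; uncertainty is the
    # binomial standard error on that estimate, which tells the learner
    # how much of the measured value to attribute to shot noise.
    n = len(shot_outcomes)
    p_fail = sum(1 for s in shot_outcomes if s != target) / n
    std_err = math.sqrt(p_fail * (1 - p_fail) / n)
    return p_fail, std_err
```

Taking more shots shrinks the standard error, so the learner can weight well-averaged measurements more heavily than noisy ones.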

You can watch the live answer here.

Q: Is there a possibility to construct a hardware model from input/output data, and would that help to improve the optimization?

A: This workflow is currently available, independent of the automated optimization engine. You start with a process known as system identification to determine model parameters, and then move on to performing model-based optimization using this information.

Q: How sure can you be to reach the global minimum and not get stuck in a local minimum?

A: We are not claiming global optimality (that's an extremely difficult claim to make!). Instead, we focus on tuning our learning agents to minimize the likelihood of getting stuck in a local minimum.

Q: Are you able to model system drift (e.g. via a random walk)?

A: We have previously shown that our work with reinforcement learning allows the agent to discover drift processes; these dynamics are then autonomously incorporated into the agent's own model. Separately, we have also developed time-series analysis capabilities, using both autoregressive Kalman filters and Gaussian processes to perform predictive estimation for systems experiencing drift.
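For readers curious what Kalman-filter-based drift tracking looks like in the simplest case, here is a generic one-dimensional sketch. It is not Q-CTRL's implementation; the drift model and noise levels are assumed purely for illustration.

```python
# Minimal one-dimensional Kalman filter tracking a slowly drifting
# parameter from noisy measurements.
import numpy as np

def kalman_track(measurements, process_var=1e-4, meas_var=0.01):
    x, p = measurements[0], 1.0   # initial estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var               # predict: drift adds uncertainty
        k = p / (p + meas_var)         # Kalman gain
        x += k * (z - x)               # update with the new measurement
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_drift = 0.5 + 0.001 * np.arange(500)            # slow linear drift
noisy = true_drift + rng.normal(0, 0.1, size=500)
tracked = kalman_track(noisy)
```

The filter's estimate follows the drifting parameter far more closely than the raw measurements do, which is what enables predictive estimation between calibrations.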

Q: Is the closed-loop optimization tool part of your firmware layer or are these two independent products?

A: The automated optimization package is already available in our Boulder Opal software. “Firmware” is a designation we use to highlight where this functionality is used in the quantum computing stack, rather than a separate product. So if you’re a Boulder Opal user, it’s already available to you!


Read an overview of how Boulder Opal’s AI tools can be used to automate the tune-up and optimization of quantum hardware systems.

Read our blog post to discover how we used an AI-based optimizer to tune up hardware in support of our 9,000x algorithmic benchmarking results.

If you would like to be notified about upcoming webinars please subscribe to our newsletter.
