How data centers can learn from GPU adoption to get ahead in the quantum race

Explore how data centers can leverage lessons from GPU adoption to accelerate quantum computing readiness and gain a competitive edge.

Market interest in quantum computing is rapidly increasing, with the emphasis shifting to “when” to adopt this revolutionary technology rather than “if” it’s relevant. Most discussion has focused on trying to align investment with forecasts for when quantum computers are likely to outperform the best classical alternatives – a threshold called quantum advantage. It’s easy to think about waiting another year or two to act; inaction is always easy. But if we’ve learned anything from past technology adoption trends, it’s that early movers reap the majority of the reward. This is borne out by detailed analyses – the Boston Consulting Group estimates early adopters will capture up to 90 percent of the benefits and will gain long-term strategic advantages that are difficult to overcome.

This article makes the case for data-center operators and others in the high-performance computing space to act now, and offers practical steps for getting started.

History shows that transformative compute paradigms require years of preparation before delivering real returns. Graphics processing units (GPUs), for example, took more than a decade of groundwork before fueling the AI revolution that now powers almost every sector of the economy. Organizations that invested early positioned themselves to capture this growth, while those who waited paid more, were caught flat-footed, and lost ground to competitors.

Quantum will follow the same trajectory.

The time to prepare is before the technology fully matures, when systems are smaller, the stakes are lower, and expertise can be built gradually.

[Image] GPUs powered the AI boom after a decade of groundwork, with the more prepared operations positioned to capitalize on this new AI-centric era; quantum computing is on a similar path – Q-CTRL.

Prepare for quantum integration

For many outside the quantum field, integration is often perceived as simply adding another accelerator card or extending a cloud application programming interface (API). In reality, bringing quantum into an enterprise or high-performance computing (HPC) environment is a more nuanced transformation. The fundamental building blocks of the technology are different, with completely separate systems running a new workload paradigm: the quantum circuit. This requires preparation across multiple dimensions and, if rushed, can quickly spiral into a longer, more complex, and more expensive process. One basis of the argument to act now rather than wait is that gradual onboarding of this technology is more cost-effective and leaves time for capabilities to mature adequately across several areas:

  1. Build familiarity with new hardware: Unlike central processing units (CPUs) and GPUs, quantum processing units (QPUs) often require infrastructure that differs from the standard cooling used today; for instance, specialized cryogenics or ultra-low-noise analog electronics. Data centers must learn how to house and support QPUs well before they can be deployed productively.
  2. Adopt continuous infrastructure and resource management: Quantum systems must be orchestrated alongside classical compute and held to the same reliability, availability, and serviceability standards. The challenge is that today they require constant calibration and tuning to preserve hardware performance. Learning these practices on today’s smaller machines and deploying emerging tools for automation is essential preparation for the arrival of larger systems.
  3. Embrace hybrid workloads: Future workloads will involve both classical and quantum components, which must be orchestrated together. This hybrid model will bring new requirements that span diverse resources. To harness quantum, operators must adapt workflows for hybrid jobs that include error suppression, circuit optimization, and feedback loops between quantum and classical processing resources (a minimal sketch of such a feedback loop follows this list). For example, in a recent collaboration with NVIDIA, Q-CTRL demonstrated GPU-accelerated graph algorithms that both scale quantum compilation and enhance classical machine learning performance, highlighting how quantum innovation can simultaneously unlock new value in classical computing. Developing this capability early ensures that quantum can be embedded into high-value applications.
  4. Ready human capital: The emergence of new tools for true autonomy and abstraction of quantum hardware is removing the burden on HPC and data-center operators to be experts in quantum physics. Building expertise at the interface of quantum and classical computing, so that staff can manage and oversee quantum systems as a new hardware class and interpret the outputs from the various software layers, can give a massive strategic advantage. This fluency only comes from educational preparation and hands-on engagement. Self-service educational tools, like Black Opal by Q-CTRL, can give personnel the technical context they need to understand the quantum technology space. Then, giving those employees access to real systems to manage and run circuits is the best way to reinforce that learning and build comfort with quantum computers.
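To make the hybrid model in point 3 concrete, here is a minimal sketch of a classical-quantum feedback loop in Python. The `run_parameterized_circuit` function is a hypothetical placeholder for a vendor SDK or cloud API call that submits a parameterized circuit to a QPU and returns an expectation value; it is stubbed here with a toy classical cost landscape so the loop runs end to end without hardware.

```python
"""Minimal sketch of a hybrid classical-quantum feedback loop.

`run_parameterized_circuit` is a hypothetical stand-in for a real QPU
submission; it is stubbed with a toy cost landscape so the loop is runnable.
"""
import math
import random


def run_parameterized_circuit(params: list[float], shots: int = 1000) -> float:
    """Stand-in for a QPU call: returns a noisy cost estimate.

    The toy landscape has a minimum value of -2; the added noise mimics the
    statistical uncertainty of a finite number of measurement shots.
    """
    ideal = -(math.cos(params[0]) ** 2 + math.cos(params[1]) ** 2)
    shot_noise = random.gauss(0.0, 1.0 / math.sqrt(shots))
    return ideal + shot_noise


def hybrid_optimize(n_iters: int = 200, step: float = 0.1) -> tuple[list[float], float]:
    """Classical outer loop: propose parameters, query the 'QPU', keep improvements."""
    params = [random.uniform(-math.pi, math.pi) for _ in range(2)]
    best = run_parameterized_circuit(params)
    for _ in range(n_iters):
        candidate = [p + random.gauss(0.0, step) for p in params]
        value = run_parameterized_circuit(candidate)
        if value < best:  # feedback: the classical optimizer reacts to quantum output
            params, best = candidate, value
    return params, best


if __name__ == "__main__":
    best_params, best_value = hybrid_optimize()
    print(f"best parameters: {best_params}, estimated cost: {best_value:.3f}")
```

The same pattern, with the stub replaced by real circuit execution and a stronger optimizer, underpins the hybrid workloads described above.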

Turn preparation into a competitive advantage

Most organizations vividly remember the ChatGPT moment and the scramble to respond and adapt that came across enterprises in its wake. Acting early gives a chance to learn from that experience and be on the front foot as quantum advantage arrives.

Investing in readiness today reduces both risk and cost. By spreading integration work over time, organizations avoid the disruption and price premium of a sudden adoption push once the full enterprise value of quantum computing is achieved. Budget holders know that rushed, unplanned programs often exceed forecasts and erode margins. Smaller projects with clear deliverables can be managed within existing budgets and allow lessons to be learned incrementally, lowering both financial exposure and operational risk. For decision-makers, this creates a predictable investment profile rather than a costly “big bang” rollout.

Early engagement also builds skills at a fraction of the future cost. Recruiting or retraining talent under pressure once the market overheats will be significantly more expensive. The AI sector offers a clear precedent: organizations that built expertise five years ago did so at a fraction of today’s salaries, while late adopters are now struggling with inflated compensation and scarce talent. Quantum will follow the same trajectory, making today’s training and software adoption both a technical and a financial advantage.

Finally, preparation positions data centers to capture new revenue streams when competitive quantum workloads emerge. Studies predict the quantum applications market will grow at a compound annual growth rate (CAGR) of 23.5 percent, far outpacing the projected HPC CAGR of 7.2 percent. These applications need to run somewhere, and facilities offering quantum capabilities first will attract premium enterprise and research customers and secure long-term relationships. Those who wait risk being left behind as competitors set the terms of the market. For decision-makers, the choice is clear: early preparation is the path to durable competitive advantage.

Early action creates the pathway to scale

Decision makers determined to act now will naturally ask, “Where do I start?” Preparing early puts time on your side and makes that question much more manageable. Early projects can be scoped with limited disruption, using small systems and niche workloads as ideal testbeds for building readiness and skills that will scale with the technology. Here are some examples of initial projects that can serve as an easy on-ramp:

  • Use the hardware procurement process to learn the intricacies of quantum systems: Commercially viable quantum systems are available now, meaning buying a quantum computer is no longer a distant prospect. Options range from starter systems with “base” hardware costing in the low millions of dollars to fully integrated quantum computers at $50M+, enabling organizations to choose the entry point that fits their scope and budget. Vendors like TreQ, QuantWare, and Q-CTRL are working together as ecosystem partners to make system procurement easier so organizations can begin testing how to integrate quantum hardware systems into their environments. By going through an evaluation process, organizations can make sense of the various qubit modalities, system performance metrics, and operational analyses involved with buying a quantum system. Even if you don’t pull the trigger and buy, the process will provide a valuable foundation, preparing you to purchase when the time is right.
  • Install a quantum computer and learn how to manage it: Owning hardware does not in itself guarantee access to a high-performance quantum asset. Quantum processors currently require constant calibration and tuning: a new class of observability within a data center. Fortunately, specialized software, like Boulder Opal from Q-CTRL, is enabling frictionless, end-to-end workflows in which quantum systems can be brought online with the same autonomy and speed that classical compute resources enjoy today. Data centers can start familiarizing themselves with these system requirements and integrating the data streams from their quantum systems into existing asset-management stacks (a simple telemetry sketch follows this list). As the quantum market expands to larger, more expensive systems, there will be significant pressure to keep those high-value assets online and running useful workloads capable of returning high-impact results. Working out the details of that asset management now will pay major dividends on future deployments, where clients will expect assets to be available.
  • Run meaningful algorithms in the NISQ era: Even on today’s noisy intermediate-scale quantum (NISQ) computers with 25–200 qubits, carefully executed circuits combined with error suppression can deliver useful results on commercially relevant tasks. These machines can accelerate sub-tasks of important problems even before they’re large and capable enough to run the most valuable long-term quantum algorithms. Organizations can identify a pilot client or internal workload to run their first quantum algorithms. A common example is the class of optimization problems underpinning supply chain, scheduling, and routing applications. In these problems, a high-value subset of variables can be mapped to a quantum circuit and run on real hardware today with tools like Fire Opal by Q-CTRL (an illustrative optimization sketch follows this list). Fortunately, software abstraction enables end-users with problem-level expertise to run these algorithms with no need to understand the details of hardware performance management or execution. For data centers, the takeaway is clear: start with real business problems, identify subsets that are quantum-ready, and implement hybrid solutions today using advanced virtualization tools that deliver value now and in the future as machines scale.
  • Learn from your peers: Small-scale projects enable operators to test how quantum systems integrate with monitoring, scheduling, and job management, thereby building the institutional knowledge necessary to scale when larger systems are introduced. Across the industry, live efforts with groups such as Jülich Supercomputing Centre, Pawsey Supercomputing Centre, and Oak Ridge National Laboratory demonstrate that quantum is already being embedded into real workflows, not just explored in isolation. These early initiatives are testing and evaluating how quantum fits within existing HPC practices. Teams that learn these details on smaller systems will be best positioned to expand seamlessly as workloads grow.

These early steps can provide real institutional experience at a manageable scale, reducing future risk and smoothing the adoption curve as systems expand.

Act now, and your data center will define the quantum future instead of chasing it

Quantum adoption is best understood as an investment in future competitiveness, just as early GPU adoption positioned leaders to dominate the AI era. The organizations that begin preparing now will spread costs, build capabilities, and be ready to deliver quantum value the moment advantage is realized. Waiting until the technology is fully mature will mean paying more, moving more slowly, and ceding ground to faster competitors. The lesson from history is clear: start building today so your data center is prepared to capture tomorrow’s quantum opportunities.

This article was originally published in Data Center Dynamics under the title "Invest in quantum adoption now to be a winner in the quantum revolution".
