JUNE 24th, 2025
08:30 → 09:00
WELCOME COFFEE
09:00 → 09:15
WELCOME and introduction
A – Stakeholder needs
09:15 → 10:30
A1 – Industry Needs
09:15 → 09:30
What we need to embrace quantum computing – an expression of user needs
Andréa Le Vot, Crédit Agricole
09:30 → 10:15
Panel of European industry representatives, chaired by Cyril Allouche, Eviden
Etienne Decossin, EDF
Romain Kukla, Naval Group
Jean-Patrick Mascomere, TotalEnergies
Emanuele Marsili, Airbus
10:15 → 11:15
A2 – HPC Center needs
10:15 → 10:30
Needs for Benchmarks towards Utility-scale QC-HPC hybrid computing
Mitsuhisa Sato, Riken
10:30 → 10:55
HPC/QC benchmarks: a happy marriage or a contradictory alliance?
Patrick Carribault and Philippe Deniel, CEA
10:55 → 11:15
Calculating resource overheads for fault-tolerant photonic quantum computing
Boris Boudoncle
11:15 → 11:35
COFFEE BREAK
11:35 → 12:55
A3 – Provider needs (HW, SW…)
11:35 → 11:50
Hamiltonian simulation benchmarked on a trapped-ion quantum computer
Etienne Granet, Quantinuum
11:50 → 12:05
Benchmarking optimisation problems with neutral atoms
Constantin Daylac, Pasqal
12:05 → 12:20
Scalability and universality of quantum benchmarks: perspectives from C12
Chloe Ai, C12
12:20 → 12:35
Benchmarking quantum algorithms with tensor network simulators
Carlos Marimon, Quobly
12:35 → 12:55
Compilation Strategies for Distributed Quantum Architectures
Mathys Rennela, WELINQ
12:55 → 13:55
LUNCH
B – Scientific progress on benchmarking initiatives
13:55 → 15:25
B1 – Low-level hardware Benchmarking
14:25 → 14:45
A Review and Collection of Metrics and Benchmarks for Quantum Computers: definitions, methodologies and software
Deep Lall, National Physical Laboratory
14:45 → 15:05
Benchmarking progress towards quantum utility
Timothy Proctor, Sandia Lab
15:05 → 15:25
Quantum Optimization Benchmark Library – The Intractable Decathlon
Stefan Woerner, IBM
15:25 → 16:05
B2 – Software benchmarking
15:25 → 15:45
Benchmarking quantum error-correcting codes using Monte Carlo sampling
Michael Vasmer, INRIA
15:45 → 16:05
Benchmarking the performance of quantum computing software
Paul Nation, IBM
16:05 → 16:25
COFFEE BREAK
16:25 → 18:35
B3 – Application benchmark initiatives
16:25 → 16:45
Open QBench: An Application-Driven Perspective on Quantum Computing Benchmarks
Konrad Wojciechowski, PCSS
16:45 → 17:05
Hybrid Quantum-Classical Benchmarking – Assessing workflows that combine quantum and classical computation.
Jeannette Lorenz, Fraunhofer IKS
17:05 → 17:25
Noise tailoring for error mitigation and for diagnosing quantum computers
Kyrylo Snizhko, CEA
17:25 → 17:45
Many-body Quantum Score: A scalable benchmark for digital and analog QPUs and first results on a Pasqal device
Harold Erbin, IPHT
17:45 → 18:05
Evaluating the performance of quantum processing units at large width and depth
Jhon Alejandro Montanez-Barrera, Jülich
18:05 → 18:20
Efficient Benchmarking with Provable Guarantees and more
Sami Abdul Sater, INRIA
18:20 → 18:35
Using Quantum Computing to Accelerate Simulation
Prith Banerjee, Ansys
→ 18:35
End of day 1
The Maximum Independent Set (MIS) problem is a fundamental combinatorial optimization task that can be naturally mapped onto the Ising Hamiltonian of neutral atom quantum processors. Given its connection to NP-hard problems and real-world applications, there has been significant experimental interest in exploring quantum advantage for MIS.
Pioneering experiments on King’s Lattice graphs suggested a quadratic speed-up over simulated annealing, but recent benchmarks using state-of-the-art methods found no clear advantage, likely due to the structured nature of the tested instances. In this work, we generate hard instances of unit-disk graphs by leveraging complexity theory results and varying key hardness parameters such as density and treewidth.
For a fixed graph size, we show that increasing these parameters can lead to prohibitive classical runtime increases of several orders of magnitude. We then compare classical and quantum approaches on small instances and find that, at this scale, quantum solutions are slower than classical ones for finding exact solutions.
Based on extended classical benchmarks at larger problem sizes, we estimate that scaling up to a thousand atoms with a 1 kHz repetition rate is a necessary step toward demonstrating a computational advantage with quantum methods.
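The mapping above can be made concrete with a small, self-contained sketch. The instance parameters below are illustrative, and the exhaustive solver merely stands in for the state-of-the-art classical methods mentioned above, whose runtimes explode on the hard instances.

```python
import itertools
import random

random.seed(0)
n, radius = 10, 0.35                     # illustrative instance parameters
pts = [(random.random(), random.random()) for _ in range(n)]

# Unit-disk graph: atoms closer than the (blockade) radius are connected.
edges = {(i, j) for i, j in itertools.combinations(range(n), 2)
         if (pts[i][0] - pts[j][0])**2 + (pts[i][1] - pts[j][1])**2 <= radius**2}

def is_independent(s):
    return all((i, j) not in edges for i, j in itertools.combinations(sorted(s), 2))

# Exhaustive exact solver: fine at n = 10, prohibitive on hard instances.
best = max((s for r in range(n + 1) for s in itertools.combinations(range(n), r)
            if is_independent(s)), key=len)
print(len(best))
```

Density and treewidth, the hardness knobs discussed above, are controlled here simply by `n` and `radius`.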
As quantum processors continue to grow in qubit count and circuit depth, benchmarking the quantum algorithms run on them against classical backends has become increasingly relevant, setting the stage for large-scale quantum experiments.
In this talk I will present the quantum information R&D carried out at Quobly (Grenoble), focusing on the use of tensor networks. In particular, we will analyze how the latest advances in the field can be used to tackle different quantum algorithms, from the simplest to the most challenging. We will emphasize the use of Quimb, and compare different computational costs (time complexities) depending on the chosen strategies and hyperparameters.
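As a minimal illustration of why the choice of contraction strategy matters (shown here in plain NumPy; Quimb automates exactly this kind of path optimization), the same three-tensor network can be contracted in two orders with different costs but identical results:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 2, 32                             # physical / bond dimensions (illustrative)
A = rng.normal(size=(d, D))              # three MPS-like tensors
B = rng.normal(size=(D, d, D))
C = rng.normal(size=(D, d))

# einsum_path reports the estimated cost of a chosen contraction order.
good = np.einsum_path('ia,ajb,bk->ijk', A, B, C, optimize='optimal')
bad = np.einsum_path('ia,ajb,bk->ijk', A, B, C,
                     optimize=['einsum_path', (0, 2), (0, 1)])

# Both orders produce the same tensor; only the cost differs.
T1 = np.einsum('ia,ajb,bk->ijk', A, B, C, optimize=good[0])
T2 = np.einsum('ia,ajb,bk->ijk', A, B, C,
               optimize=['einsum_path', (0, 2), (0, 1)])
print(T1.shape)
```

The second path contracts the two disconnected tensors first (an outer product), which is exactly the kind of choice a good path optimizer avoids.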
We are conducting the JHPC Quantum project to design and build a quantum-supercomputer hybrid computing platform by integrating different kinds of on-premises quantum computers, an IBM superconducting quantum computer and a Quantinuum trapped-ion quantum computer, with several supercomputers including Fugaku. Our recent research activities on utility-scale QC-HPC hybrid computing and QC-HPC hybrid programming models will be presented, followed by some ideas about benchmarking QC-HPC hybrid systems.
Benchmarking and KPI evaluations are major pillars of HPC culture. When it comes to evaluating QC, many new aspects have to be considered.
But when it comes to evaluating a hybrid HPC/QC platform, the situation can become quite complex and lead to potential loopholes.
This talk considers these questions from both HPC and QC perspectives, in order to offer a view on benchmarks for hybrid HPC/QC platforms.
Through recent progress in hardware development, quantum computers have advanced to the point where benchmarking of (heuristic) quantum algorithms at scale is within reach. Particularly in combinatorial optimization — where most algorithms are heuristics — it is key to empirically analyze their performance on hardware and track progress towards quantum advantage.
To this end, we present ten optimization problem classes that are difficult for existing classical algorithms and can (mostly) be linked to practically relevant applications, with the goal of enabling systematic, fair, and comparable benchmarks for quantum optimization methods.
Further, we introduce the Quantum Optimization Benchmark Library (QOBLIB), where the problem instances and solution track records can be found. The individual properties of the problem classes vary in terms of objective and variable type, coefficient ranges, and density. Crucially, they all become challenging for established classical methods already at system sizes ranging from fewer than 100 to, at most, on the order of 100,000 decision variables, allowing them to be approached with today's quantum computers.
We reference the results from state-of-the-art solvers for instances from all problem classes and demonstrate exemplary baseline results obtained with quantum solvers for selected problems.
The baseline results illustrate a standardized form to present benchmarking solutions, which has been designed to ensure comparability of the used methods, reproducibility of the respective results, and trackability of algorithmic and hardware improvements over time.
We encourage the optimization community to explore the performance of available classical or quantum algorithms and hardware platforms with the benchmarking problem instances presented in this work toward demonstrating quantum advantage in optimization.
Paper: https://arxiv.org/abs/2504.03832
Repository: https://git.zib.de/qopt/qoblib-quantum-optimization-benchmarking-library
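As an illustration of the kind of classical baseline such a library records, here is a brute-force solver for a tiny MaxCut-style instance; the instance is invented for the example and is not taken from QOBLIB.

```python
import itertools

# Tiny illustrative instance: a 4-cycle plus one chord (not a QOBLIB instance).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def cut_value(assignment):
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Exhaustive classical baseline: exact at this scale; QOBLIB's instances are
# chosen so that established classical methods already struggle at ~100+ variables.
best = max(itertools.product([0, 1], repeat=n), key=cut_value)
print(cut_value(best))                   # → 4
```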
Quantum simulators and computers promise quantum advantages in certain computational tasks. However, progress in this area is only plausible if, alongside the development of quantum hardware, methods for benchmarking are also developed.
In this brief talk, we will approach the topic from five different perspectives, developing a line of thought around five ‘theses’: 1. Robustness and sample efficiency are key [1-5]. 2. Analog devices are underappreciated [6,7]. 3. Sampling quantum advantages can be directly benchmarked [8]. 4. We must consider the entire food chain in benchmarking [9]. 5. A ‘data-driven’ view of nature leads to surprising results. In the outlook, we will put these lines of thought into perspective.
[1] PRX Quantum 3, 020357 (2022).
[2] Nature Comm. 14, 5039 (2023).
[3] arXiv:2403.04751 (2024).
[4] Phys. Rev. Lett. 133, 020602 (2024).
[5] arXiv:2405.06544 (2024).
[6] Nature Comm. 15, 9595 (2024).
[7] Phys. Rev. Lett. 133, 240604 (2024).
[8] Nature Comm. 16, 106 (2025).
[9] arXiv:2410.18196 (2024).
As quantum computers continue to advance, assessing their computational power requires going beyond low-level benchmarks and utilizing quantum algorithms and applications to properly gauge performance.
Such benchmarks necessarily intertwine the performance of quantum computing hardware with that of the software used to generate the input quantum circuits and postprocess the results.
In this talk, I will discuss our work on benchmarking mainstream software development kits (SDKs) for quantum circuit creation, manipulation, and optimization.
The results highlight the performance differences amongst SDKs over key metrics, and demonstrate how the results obtained from high-level benchmarks cannot be attributed to quantum hardware alone.
Error mitigation (EM) methods are crucial for obtaining reliable results in the realm of noisy intermediate-scale quantum (NISQ) computers, where noise significantly impacts output accuracy. Some EM protocols are particularly efficient for specific types of noise, yet the noise in actual hardware may not align with those types. I will introduce "noise tailoring" (NT), a strategy designed to modify the structure of the noise associated with two-qubit gates through statistical sampling.
I will discuss its application to IBM's quantum computers. While classical emulations predict that the NT+EM result can be up to an order of magnitude more accurate than the result of EM alone, runs on actual quantum computers do not show such an improvement.
A detailed analysis of the above discrepancy leads to insights into the nature of noise on real-life quantum computers. This makes the NT technique an important instrument for diagnosing noise of quantum computers.
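One standard way to reshape gate noise through statistical sampling is Pauli twirling (randomized compiling); whether this matches the speaker's exact protocol is an assumption. The sketch below shows, for a single-qubit coherent over-rotation, how averaging over Pauli conjugations turns the channel into a purely stochastic Pauli channel, i.e. one with a diagonal Pauli transfer matrix:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

theta = 0.3                                  # coherent over-rotation angle (illustrative)
U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

def channel(rho):                            # bare coherent-error channel
    return U @ rho @ U.conj().T

def twirled(rho):                            # channel averaged over Pauli conjugations
    return sum(P @ channel(P @ rho @ P) @ P for P in paulis) / 4

def ptm(lam):                                # Pauli transfer matrix R_ij = Tr[P_i Λ(P_j)]/2
    return np.array([[np.trace(Pi @ lam(Pj)).real / 2 for Pj in paulis]
                     for Pi in paulis])

R_bare, R_tw = ptm(channel), ptm(twirled)
print(np.round(np.diag(R_tw), 3))            # diagonal Pauli channel after twirling
```

The bare channel's transfer matrix has off-diagonal (coherent) terms; the twirled one is diagonal, with the Y and Z components damped by cos θ.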
Quantum computers have now surpassed classical simulation limits, yet noise continues to limit their practical utility. As the field shifts from proof-of-principle demonstrations to early deployments, there is no standard method for meaningfully and scalably comparing heterogeneous quantum hardware.
Existing benchmarks typically focus on gate-level fidelity or constant-depth circuits, offering limited insight into algorithmic performance at depth.
Here we introduce a benchmarking protocol based on the linear ramp quantum approximate optimization algorithm (LR-QAOA), a fixed-parameter, deterministic variant of QAOA.
LR-QAOA quantifies a QPU’s ability to preserve a coherent signal as circuit depth increases, identifying when performance becomes statistically indistinguishable from random sampling.
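A minimal NumPy sketch of the idea, on a single-edge MaxCut instance. The linear-ramp convention and the ramp height `delta` below are illustrative choices and may differ from the speaker's exact parameterization.

```python
import numpy as np

p, delta = 10, 0.6                       # layers and ramp height (illustrative)
gammas = [delta * (k + 1) / p for k in range(p)]   # cost angles ramp up
betas = [delta * (1 - k / p) for k in range(p)]    # mixer angles ramp down

# MaxCut on a single edge: cut value of each basis state |q0 q1>.
cost = np.array([0.0, 1.0, 1.0, 0.0])
X = np.array([[0, 1], [1, 0]])

state = np.full(4, 0.5, dtype=complex)   # |++>
for g, b in zip(gammas, betas):
    state = np.exp(-1j * g * cost) * state           # diagonal cost layer
    M = np.cos(b) * np.eye(2) - 1j * np.sin(b) * X   # e^{-i b X} on each qubit
    state = np.kron(M, M) @ state

approx = (np.abs(state) ** 2) @ cost     # expected cut; optimum is 1, random 0.5
print(round(float(approx), 3))
```

On noiseless simulation the expected cut exceeds the random-sampling value of 0.5; on hardware, the depth at which this signal collapses to 0.5 is exactly what the benchmark measures.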
Computer aided engineering simulation tools are used to design products in different industries.
These simulation tools, while accurate, often require long computing times. In this talk we will describe how we are exploring the use of quantum computing to provide exponential speedups for these CAE simulations.
We will present our work on (1) the quantum Lattice Boltzmann method for fluid simulations, (2) Hamiltonian methods for general CAE simulation, and (3) graph partitioning methods as part of a workflow for LS-DYNA crash simulations.
We propose the Many-body Quantum Score (MBQS), a practical and scalable application-level benchmark protocol designed to evaluate the capabilities of quantum processing units (QPUs) – both gate-based and analog – for simulating many-body quantum dynamics. MBQS quantifies performance by identifying the maximum number of qubits with which a QPU can reliably reproduce correlation functions of the transverse-field Ising model following a specific quantum quench.
In this talk, I will present the MBQS protocol and highlight its design principles, supported by analytical insights, classical simulations, and experimental data. I will also share preliminary results obtained with Ruby, an analog QPU based on Rydberg atoms developed by Pasqal, which is soon to be deployed at the TGCC. These findings demonstrate MBQS’s potential as a robust and informative tool for benchmarking near-term quantum devices for many-body physics.
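For small systems the target correlation functions can be computed exactly, which is how such a benchmark can be validated at low qubit counts. Below is a generic exact-diagonalization sketch of a transverse-field Ising quench; the parameters and observable are illustrative, and the MBQS protocol's specific quench may differ.

```python
import numpy as np
from functools import reduce

n, J, h, t = 4, 1.0, 1.0, 0.8            # illustrative chain size, couplings, time
I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

def op(single, site):                    # embed a one-site operator in the chain
    return reduce(np.kron, [single if i == site else I2 for i in range(n)])

# Transverse-field Ising Hamiltonian with periodic boundaries.
H = (-J * sum(op(Z, i) @ op(Z, (i + 1) % n) for i in range(n))
     - h * sum(op(X, i) for i in range(n)))

psi0 = np.zeros(2 ** n); psi0[0] = 1.0   # quench from the all-up product state
evals, evecs = np.linalg.eigh(H)
psit = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))

czz = (psit.conj() @ op(Z, 0) @ op(Z, 1) @ psit).real   # <Z0 Z1>(t)
print(round(float(czz), 4))
```

Exact diagonalization scales as 2^n, which is precisely why a QPU's maximum reliable qubit count on this task is a meaningful score.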
Distributed quantum architectures hold the promise of shortening the pathway towards scalable quantum architectures.
This talk explores the computational challenges and resource constraints of distributed quantum computing, and details the compilation strategies and benchmarks currently being developed to enable quantum computing on interconnected quantum computers.
What are the current constraints, and what does an industrial player such as a banking group need in order to embrace quantum computing? Seen from a user perspective.
As quantum computers continue to be integrated into high-performance computing (HPC) centers, there is a growing consensus across the community on the need for well-defined, comprehensive, and standardized performance benchmarks.
In this talk, I will share a perspective on quantum benchmarking from the standpoint of an HPC and QC provider. I will introduce the Open QBench suite and toolbox, which aims to advance reliable benchmarking practices by offering common interfaces and methods for metric comparison.
As quantum machines become available, it is important to start building trust across all stakeholders, from component manufacturers to system builders, integrators, and end users, in a way that makes it possible to measure progress on the path to successfully solving practical use cases. To that end, reliable, meaningful, and scalable benchmarks are needed.
In this talk we present a benchmarking protocol that fulfills these characteristics. Originating from a verification framework, we show how cryptographic techniques allow us to obtain rigorous guarantees with precise control over the hypotheses made.
The rapid pace of development in quantum computing technology has sparked a proliferation of benchmarks to assess the performance of quantum computing hardware and software.
However, not all benchmarks are of equal merit. Good benchmarks empower scientists, engineers, programmers and users to understand the power of a computing system, whereas bad benchmarks can misdirect research and inhibit progress.
In this talk, I will discuss how good benchmarks can drive and measure progress towards the long-term goal of useful quantum computations, known as quantum utility.
I will discuss the general concept of capability benchmarking, i.e., benchmarks that measure circuit-execution capabilities and compare those capabilities to the requirements for solving challenge problems.
I will then briefly discuss some techniques for designing specific capability benchmarks, including a technique based on running subcircuits sampled from a full-scale quantum algorithm, and I will touch on how to integrate the cost of error mitigation into benchmarking results.
Although quantum hardware is making significant progress toward both scalability and fault tolerance, it is becoming increasingly clear that quantum computers will also work together with classical computers in the future. Quantum computers will enter as quantum accelerators into more complex quantum-classical workflows, where they are expected to handle complex computational tasks intractable for classical computers.
When looking at the resulting hybrid quantum-classical workflows, it turns out that the quantum and classical parts are very much interconnected, e.g., the shape of the cost landscape created by a quantum variational algorithm may require dedicated classical optimizers. Given this complexity of quantum-classical workflows, the question arises of how their performance could be characterized and benchmarked. This talk will illustrate this with two examples: generic variational algorithms and a concrete quantum-classical convolutional neural network.
The simulation of many-body quantum physics systems is one of the simplest tasks that quantum computers can perform exponentially faster than classical computers, and is expected to be among the earliest applications.
However, benchmarking Hamiltonian simulation on quantum computers raises specific problems. There is indeed generically no classical data to compare with for large and deep enough circuits, and scalable techniques such as mirror circuits tend to be significantly more sensitive to hardware noise than local observables.
We will present a scalable and application-oriented benchmark protocol for Hamiltonian simulation, based on the classical simulability of non-interacting fermionic particles.
We introduce a metric that evaluates the quality of a quantum computer's output as the time a perfect quantum computer would take to detect a bias in that output.
We will present an implementation of the benchmark on a Quantinuum trapped-ion device.
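The classical simulability underlying such a protocol can be sketched as follows: for non-interacting fermions, time evolution acts on an n × n single-particle correlation matrix rather than on the 2^n-dimensional Hilbert space. A generic hopping model is used here for illustration; the speaker's exact construction is not specified in the abstract.

```python
import numpy as np

n, t = 8, 1.3                            # modes and evolution time (illustrative)
h = np.zeros((n, n))                     # single-particle hopping Hamiltonian
for i in range(n - 1):
    h[i, i + 1] = h[i + 1, i] = -1.0

evals, V = np.linalg.eigh(h)
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T   # e^{-i h t}, an n x n matrix

C0 = np.diag([1.0] * (n // 2) + [0.0] * (n // 2))       # half-filled Fermi sea
Ct = U @ C0 @ U.conj().T                 # correlation matrix C_ij(t) = <c_i† c_j>

print(round(Ct.trace().real, 6))         # → 4.0 (particle number conserved)
```

Every step is polynomial in n, so local observables of the evolved state can be checked classically even at qubit counts where generic circuits cannot.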