Programme
June 5, 2024
Quantum technologies initiatives from the European Commission
Oscar DIEZ, European Commission, Head of Sector Quantum Computing
EuroHPC – Hybrid HPC-QC from a policy maker perspective
René CHATWELL, EuroHPC
The Department of Homeland Security: Developing a Quantum State of Mind
Amy HENNINGER, DHS S&T, Senior Advisor and Branch Chief for Advanced Computing
Comparing discrete optimization solvers – how to make a fair comparison
Franck PHILLIPSON, NATO
NATO has initiated a series of working groups focusing on the burgeoning realm of quantum applications, seeking to harness the transformative potential of quantum technologies across various domains. A prominent initiative among these explores quantum computing applications for situational awareness, a domain where rapid, optimal decision-making based on sensor data is paramount. I will present this working group and its targets.
One of the group's objectives is to demonstrate quantum speedup, and its other benefits, within the realm of situational awareness. Through rigorous experimentation and analysis, the group seeks to showcase the tangible advantages of quantum computing methods for processing sensor information and supporting real-time decision-making in complex, dynamic operational environments.
Of particular interest is combinatorial optimization, a fundamental aspect of situational awareness operations. I will dive into what I define as a fair comparison of the performance of combined (quantum) hardware and algorithms in the field of combinatorial optimization.
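A widely used yardstick for such comparisons is time-to-solution: the expected wall-clock time for a solver to reach the target solution quality at least once with 99% confidence, which puts fast-but-unreliable and slow-but-reliable solvers on a common footing. A minimal sketch (the function name and numbers are illustrative, not taken from the talk):

```python
import math

def time_to_solution(t_run: float, p_success: float, target: float = 0.99) -> float:
    """Expected total runtime to find the optimum at least once with
    probability `target`, given that a single run takes `t_run` seconds
    and succeeds with probability `p_success`."""
    if p_success >= target:
        return t_run  # a single run already suffices
    repeats = math.log(1.0 - target) / math.log(1.0 - p_success)
    return t_run * repeats

# A slow solver with a high success rate vs a fast solver with a low one:
tts_slow = time_to_solution(t_run=10.0, p_success=0.90)
tts_fast = time_to_solution(t_run=0.1, p_success=0.01)
```

Note that the nominally "fast" solver can end up with the larger time-to-solution once its low per-run success probability is accounted for, which is precisely why raw runtimes alone do not make a fair comparison.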
Benchmarking quantum computers by error correction syndrome measurements
Muhammad USMAN, CSIRO
With quantum computers rapidly approaching qualities and scales suitable for useful applications, it is important to develop new benchmarking methods that evaluate their performance in the presence of noise or errors. In this work, we have assessed the performance of IBM quantum computers through heavy-hexagon code syndrome measurements with increasing circuit sizes up to 23 qubits, against the error assumptions underpinning code threshold calculations. Data from 16 repeated syndrome measurement cycles was found to be inconsistent with a uniform depolarizing noise model, favouring instead biased and inhomogeneous noise models. These results highlight the non-trivial structure which may be present in the noise of quantum error correction circuits, revealed by operator measurement statistics, and support the development of noise-tailored codes and decoders. Our work provides crucial information about noise in NISQ-era quantum computers, which will pave the way for future implementation of fault-tolerant applications.
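As a cartoon of the kind of test involved (the numbers below are hypothetical, and the talk's actual heavy-hexagon analysis is far more detailed), one can ask whether per-stabilizer syndrome flip rates spread more widely across stabilizers than binomial sampling noise under a uniform noise model would allow:

```python
import statistics

def uniformity_check(flip_rates, shots):
    """Crude consistency check of per-stabilizer syndrome flip rates
    against a uniform noise model: compare the spread observed across
    stabilizers with the binomial sampling noise expected if every
    stabilizer shared the mean rate."""
    mean = statistics.fmean(flip_rates)
    observed_std = statistics.pstdev(flip_rates)
    # Standard error of a rate estimated from `shots` binary samples:
    expected_std = (mean * (1.0 - mean) / shots) ** 0.5
    return observed_std, expected_std

# Hypothetical flip rates from repeated syndrome cycles on 8 stabilizers:
rates = [0.021, 0.019, 0.055, 0.020, 0.048, 0.022, 0.018, 0.051]
obs_std, exp_std = uniformity_check(rates, shots=4000)
# obs_std >> exp_std points to inhomogeneous (stabilizer-dependent) noise.
```

A spread far exceeding the sampling noise rules out a single uniform error rate, the same qualitative conclusion the abstract draws from the IBM data.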
I will review the strengths and weaknesses of leading quantum benchmarking methods, comparing randomized component-level and system-level approaches with application-level approaches, and then describe a new universal approach to benchmarking that unifies the two, overcomes existing limitations and paves a path to a standard benchmarking methodology applicable from current NISQ to future FT-QEC capabilities across all quantum hardware platforms.
Watch the first part of the talks
What we can learn from applications for the development of quantum computing – the German perspective
Jeanette LORENZ, Fraunhofer IKS
With the increasing availability of quantum computers, still limited in qubit count and quality, we understand better and better which applications might or might not profit from quantum computing.
Studying the potential of quantum computing for applications, in turn, guides the further development of quantum hardware and software. This is further emphasized by the increasing activity around application-centric benchmarks.
What algorithms should we study with 100 qubits and 1M logical gates?
Alexandru PALER, Aalto University, PI on DARPA projects
To co-design algorithms with hundreds of qubits and millions of gates, one should start from the following research questions related to the execution of simpler protocols: a) how are injection protocols reflected in the decoding of correlated errors? b) do logical qubits suffer from novel or unexpected types of errors, and if so, what is the effect of these errors on the structure of the fault-tolerance compilation primitives? c) what logical cycle times are to be expected from the underlying architecture, and how much improvement is necessary to lower the resource counts? In the first part of the talk, we will present a realistic software stack that is already available and can be used to automate the search for and co-design of algorithms. Answering these research questions requires new models, methods and (software) tools to be researched and implemented; these aspects are discussed in the second part of the talk. The methods and results presented herein were developed partially within projects (on which the speaker is a PI) funded by the DARPA Quantum Benchmarking program, QuantERA and Google.
Perspectives from IQM
Xavier GEOFFRET, IQM
Towards large-scale quantum combinatorial optimization solvers with few qubits
Leandro AOLITA, United Arab Emirates
Watch the second part of the talks
JHPC quantum project for Quantum-supercomputer hybrid computing platform
Mitsuhisa SATO, RIKEN, Japan
As the number of qubits in advanced quantum computers grows beyond 100, demand for the integration of quantum computers and HPC is steadily increasing. RIKEN R-CCS has been working on several projects to build a platform that integrates quantum computers with HPC. Recently, we started a new project, the JHPC-quantum project, funded by NEDO under the title “Research and Development of a quantum-supercomputer hybrid platform for exploration of uncharted computable capabilities”. In this project, we will design and build a quantum-supercomputer hybrid computing platform that integrates different kinds of quantum computers, from IBM and Quantinuum, with supercomputers including Fugaku. This talk presents an overview of the JHPC-quantum project.
QUARK – The BMW Group’s Perspective on Application-driven Benchmarking
Marvin ERDMANN, BMW Group
The Quantum Computing Application Benchmarking (QUARK) framework is an open-source project initiated in 2021 by the BMW Group. Its modular architecture facilitates the integration of new application kernels and the addition of modules on deeper layers, such as the classical or quantum hardware used for benchmarking runs. A growing community of developers from academia and industry contributes to the framework and uses QUARK for research projects and to evaluate the performance of quantum algorithms and hardware for combinatorial optimization and quantum machine learning use cases. Learn more about QUARK’s application-oriented structure and how it is used for state-of-the-art quantum computing benchmarks!
Benchmarking Energy Consumption of Quantum Computers
Jose MIRALLES, CUCO
Today's HPC centers are reaching the limits of their computing capacity, incurring huge overheads in both cost and energy demand. Quantum computation's advantage is typically assessed by computational runtime; however, the concept of quantum advantage encompasses dimensions beyond speed, notably energy consumption. In this talk we will discuss our recent efforts in benchmarking the energy consumption of both HPC systems and superconducting quantum computers on comparable tasks, aiming to account for every relevant aspect of the execution process on both systems for a complete comparison. Part of these results were developed in the context of Qilimanjaro Quantum Tech and Barcelona Supercomputing Center's contribution to the Spanish CUCO project (cuco.tech/en), which aims to evaluate the capabilities of quantum computing in strategic industries, including its role on the path to more sustainable heavy computation.
Benchmarking photonic quantum devices
Rawad MEZHER, Quandela
For near-term quantum devices, an important challenge is to develop methods to certify that noise levels are low enough to allow potentially useful applications to be carried out. In this presentation, I will discuss several such methods tailored to photonic quantum devices, and designed to assess generic performance, as well as performance of specific gates.
Perspectives from Pasqal
Louis-Paul HENRY, Pasqal
Recommendations on Quantum Computing from GIFAS Report on quantum technologies for Aerospace-Defense applications (Thales; member of GIFAS R&D Commission)
Frederic BARBARESCO, Thales, member of GIFAS-R&D Commission
This report was written at the request of the GIFAS R&D Commission by a working group dedicated to quantum technologies, made up of representatives of the main French industry players in the field and coordinated by ONERA. The report identifies the major contributions of quantum technologies, the targeted applications in the aerospace-defense sector and the associated time horizons, as well as recommendations for the sector and for the national quantum strategy. This working group, led by ONERA with 14 GIFAS companies, has elaborated recommendations on sensors, communication, computing and enabling technologies. We will present here the main recommendations on quantum computing.
Assessing the performance of Dissipative Cat Qubits
Paul MAGNARD, Quantum Physicist, and Cécile PERRAULT, Alice&Bob
The massive hardware overhead required to implement quantum error correction remains a major roadblock on the way to a fault-tolerant, universal quantum computer. Bosonic codes are a promising approach to significantly decrease this overhead by implementing a first layer of error correction at the physical-qubit level.
In particular, dissipative cat qubits suppress bit-flip errors exponentially, over orders of magnitude, while increasing phase-flip errors only linearly. The average fidelity of a cat qubit is therefore degraded, yet it is estimated that this approach would allow Shor's algorithm to run with 60 times fewer physical qubits than the surface code requires. Cat qubits thus seem both worse and better than non-biased "standard" qubits. How can we quantitatively assess the advantage of such biased noise?
In this talk, we will show a method to answer this question. We present a simple model explaining how a few key cat-qubit operation error quantities affect the logical error of a repetition code built from those operations, and how to measure them. This allows us to quantitatively benchmark the quality of noise-biased cat qubits and compare it to non-biased qubits via the resulting logical error in a quantum error correction code. We will present current experimental benchmark results and discuss the perspectives and roadblocks towards building a cat-qubit-based logical qubit below threshold.
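A toy version of such a model (all rates and prefactors below are illustrative placeholders, not measured Alice&Bob figures): a distance-d repetition code corrects phase flips by majority vote while leaving bit flips unprotected, so the cat qubit's exponential bit-flip suppression must carry the uncorrected error type on its own.

```python
import math
from math import comb

def cat_qubit_logical_error(alpha_sq: float, d: int,
                            k_bitflip: float = 1e-2, gamma: float = 2.0,
                            k_phaseflip: float = 1e-3) -> float:
    """Toy model of a distance-d phase-flip repetition code built from
    dissipative cat qubits: bit flips are suppressed exponentially in the
    mean photon number |alpha|^2, phase flips grow linearly with it, and
    the repetition code then corrects phase flips by majority vote.
    All constants are illustrative, not measured values."""
    p_x = k_bitflip * math.exp(-gamma * alpha_sq)  # physical bit-flip rate
    p_z = k_phaseflip * alpha_sq                   # physical phase-flip rate
    # Majority vote fails when more than half of the d qubits phase-flip:
    p_z_logical = sum(comb(d, k) * p_z**k * (1 - p_z)**(d - k)
                      for k in range((d + 1) // 2, d + 1))
    # Bit flips are not corrected; any of the d qubits flipping is fatal:
    p_x_logical = 1 - (1 - p_x)**d
    return p_x_logical + p_z_logical
```

In this regime, increasing the code distance drives the logical phase-flip error down while the exponentially suppressed bit-flip error grows only linearly with d, which is the quantitative sense in which the noise bias pays off.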
Benchmarking quantum devices with the randomized measurement toolbox
Benoit VERMERSCH, Université Grenoble Alpes
I will review theoretical and experimental progress in developing protocols that verify quantum devices based on randomized measurements [1]. I will show that these protocols are now routinely used on various quantum processing platforms, giving in particular faithful estimations of quantum state fidelities, entanglement quantifiers, etc. I will also discuss our current efforts in developing high-quality and flexible codes for post-processing randomized measurements.
[1] A. Elben, S. T. Flammia, H.-Y. Huang, R. Kueng, J. Preskill, B. Vermersch and P. Zoller, "The randomized measurement toolbox", Nature Reviews Physics 5, 9–24 (2023).
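As an illustration of one protocol from this toolbox, the purity Tr(ρ²) of an N-qubit state can be estimated from measurements in Haar-random local bases via Tr(ρ²) = 2^N Σ_{s,s'} (−2)^{−D(s,s')} E_U[P_U(s)P_U(s')], where D is the Hamming distance between bit strings s and s'. A small numerical sketch, using exact Born probabilities in place of finite measurement shots:

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(n=2):
    """Haar-random n x n unitary via QR decomposition of a Ginibre matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # fix column phases for the Haar measure

def purity_randomized(rho, n_qubits, n_unitaries=200):
    """Estimate Tr(rho^2) from measurements in random local bases,
    with exact Born probabilities standing in for measurement shots."""
    dim = 2 ** n_qubits
    # (-2)^(-Hamming distance) weights between all pairs of bit strings:
    ham = np.array([[bin(s ^ t).count("1") for t in range(dim)]
                    for s in range(dim)])
    weights = (-0.5) ** ham
    est = 0.0
    for _ in range(n_unitaries):
        u = haar_unitary(2)
        for _ in range(n_qubits - 1):
            u = np.kron(u, haar_unitary(2))  # independent unitary per qubit
        p = np.real(np.diag(u @ rho @ u.conj().T))  # Born probabilities
        est += dim * (p @ weights @ p)
    return est / n_unitaries

# Two-qubit Bell state: the estimate should hover near purity 1.
bell = np.zeros(4)
bell[0] = bell[3] = 2 ** -0.5
rho_pure = np.outer(bell, bell)
est = purity_randomized(rho_pure, n_qubits=2)
```

With finite measurement shots per unitary, as on real hardware, an extra bias correction is needed; the exact-probability version above only illustrates the averaging over random bases.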
Benchmarking the Quantinuum Stack
Daniel MILLS, QUANTINUUM, Research Scientist
We present results of, and motivations for, some of the benchmarks conducted at different layers of the Quantinuum quantum computer stack. This includes component-level benchmarks, such as gate fidelity, and benchmarks of our enabling technology, such as the TKET compiler. We discuss results of the holistic full-stack benchmarks performed, including quantum volume, and detail the standardisation efforts we have participated in and envision for the future.
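The quantum volume pass criterion, in one common formulation, requires the mean heavy-output probability across random model circuits to exceed 2/3 with two-sigma confidence. A minimal sketch of that decision rule (the data values are hypothetical):

```python
import math

def quantum_volume_pass(heavy_counts, shots_per_circuit):
    """Check a common form of the Quantum Volume pass criterion: the mean
    heavy-output probability across circuits must exceed 2/3 with
    two-sigma confidence, with sigma estimated from the sample spread."""
    n = len(heavy_counts)
    probs = [h / shots_per_circuit for h in heavy_counts]
    mean = sum(probs) / n
    # Sample standard error of the mean heavy-output probability:
    var = sum((p - mean) ** 2 for p in probs) / (n - 1)
    sigma = math.sqrt(var / n)
    return mean - 2 * sigma > 2 / 3

# Hypothetical run: 96 circuits at 200 shots each, ~77% heavy outputs.
counts = [152, 148, 161, 155, 149, 158, 153, 150] * 12
ok = quantum_volume_pass(counts, shots_per_circuit=200)
```

The ideal heavy-output probability for large random circuits is about 0.85, so clearing the 2/3 bar with confidence certifies the device at that circuit width and depth.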
Measuring Performance for Fault Tolerant Quantum Computation
Wim van DAM, Microsoft
To achieve quantum advantage of scientific and commercial importance we need fault tolerant quantum computers that can be scaled to hundreds and ultimately thousands of logical qubits. We will look at how we can estimate the quantum resources that are needed for several applications and how we measure our progress towards building a quantum supercomputer.
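One standard way to quantify such progress is a back-of-the-envelope surface-code resource estimate. The sketch below uses generic textbook scaling constants, not Microsoft's actual resource estimator:

```python
def physical_qubits_needed(n_logical, t_count, p_phys=1e-3, p_th=1e-2,
                           a=0.1):
    """Rough surface-code resource estimate: choose the smallest code
    distance d whose per-logical-qubit, per-cycle error rate
    a * (p_phys/p_th)^((d+1)/2) keeps the whole computation
    (n_logical qubits over ~t_count code cycles) within a 1% failure
    budget, then count ~2*d^2 physical qubits per logical qubit.
    Constants follow common textbook scaling, not a vendor estimator."""
    budget = 0.01 / (n_logical * t_count)
    d = 3
    while a * (p_phys / p_th) ** ((d + 1) / 2) > budget:
        d += 2  # surface-code distances are odd
    return n_logical * 2 * d * d, d

# 100 logical qubits, ~10^6 cycles, optimistic 1e-4 physical error rate:
qubits, d = physical_qubits_needed(100, 10**6, p_phys=1e-4)
```

Pushing the physical error rate down by an order of magnitude shrinks the required code distance, and hence the physical-qubit count, substantially, which is why hardware quality and scale both enter the progress metric.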