NOVEMBER 14th, 2024
Algorithms Session | Sensors Session
08:30 → 09:00
WELCOME COFFEE
09:00 → 09:10
Introduction
Antoine Michel, EDF
09:10 → 10:30
Quantum algorithms in the NISQ era
09:10 → 09:50
Probing many-body quantum states in large-scale experiments with an upgraded randomized measurements toolbox
Benoit Vermersch, associate professor at the University of Grenoble Alpes, member of the LPMMC
09:50 → 10:30
Fine-tuning in the digital NISQ approach
To be announced soon
10:30 → 11:00
COFFEE BREAK
11:00 → 12:50
Towards industrial applications of quantum algorithms
11:00 → 11:40
Designing quantum algorithms for the electronic structure of strongly correlated systems
Matthieu Saubanère, CNRS Research Fellow, Institut Charles Gerhardt Montpellier (ICGM)
11:40 → 12:20
Quantum computing for optimization problems
Philippe Lacomme, associate professor at the University of Clermont Auvergne (LIMOS)
12:50 → 14:00
LUNCH
14:00 → 15:20
Mitigation of noise: quantum error correction
14:00 → 14:40
From cat qubit to large scale fault-tolerant quantum computer: Alice & Bob’s approach
Élie Gouzien, Alice & Bob
14:40 → 15:20
Algorithmic Fault Tolerance for Fast Quantum Computing
Chen Zhao, Research Scientist, QuEra Computing Inc.
15:20 → 16:00
COFFEE BREAK
16:00 → 17:40
Quantum Machine Learning
16:00 → 16:30
Subspace Preserving Quantum Machine Learning Algorithms
Léo Monbroussou, Naval Group/LIP6
17:00 → 17:40
QML at Google
Jarrod McClean, Google
As quantum training flourishes in French universities, several challenges arise. It is well known that the hype around quantum technologies sometimes undermines the seriousness of quantum science. However, some universities have developed internationally recognized expertise on the topic over decades, which makes France one of the leading nations in the field. This conference aims to decipher the current state of quantum training in France.
The randomized measurements toolbox is now routinely used in quantum processors to estimate fundamental quantum properties, such as entanglement [1,2]. While experimentalists appreciate the simplicity and robustness of such measurement protocols, one challenge for theorists is to devise approaches for overcoming statistical errors using 'cheap' resources that are polynomial in system size. In this context, I will present recent upgrades to the randomized measurements toolbox that address this challenge for large-scale quantum states relevant to quantum simulation. In particular, I will discuss efficient protocols for measuring entanglement [3], performing tensor-network tomography [4], and measuring noise models [5,6], and present experimental demonstrations on large-scale QPUs [4,6]. [1] A. Elben, S. T. Flammia, H.-Y. Huang, R. Kueng, J. Preskill, B. Vermersch, and P. Zoller, The Randomized Measurement Toolbox, Nat. Rev. Phys. 5, 9 (2022). [2] T. I. Andersen et al., Thermalization and Criticality on an Analog-Digital Quantum Simulator, arXiv:2405.17385. [3] B. Vermersch, M. Ljubotina, J. I. Cirac, P. Zoller, M. Serbyn, and L. Piroli, Many-Body Entropies and Entanglement from Polynomially Many Local Measurements, Phys. Rev. X 14, 031035 (2024). [4] M. Votto et al., in preparation. [5] D. Stilck França, L. A. Markovich, V. V. Dobrovitski, A. H. Werner, and J. Borregaard, Efficient and Robust Estimation of Many-Qubit Hamiltonians, Nat. Commun. 15, 311 (2024). [6] M. Joshi et al., in preparation.
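To make the protocol concrete, here is a minimal, self-contained sketch (ours, not the speaker's code) of the basic randomized-measurement purity estimator: random local unitaries are applied to a small state, and the resulting bitstring probabilities are combined into an estimate of Tr(ρ²). The system size and Bell-state example are illustrative choices only.

```python
# Minimal sketch (ours, not the speaker's code) of the basic
# randomized-measurement purity estimator: apply random local unitaries,
# collect bitstring probabilities, combine them into an estimate of Tr(rho^2).
import numpy as np

rng = np.random.default_rng(0)
N = 2                                    # toy system size

def haar_1q():
    """Haar-random single-qubit unitary via QR decomposition."""
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Example state: a Bell pair, whose exact purity is 1.
psi = np.zeros(2**N, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def hamming(s, t):
    return bin(s ^ t).count("1")

estimates = []
for _ in range(200):                     # random unitary settings
    U = haar_1q()
    for _ in range(N - 1):
        U = np.kron(U, haar_1q())
    p = np.real(np.diag(U @ rho @ U.conj().T))  # bitstring probabilities
    # Estimator: Tr(rho^2) = 2^N * E_U[ sum_{s,s'} (-2)^(-D(s,s')) P(s)P(s') ]
    x = sum((-2.0) ** (-hamming(s, t)) * p[s] * p[t]
            for s in range(2**N) for t in range(2**N))
    estimates.append(2**N * x)

print("estimated purity:", np.mean(estimates))  # close to 1 for a pure state
```

On hardware, the exact probabilities would be replaced by finite-shot frequencies (with a bias correction), which is exactly where the statistical-error question addressed in the talk enters.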
Quantum HF lines and harnesses are an important part of QC enabling technologies, because the exponential growth of the number of qubits per cryostat (system scale-up) impacts the overall performance of the system. These microwave links include cable assemblies, connectors, attenuators, board-to-board interconnects, and switches, with specific requirements at each stage of the dilution refrigerator.
To manage QC scale-up, high-density harnesses such as side loaders are required to support hundreds of lines.
Strongly correlated systems pose significant challenges for traditional electronic structure methods due to the complexity of electron-electron interactions. This presentation explores quantum algorithms designed to address these challenges, their limitations, and the strategies developed to overcome them. In particular, we focus on variational approaches, such as the Variational Quantum Eigensolver (VQE), which provide a practical solution for near-term, noisy quantum computers. To enhance efficiency, these algorithms are often combined with embedding techniques such as Density Matrix Embedding Theory (DMET), which reduce the problem by breaking it down into a collection of computationally tractable sub-problems. We will discuss how this combination of quantum algorithms and embedding methods offers a promising pathway for accurately studying strongly correlated systems.
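For readers unfamiliar with VQE, here is a minimal, self-contained sketch of the variational loop (our illustration; the Hamiltonian, ansatz, and optimizer are toy placeholders, not those used in the talk):

```python
# Minimal VQE sketch in plain NumPy/SciPy (illustrative only): a
# hardware-efficient Ry ansatz is optimized classically to approximate
# the ground state of a toy two-qubit Hamiltonian.
import numpy as np
from scipy.optimize import minimize

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# Toy Hamiltonian (placeholder, not a real molecular Hamiltonian).
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def ansatz(params):
    """Two Ry layers with one entangling CNOT in between."""
    psi = np.zeros(4, dtype=complex); psi[0] = 1.0
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(params[2]), ry(params[3])) @ psi
    return psi

def energy(params):
    psi = ansatz(params)
    return np.real(psi.conj() @ H @ psi)

res = minimize(energy, x0=np.random.default_rng(1).uniform(0, np.pi, 4),
               method="COBYLA")
print("VQE energy:", res.fun)
print("exact ground energy:", np.linalg.eigvalsh(H)[0])
```

In the embedding setting described above, a loop of this kind would be run on each DMET fragment rather than on the full system.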
Classical crack-opening simulation methods handle up to 10^12 degrees of freedom to extract a single scalar observable, such as the stress intensity factor. We present an alternative in a relevant 2D case using a parametrized quantum circuit, storing nodal displacements as amplitudes and extracting the relevant observables. The nodal displacements are obtained by minimizing the elastic energy with noise-resistant algorithms. We prove that computing the expectation value of the elastic energy requires only a polylogarithmic number of measurements. We run simulations on Qiskit and Perceval, comparing photonic and transmon-based quantum computers, and thereby set a benchmark for several real-valued Ansätze in noise-free and NISQ contexts.
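The amplitude-encoding idea can be made concrete in a few lines of NumPy (our toy 1D illustration; the stiffness matrix and displacement field below are placeholders, not the authors' 2D crack model):

```python
# Toy sketch of amplitude encoding for elasticity: nodal displacements u
# are stored as the amplitudes of a normalized state |psi> = u/||u||, and
# the elastic energy is recovered from the expectation value <psi|K|psi>.
import numpy as np

n = 8                                    # number of nodes (2^3 amplitudes)
# Toy 1D stiffness matrix (tridiagonal discrete Laplacian, placeholder).
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

u = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))  # some displacement field
norm2 = u @ u
psi = u / np.sqrt(norm2)                 # amplitude-encoded quantum state

# On a quantum device <psi|K|psi> would be estimated by decomposing K into
# Pauli strings and measuring each term; here we evaluate it exactly.
energy_expect = psi @ K @ psi
elastic_energy = 0.5 * norm2 * energy_expect  # E = (1/2) u^T K u
print(elastic_energy, 0.5 * u @ K @ u)        # identical by construction
```

In the variational setting of the abstract, |psi> would be produced by a parametrized circuit whose angles are tuned to minimize this energy.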
Cat qubits are autonomously protected against bit-flips. We will present how this specificity can be exploited to significantly reduce the overhead of error correction, paving a more scalable way toward large-scale fault-tolerant quantum computing. In more detail, we will present how to correct the remaining phase-flips and apply logical gates with a repetition or LDPC code.
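To illustrate why a repetition code suffices once bit-flips are suppressed, here is a minimal Monte-Carlo sketch (ours, not Alice & Bob's tooling) of a distance-d repetition code with perfect syndrome extraction and a majority-vote decoder acting on the remaining phase-flips:

```python
# Monte-Carlo sketch: logical error rate of a distance-d repetition code
# under i.i.d. phase-flip noise, with an ideal majority-vote decoder.
import numpy as np

rng = np.random.default_rng(0)
p = 0.05                                  # physical phase-flip probability

def logical_error_rate(d, shots=200_000):
    flips = rng.random((shots, d)) < p    # i.i.d. phase-flip errors
    # Majority vote fails when more than half of the qubits flipped.
    return np.mean(flips.sum(axis=1) > d // 2)

for d in (3, 5, 7, 9):
    print(f"d={d}: logical phase-flip rate ~ {logical_error_rate(d):.2e}")
```

In this idealized noise model the logical error rate is suppressed exponentially with d for any p below 50%; real devices and faulty syndrome extraction lower that threshold considerably.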
Fast, reliable logical operations are essential for the realization of useful quantum computers, as they are required to implement practical quantum algorithms at large scale. By redundantly encoding logical qubits into many physical qubits and using syndrome measurements to detect and subsequently correct errors, one can achieve very low logical error rates. However, for most practical quantum error-correcting (QEC) codes such as the surface code, it is generally believed that, due to syndrome extraction errors, multiple extraction rounds (on the order of the code distance d) are required for fault-tolerant computation. We show that, contrary to this common belief, fault-tolerant logical operations can be performed with constant time overhead for a broad class of QEC codes, including the surface code with magic state inputs and feed-forward operations, to achieve “algorithmic fault tolerance”. Through the combination of transversal operations and novel strategies for correlated decoding, despite only having access to partial syndrome information, we prove that the deviation from the ideal measurement result distribution can be made exponentially small in the code distance. We supplement this proof with circuit-level simulations in a range of relevant settings, demonstrating the fault tolerance and competitive performance of our approach. Our work sheds new light on the theory of quantum fault tolerance, potentially reducing the space-time cost of practical fault-tolerant quantum computation by orders of magnitude.
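As a minimal illustration of the transversal operations mentioned above (our toy example, far simpler than the codes and decoders considered in the talk), the sketch below verifies that a pairwise physical CNOT between two 3-qubit repetition-code blocks implements a logical CNOT without spreading errors within a block:

```python
# Transversal CNOT on two 3-qubit repetition-code blocks, where
# |0_L> = |000> and |1_L> = |111>: applying CNOTs pairwise between the
# blocks acts as a logical CNOT on the encoded qubits.
import numpy as np

def cnot(n, c, t):
    """CNOT on an n-qubit register, control c, target t (big-endian)."""
    U = np.zeros((2**n, 2**n))
    for i in range(2**n):
        j = i ^ (1 << (n - 1 - t)) if (i >> (n - 1 - c)) & 1 else i
        U[j, i] = 1.0
    return U

n = 6                                      # two 3-qubit code blocks
transversal = cnot(n, 0, 3) @ cnot(n, 1, 4) @ cnot(n, 2, 5)

def logical(bits):                         # |b1_L b2_L> as a basis state
    return int("".join(3 * b for b in bits), 2)

for b1 in "01":
    for b2 in "01":
        out = transversal[:, logical(b1 + b2)].argmax()
        print(f"|{b1}_L {b2}_L> -> |{out:06b}>")  # target flips iff b1 = 1
```

Each physical CNOT touches only one qubit per block, which is the fault-tolerance property the abstract's constant-time-overhead result builds on.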
Current quantum devices have serious constraints that limit the size of the circuits. Variational quantum algorithms (VQAs), which use a classical optimizer to train a parametrized quantum circuit, have emerged as a leading strategy to address these constraints and to design powerful quantum algorithms. Nevertheless, challenges remain, including the trainability, accuracy, and efficiency of VQAs. Subspace-preserving quantum circuits are a class of quantum algorithms that, by relying on symmetries in the computation, can offer theoretical guarantees on their training. These models are sub-universal and cannot offer an exponential advantage. In this talk, we will discuss how to use these methods to mimic classical neural network architectures, and how to seek potential advantages on particular quantum hardware.
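A concrete example of such a symmetry is Hamming-weight preservation, as realized by circuits built from RBS (reconfigurable beam-splitter) gates. The sketch below (our illustration, assuming the RBS gate as a representative building block) checks that an RBS gate never leaves the Hamming-weight subspace of its input:

```python
# The two-qubit RBS gate rotates only within the Hamming-weight-1 subspace
# span{|01>, |10>}, so circuits built from RBS gates preserve the Hamming
# weight of the input state -- the kind of symmetry subspace-preserving
# QML models exploit for trainability guarantees.
import numpy as np

def rbs(theta):
    c, s = np.cos(theta), np.sin(theta)
    # Basis ordering |00>, |01>, |10>, |11>.
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]], dtype=complex)

# Start in |10> (Hamming weight 1) and apply an RBS rotation.
psi = np.zeros(4, dtype=complex); psi[2] = 1.0
out = rbs(0.7) @ psi

for idx, amp in enumerate(out):
    if abs(amp) > 1e-12:
        print(f"|{idx:02b}>: {amp.real:+.3f}")  # only weight-1 basis states
```

Restricting the dynamics to such subspaces is what makes the models sub-universal, but also what enables the training guarantees discussed in the talk.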
Reservoir Computing (RC) is a machine learning paradigm with applications in time series forecasting and other memory-based problems [1]. The idea is to inject time series data collected from a source system into a physical system, let the system react and evolve according to its natural dynamics, and use this behaviour to make predictions. Several proposals to use quantum systems as physical reservoirs have recently attracted attention, as the compact nature of RC lends itself well to NISQ implementations [2, 3, 4]. Furthermore, the fact that no parameter updating is needed circumvents the barren plateau problem commonly encountered in variational quantum circuits. An important property of a machine learning model is its ability to generalise to unseen data; a bound on the generalisation error is also called a risk bound. We establish risk bounds on subclasses of the universal quantum reservoir classes introduced in [7] and [8]. Both classes employ a multivariate polynomial readout, which is used to prove universality. We find that the risk bound scales very unfavourably in the number of qubits when using a class of multivariate polynomial readouts. In other words, our bound suggests that the guarantee on the ability of the above classes to perform well on unseen data deteriorates rapidly as the number of qubits increases (see the sketch after the references below).
[1] F. M. Bianchi, S. Scardapane, S. Lokse, and R. Jenssen, Reservoir computing approaches for representation and classification of multivariate time series, IEEE Transactions on Neural Networks and Learning Systems 32(5), 2021.
[2] K. Fujii and K. Nakajima, Harnessing disordered ensemble quantum dynamics for machine learning, Phys. Rev. Applied 8(2), 2016.
[3] Y. Suzuki, Q. Gao, K. C. Pradel, K. Yasuoka, and N. Yamamoto, Natural quantum reservoir computing for temporal information processing, Scientific Reports 12(1), 2022.
[4] T. Yasuda, Y. Suzuki, T. Kubota, K. Nakajima, Q. Gao, W. Zhang, S. Shimono, H. I. Nurdin, and N. Yamamoto, Quantum reservoir computing with repeated measurements on superconducting devices, arXiv preprint, 2022.
[7] J. Chen and H. I. Nurdin, Learning nonlinear input-output maps with dissipative quantum systems, Quantum Information Processing 18(7), 2019.
[8] J. Chen, H. I. Nurdin, and N. Yamamoto, Temporal information processing on noisy quantum computers, Phys. Rev. Applied 14(2), 2020.
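As a toy, end-to-end illustration of the reservoir pipeline described in the abstract (our sketch, not the universal classes of [7] or [8], and with a plain linear rather than polynomial readout): a fixed random unitary serves as the reservoir, inputs enter as single-qubit rotations, Pauli-Z expectation values are the features, and only a ridge-regression readout is trained.

```python
# Toy quantum reservoir: fixed random unitary dynamics, inputs injected as
# rotations on one qubit, Z-expectations as features, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
N = 4                                     # reservoir qubits
dim = 2**N

def haar_unitary(d):
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U = haar_unitary(dim)                     # fixed reservoir dynamics

def z_diag(k):
    """Diagonal of the Pauli-Z operator on qubit k (big-endian)."""
    return np.array([1 - 2 * ((i >> (N - 1 - k)) & 1) for i in range(dim)])

def run_reservoir(inputs):
    feats = []
    psi = np.zeros(dim, dtype=complex); psi[0] = 1.0
    for u_t in inputs:
        # Inject the input as a rotation angle on qubit 0: Ry(u_t * pi).
        c, s = np.cos(u_t * np.pi / 2), np.sin(u_t * np.pi / 2)
        ry = np.kron(np.array([[c, -s], [s, c]]), np.eye(dim // 2))
        psi = U @ (ry @ psi)
        probs = np.abs(psi) ** 2
        feats.append([probs @ z_diag(k) for k in range(N)])
    return np.array(feats)

# Task: recall the input from 2 steps ago (a short-term-memory benchmark).
u = rng.uniform(-1, 1, 400)
X, y = run_reservoir(u)[2:], u[:-2]

X = np.hstack([X, np.ones((len(X), 1))])  # bias term
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
print("train MSE:", np.mean((X @ w - y) ** 2))
```

Because nothing inside the quantum part is trained, the barren-plateau issue mentioned above never arises; the generalisation question studied in the talk concerns the readout class instead.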
The Qaptiva platform from Eviden enables the deployment of a complete quantum computing environment on an HPC cluster, with the integration of quantum processors (QPUs). We will review recent deployments in national computing centers and discuss the main challenges of scaling up.
Our presentation will address the challenge of building a SaaS platform that offers quantum algorithms. Simply put, we face three key challenges in scaling up:
The challenge of market relevance, in a landscape where hardware maturity has not yet been achieved.
The challenge of skills, both in theoretical aspects (mathematics, data science, and quantum) and in navigating the diverse technological offerings, each with its own development environment.
The challenge of industrialization, as we must provide an “as-a-service” platform capable of functioning in “production”, even though most quantum capabilities currently do not offer service-level guarantees.
It is a difficult and ambitious but very exciting challenge, and we have strategies to address each of these aspects.
The integration of QC within HPC compute centers is under way, making QC a new compute paradigm. QC offers ways to address previously unreachable problems, such as NP-hard problems. On the other hand, integrating QC in HPC is quite challenging and requires the development of new middleware layers.
When considering QC, challenges appear in several domains, involving system-oriented features as well as high-level libraries providing building blocks for end users to build HPC/QC-ready applications. To address this integration, the middleware should be structured and its interfaces should be defined.