NOVEMBER 14th, 2024
Algorithms Session | Sensors Session
09:30 → 10:30
Quantum algorithms in the NISQ era
09:30 → 10:00
Probing many-body quantum states in large-scale experiments with an upgraded randomized measurements toolbox
Benoît Vermersch, Quobly
10:00 → 10:30
10:30 → 11:00
COFFEE BREAK
11:00 → 14:40
Towards industrial applications of quantum algorithms
11:00 → 11:30
Designing quantum algorithms for the electronic structure of strongly correlated systems
Matthieu Saubanère, CNRS Research Fellow, Bordeaux (LOMA)
11:30 → 12:00
Quantum computing for optimization problems
Philippe Lacomme, associate professor at the University of Clermont Auvergne (LIMOS)
12:00 → 12:20
12:20 → 12:40
Quantum Algorithms for Distributed Quantum Computing
Ioannis Lavdas, Welinq
12:40 → 14:00
LUNCH
14:20 → 14:40
Quantum Computing for Partition Function Estimation of a Markov Random Field in a Radar Anomaly Detection Problem
Timothé Presles, Thales Defense Mission Systems
14:40 → 15:40
Mitigation of noise: quantum error correction
14:40 → 15:10
From cat qubit to large scale fault-tolerant quantum computer: Alice & Bob’s approach
Élie Gouzien, Alice & Bob
15:10 → 15:40
Algorithmic Fault Tolerance for Fast Quantum Computing
Chen Zhao, QuEra
15:40 → 16:10
COFFEE BREAK
16:10 → 17:40
Quantum Machine Learning
16:10 → 16:30
Unsupervised Feature Selection Using Gaussian Boson Sampling
Jesua Epequin, EDF China R&D
16:30 → 16:50
Subspace Preserving Quantum Machine Learning Algorithms
Léo Monbroussou, Naval group/LIP6
17:10 → 17:40
Understanding the role of data and learning through a quantum lens
Jarrod McClean, Google
As quantum training flourishes in French universities, several challenges arise. It is well known that the hype around quantum technologies sometimes undermines the perceived seriousness of quantum science.
However, some universities have built world-class expertise on the topic over decades, making France one of the leading nations. This conference aims to decipher the current state of quantum training in France.
The randomized measurements toolbox is now routinely used in quantum processors to estimate fundamental quantum properties, such as entanglement [1,2].
While experimentalists appreciate the simplicity and robustness of such measurement protocols, one challenge for theorists is to devise approaches that overcome statistical errors using 'cheap' resources, polynomial in system size.
In this context, I will present recent upgrades to the randomized measurements toolbox that address this challenge for large-scale quantum states relevant to quantum simulation. In particular, I will discuss efficient protocols for measuring entanglement [3], performing tensor-network tomography [4], and measuring noise models [5,6], and I will present experimental demonstrations on large-scale QPUs [4,6].
[1] A. Elben, S. T. Flammia, H.-Y. Huang, R. Kueng, J. Preskill, B. Vermersch, and P. Zoller, The Randomized Measurement Toolbox, Nat. Rev. Phys. 5, 9 (2022).
[2] T. I. Andersen et al., Thermalization and Criticality on an Analog-Digital Quantum Simulator, arXiv:2405.17385.
[3] B. Vermersch, M. Ljubotina, J. I. Cirac, P. Zoller, M. Serbyn, and L. Piroli, Many-Body Entropies and Entanglement from Polynomially Many Local Measurements, Phys. Rev. X 14, 031035 (2024).
[4] M. Votto et al., in preparation.
[5] D. Stilck França, L. A. Markovich, V. V. Dobrovitski, A. H. Werner, and J. Borregaard, Efficient and Robust Estimation of Many-Qubit Hamiltonians, Nat. Commun. 15, 311 (2024).
[6] M. Joshi et al., in preparation.
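To make the protocol concrete, here is a minimal numerical sketch (our illustration, not the speaker's code) of the basic randomized-measurement purity estimator from the toolbox of [1]: local Haar-random unitaries are applied, computational-basis outcome probabilities are collected, and Tr(ρ²) is recovered from cross-correlations weighted by Hamming distance.

```python
# Minimal sketch (illustrative): randomized-measurement purity estimation.
import numpy as np

rng = np.random.default_rng(0)
n = 3  # qubits

def haar_1q():
    """Haar-random single-qubit unitary via QR decomposition."""
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Random pure test state rho = |psi><psi| (purity exactly 1)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Hamming distances between all computational-basis strings
idx = np.arange(2**n)
ham = np.array([[bin(a ^ b).count("1") for b in idx] for a in idx])

est = []
for _ in range(200):                       # 200 random local unitaries
    u = haar_1q()
    for _ in range(n - 1):
        u = np.kron(u, haar_1q())
    p = np.real(np.diag(u @ rho @ u.conj().T))   # outcome probabilities
    # Purity formula: 2^n * sum_{s,s'} (-2)^(-D(s,s')) P(s) P(s')
    est.append(2**n * np.einsum("a,ab,b->", p, (-2.0) ** (-ham), p))

print("estimated purity:", np.mean(est), " exact:", np.trace(rho @ rho).real)
```

A real experiment replaces the exact probabilities with finite-shot estimates and a bias-corrected estimator; the statistical-error scaling that the talk addresses enters precisely there.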
Textbook quantum algorithms for materials science and chemistry are resource-heavy and not suited to current processors.
In this presentation, I will introduce three hybrid quantum-classical methods for materials science and chemistry that target current quantum processors.
The first method [1] will rely on orbital optimization to improve variational quantum eigensolver methods. The second [2] will use a slave-particle mapping to study Hubbard-like models with analog Rydberg quantum processors. The third [3] will use a combination of tensor networks and quantum circuits to study quench dynamics.
[1] Compact fermionic quantum state preparation with a natural-orbitalizing variational quantum eigensolving scheme, P. Besserve, M. Ferrero, and T. Ayral, arXiv:2406.14170.
[2] Hubbard physics with Rydberg atoms: using a quantum spin simulator to simulate strong fermionic correlations, Antoine Michel, Loïc Henriet, Christophe Domain, Antoine Browaeys, Thomas Ayral, Phys. Rev. B 109, 174409
[3] Combining Matrix Product States and Noisy Quantum Computers for Quantum Simulation, Baptiste Anselme Martin, Thomas Ayral, François Jamet, Marko J. Rančić, Pascal Simon, Phys. Rev. A 109, 062437
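As background for the first method, a minimal variational-quantum-eigensolver loop looks as follows (an illustrative sketch with a toy two-qubit transverse-field Ising Hamiltonian, not the authors' implementation): a classical optimizer tunes circuit parameters to minimize the measured energy.

```python
# Toy VQE sketch: minimize <psi(theta)|H|psi(theta)> for a 2-qubit Hamiltonian.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
# Toy Hamiltonian: H = Z0 Z1 + 0.5 (X0 + X1)
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I) + np.kron(I, X))

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def ry(t):
    """Single-qubit RY rotation."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)], [np.sin(t / 2), np.cos(t / 2)]])

def energy(theta):
    # Hardware-efficient ansatz: RY layer, CNOT entangler, RY layer
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(theta[0]), ry(theta[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(theta[2]), ry(theta[3])) @ psi
    return float(psi @ H @ psi)

# Classical outer loop, with a few random restarts for robustness
res = min((minimize(energy, rng.uniform(0, np.pi, 4), method="COBYLA")
           for _ in range(5)), key=lambda r: r.fun)
print("VQE energy:", res.fun, " exact ground state:", np.linalg.eigvalsh(H)[0])
```

The orbital-optimization idea of [1] augments exactly this loop with an outer optimization over the single-particle basis in which H is expressed.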
Quantum HF lines and harnesses are an important part of QC enabling technologies: the exponential growth of the number of qubits per cryostat (system scale-up) directly impacts the overall performance of the system. These microwave links include cables/assemblies, connectors, attenuators, board-to-board interconnects, and switches, with specific requirements for each thermal stage of the dilution refrigerator.
To manage the scale-up of the QC, high-density harnesses, such as side loaders, are required to support hundreds of lines.
Strongly correlated systems pose significant challenges for traditional electronic structure methods due to the complexity of electron-electron interactions. This presentation explores quantum algorithms designed to address these challenges, their limitations, and the strategies developed to overcome them. In particular, we focus on variational approaches, such as the Variational Quantum Eigensolver (VQE), which provide a practical solution for near-term, noisy quantum computers. To enhance efficiency, these algorithms are often combined with embedding techniques like Density Matrix Embedding Theory (DMET), which reduce the problem by breaking it down into a collection of computationally tractable sub-problems. We will discuss how this combination of quantum algorithms and embedding methods offers a promising pathway for accurately studying strongly correlated systems.
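To illustrate the embedding step, here is a toy DMET-style bath construction (our sketch under simplifying assumptions: a non-interacting 1D tight-binding chain at half filling), showing how the mean-field one-particle density matrix turns the full lattice into a small fragment-plus-bath problem that a quantum solver such as VQE could then treat.

```python
# Toy DMET bath construction on a non-interacting tight-binding chain.
import numpy as np

L, frag = 12, 2                              # chain length; fragment = first 2 sites
h = -np.eye(L, k=1) - np.eye(L, k=-1)        # hopping Hamiltonian
_, c = np.linalg.eigh(h)
occ = c[:, : L // 2]                         # occupied orbitals at half filling
D = occ @ occ.T                              # mean-field one-particle density matrix

# Bath orbitals: SVD of the environment-fragment block of the density matrix
env = D[frag:, :frag]
u, s, _ = np.linalg.svd(env, full_matrices=False)
bath = np.zeros((L, frag)); bath[frag:] = u

# Fragment + bath basis: the full lattice reduces to a 2*frag-site problem
proj = np.hstack([np.eye(L)[:, :frag], bath])
h_emb = proj.T @ h @ proj
print("embedded problem size:", h_emb.shape)  # (4, 4) instead of (12, 12)
```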
Discrete optimization problems are an important application area for quantum technologies, as most optimization problems are highly combinatorial, making enumeration techniques ineffective.
We will highlight the key points to exploit in order to leverage the theoretical and practical results of Operations Research in the context of quantum resolution.
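As a concrete example of the combinatorial structure involved, a small MaxCut instance can be cast as a QUBO, the standard entry point for quantum annealers and QAOA-type algorithms (our illustrative example, not taken from the talk).

```python
# MaxCut as a QUBO: minimize x^T Q x over binary assignments x.
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small 4-node graph
n = 4
# Cut size = sum over edges of (x_i + x_j - 2 x_i x_j); maximizing it means
# minimizing the quadratic form below.
Q = np.zeros((n, n))
for i, j in edges:
    Q[i, i] -= 1.0; Q[j, j] -= 1.0          # linear terms on the diagonal
    Q[i, j] += 1.0; Q[j, i] += 1.0          # off-diagonals carry 2 x_i x_j

# Brute force over assignments (the part a quantum solver would replace)
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
cut = -(np.array(best) @ Q @ np.array(best))
print("assignment:", best, " cut size:", int(cut))
```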
Estimating the contribution of rare events to a desired response in Monte Carlo simulations is always limited by a priori knowledge about the system studied. To obtain an estimate of the response with reasonable uncertainty, one must resort to variance-reduction algorithms. Importance Sampling (IS) and Adaptive Multilevel Splitting (AMS) fall into this category. They are now established in many domains and represent, respectively, the most efficient (IS) and the safest (AMS) strategies for sampling reactive trajectories, in the sense that these trajectories contribute effectively to the response of interest. We investigate the extension of IS and AMS to a quantum formalism, inspired by particle transport, thus enabling current proposals to exploit external a priori information about the system studied: the importance map/the value function.
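For intuition on the classical starting point, here is a toy importance-sampling estimate of a Gaussian tail probability (our illustration): the proposal is shifted into the rare region and samples are reweighted by the likelihood ratio.

```python
# Toy importance sampling: estimate P(X > 4) for X ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(1)
N, a = 100_000, 4.0
# Naive Monte Carlo: almost no samples reach the rare region X > 4
naive = np.mean(rng.normal(size=N) > a)
# Importance sampling: draw from the shifted proposal N(a, 1), then reweight
y = rng.normal(loc=a, size=N)
w = np.exp(-a * y + a**2 / 2)               # likelihood ratio phi(y) / phi(y - a)
is_est = np.mean((y > a) * w)
print(f"naive = {naive:.2e}   IS = {is_est:.2e}   exact ~ 3.17e-05")
```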
Distributed Quantum Computing (DQC) presents a promising approach to overcoming scalability challenges by leveraging a network of interconnected quantum processing units (QPUs). This talk will introduce the foundational principles of quantum algorithm distribution across multiple QPUs, highlighting the key communication protocols essential for coordinating operations between distant nodes as well as the application of these methods on certain key quantum algorithms. This exploration underscores DQC’s potential as a scalable solution for future quantum systems.
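The core communication primitive underpinning such protocols is teleportation over a shared entangled pair. A minimal statevector sketch is given below (our illustration, using deferred measurement in place of mid-circuit corrections).

```python
# Teleportation sketch: move |psi> from node A (q0) to node B (q2) over an ebit.
import numpy as np

I2 = np.eye(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

psi = np.array([0.6, 0.8])                       # state held by node A (qubit q0)
bell = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)   # shared ebit: q1 at A, q2 at B
state = np.kron(psi, bell)                       # qubit order: q0 q1 | q2

state = np.kron(CNOT, I2) @ state                # node A: CNOT q0 -> q1
state = np.kron(np.kron(H, I2), I2) @ state      # node A: H on q0
state = np.kron(I2, CNOT) @ state                # deferred Pauli-X correction at B
CZ_02 = np.diag([1.0] * 8); CZ_02[5, 5] = CZ_02[7, 7] = -1.0
state = CZ_02 @ state                            # deferred Pauli-Z correction

out = state.reshape(4, 2)                        # rows: (q0, q1); columns: q2
print("state at node B:", out[0] / np.linalg.norm(out[0]))   # recovers psi
```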
Classical crack-opening simulation methods simulate up to 10^12 degrees of freedom to extract a single scalar observable, such as the stress intensity factor. We present an alternative in a relevant 2D case using a parametrized quantum circuit, storing nodal displacements as amplitudes and extracting the relevant observables. Nodal displacements are obtained by minimizing the elastic energy via noise-resistant algorithms. We prove that computing the expectation value of the elastic energy requires only a polylogarithmic number of measurements. We run simulations on Qiskit and Perceval, thereby comparing photonic and transmon-based quantum computers. In doing so, we set a benchmark for several real-valued Ansätze in noise-free and NISQ contexts.
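For reference, the amplitude-encoding idea can be sketched in a few lines (our toy illustration, not the authors' code): a discretized field of 2^n nodal values is stored in the amplitudes of an n-qubit state, and scalar observables are read out as expectation values.

```python
# Amplitude encoding of a discretized field and scalar read-out.
import numpy as np

u = np.sin(np.linspace(0, np.pi, 8))     # toy "displacement field" on 2^3 nodes
u = u / np.linalg.norm(u)                # normalized: valid 3-qubit amplitudes
A = np.diag(np.linspace(0.0, 1.0, 8))    # toy diagonal observable
print("<u|A|u> =", u @ A @ u)            # scalar extracted from the encoded state
```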
In probability theory, the partition function is a normalization factor that reduces a probability function to a density function with total probability one. Among the statistical models used to represent joint distributions, Markov random fields (MRFs) can efficiently represent statistical dependencies between variables. As the number of terms in the partition function scales exponentially with the number of variables, the potential of each configuration cannot be computed exactly in reasonable time for large instances. In this work, we take advantage of the exponential scalability of quantum computing to speed up the estimation of the partition function of an MRF representing the dependencies between the operating variables of an airborne radar. For that purpose, we implement a quantum algorithm for partition function estimation in the one-clean-qubit model. After proposing suitable formulations, we discuss the performance and scalability of our approach in comparison with the theoretical performance of the algorithm.
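The primitive behind the one-clean-qubit (DQC1) model can be sketched numerically (our illustration): the clean qubit's ⟨X⟩ after a controlled-U on a maximally mixed register equals Re Tr(U)/2^n, the quantity such partition-function estimators build on.

```python
# DQC1 trace estimation: <X> of the clean qubit equals Re Tr(U) / 2^n.
import numpy as np

rng = np.random.default_rng(2)
n = 3
z = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
U, _ = np.linalg.qr(z)                   # random unitary on the mixed register

# rho = |+><+| on the clean qubit, maximally mixed state on the register
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
rho = np.kron(plus, np.eye(2**n) / 2**n)

zero = np.zeros((2**n, 2**n))
cU = np.block([[np.eye(2**n), zero], [zero, U]])     # controlled-U
rho = cU @ rho @ cU.conj().T

X = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2**n))
print("<X> =", np.trace(X @ rho).real,
      "  Re Tr(U)/2^n =", (np.trace(U) / 2**n).real)
```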
The Qaptiva platform from Eviden enables the deployment of a complete quantum computing environment on an HPC cluster, with the integration of quantum processors (QPUs). We will review recent deployments in national computing centers and discuss the main challenges of scaling up.
Cat qubits are autonomously protected against bit-flips. We will present how this specificity can be exploited to significantly reduce the overhead of error correction, paving a more scalable path toward large-scale fault-tolerant quantum computing. In more detail, we will show how to correct the remaining phase-flip errors and apply logical gates using repetition or LDPC codes.
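For intuition, the repetition-code part can be sketched in a few lines (our illustration with an assumed error rate): once one error channel is suppressed at the hardware level, a distance-d repetition code corrects the remaining errors by majority vote.

```python
# Toy repetition-code simulation: logical error rate under majority voting.
import numpy as np

rng = np.random.default_rng(3)
d, p, trials = 5, 0.05, 100_000          # assumed distance and physical error rate
flips = rng.random((trials, d)) < p      # independent errors on the d data qubits
logical = np.mean(flips.sum(axis=1) > d // 2)   # majority vote fails
print(f"physical p = {p}, logical error ~ {logical:.2e}")
```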
Fast, reliable logical operations are essential for the realization of useful quantum computers, as they are required to implement practical quantum algorithms at large scale. By redundantly encoding logical qubits into many physical qubits and using syndrome measurements to detect and subsequently correct errors, one can achieve very low logical error rates. However, for most practical quantum error correcting (QEC) codes such as the surface code, it is generally believed that due to syndrome extraction errors, multiple extraction rounds — on the order of the code distance d — are required for fault-tolerant computation. We show that contrary to this common belief, fault-tolerant logical operations can be performed with constant time overhead for a broad class of QEC codes, including the surface code with magic state inputs and feed-forward operations, to achieve “algorithmic fault tolerance”. Through the combination of transversal operations and novel strategies for correlated decoding, despite only having access to partial syndrome information, we prove that the deviation from the ideal measurement result distribution can be made exponentially small in the code distance. We supplement this proof with circuit-level simulations in a range of relevant settings, demonstrating the fault tolerance and competitive performance of our approach. Our work sheds new light on the theory of quantum fault tolerance, potentially reducing the space-time cost of practical fault-tolerant quantum computation by orders of magnitude.
Integration of QC within HPC compute centers is under way, making QC a new compute paradigm. QC offers ways to address previously unreachable problems, such as NP problems. On the other hand, the integration of QC into HPC is quite challenging, requiring the development of new middleware layers.
Challenges appear in several domains, involving system-oriented features as well as high-level libraries providing building blocks for end users to build HPC/QC-ready applications. To address this integration, the middleware should be structured and its interfaces defined.
Feature selection is a critical step in machine learning, particularly in high-dimensional datasets where redundant or irrelevant features can degrade model performance. In this paper, we propose a novel approach to unsupervised feature selection by leveraging the capabilities of Gaussian Boson Sampling (GBS). By mapping the feature set to a graph where edge weights represent mutual information, our method (Gaussian Boson Sampling Feature Selection – GBSFS) frames the feature selection task as a densest subgraph problem. GBS is employed to sample these subgraphs, which correspond to subsets of features that are minimally redundant and highly informative. We validate the effectiveness of our method through experiments on benchmark datasets, demonstrating that GBSFS can successfully identify feature subsets that improve model performance compared to traditional feature selection techniques. Our results highlight the potential of quantum computing in advancing machine learning tasks.
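The graph-construction step can be sketched as follows (our illustration: the brute-force search stands in for the GBS sampler, and the Gaussian mutual-information estimator is an assumption of this toy example).

```python
# Feature graph with mutual-information edge weights, then densest k-subgraph.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 6))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=500)   # feature 3 is redundant with 0

def mi_gauss(a, b):
    """Mutual information under a Gaussian assumption (from the correlation)."""
    r = np.corrcoef(a, b)[0, 1]
    return -0.5 * np.log(1.0 - r**2 + 1e-12)

n = X.shape[1]
W = np.zeros((n, n))                     # weighted feature graph
for i, j in combinations(range(n), 2):
    W[i, j] = W[j, i] = mi_gauss(X[:, i], X[:, j])

# Densest k-subgraph by brute force (the step GBS sampling would perform)
k = 3
best = max(combinations(range(n), k), key=lambda s: W[np.ix_(s, s)].sum())
print("densest feature subgraph:", best)
```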
Current quantum devices have serious constraints that limit the size of circuits. Variational quantum algorithms (VQAs), which use a classical optimizer to train a parametrized quantum circuit, have emerged as a leading strategy to address these constraints and design powerful quantum algorithms. Nevertheless, challenges remain, including the trainability, accuracy, and efficiency of VQAs. Subspace-preserving quantum circuits are a class of quantum algorithms that, by relying on symmetries in the computation, can offer theoretical guarantees on their training. These models are sub-universal and cannot offer an exponential advantage. In this talk, we will discuss how to use these methods to mimic classical neural network architectures, and how to seek potential advantages on particular quantum hardware.
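A minimal example of the symmetry at play (our illustration): the two-qubit RBS/Givens rotation acts only within a fixed Hamming-weight subspace, which is the kind of structure subspace-preserving circuits exploit.

```python
# The RBS gate preserves Hamming weight: probability never leaves the subspace.
import numpy as np

def rbs(theta):
    """Two-qubit RBS (Givens) gate: mixes |01> and |10>, fixes |00> and |11>."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1.0]])

state = np.array([0.0, 1.0, 0.0, 0.0])   # |01>: Hamming weight 1
out = rbs(0.7) @ state
weight = np.array([bin(i).count("1") for i in range(4)])
probs = {w: float(np.sum(out[weight == w] ** 2)) for w in range(3)}
print(probs)                             # all probability stays at weight 1
```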
Reservoir Computing (RC) is a machine learning paradigm with applications in time series forecasting and other memory-based problems [1]. The idea is to inject time series data collected from a source system into a physical system, let the system react and evolve according to its natural dynamics, and use this behaviour to make predictions. Several proposals to use quantum systems as physical reservoirs have recently attracted attention, as the compact nature of RC lends itself well to NISQ implementations [2, 3, 4]. Furthermore, the fact that no parameter updating is needed circumvents the barren plateau problem commonly encountered in variational quantum circuits. An important property of a machine learning model is its ability to generalise to unseen data; a bound on the generalisation error is also called a risk bound. We establish risk bounds for subclasses of the universal quantum reservoir classes introduced in [7] and [8]. Both classes employ a multivariate polynomial readout, which is used to prove universality. We find that the risk bound scales very unfavourably in the number of qubits for this class of multivariate polynomial readouts. In other words, our bound suggests that the guarantee on these classes' ability to perform well on unseen data degrades rapidly as the number of qubits increases.
[1] F. M. Bianchi, S. Scardapane, S. Lokse and R. Jenssen. Reservoir computing approaches for representation and classification of multivariate time series In IEEE Transactions on Neural Networks and Learning Systems, 32(5), 2021.
[2] K. Fujii and K. Nakajima. Harnessing disordered ensemble quantum dynamics for machine learning In Phys. Rev. Applied, 8(2), 2016.
[3] Y. Suzuki, Q. Gao, K. C. Pradel, K. Yasuoka and N. Yamamoto. Natural quantum reservoir computing for temporal information processing In Scientific Reports, 12(1), 2022.
[4] T. Yasuda, Y. Suzuki, T. Kubota, K. Nakajima, Q. Gao, W. Zhang, S. Shimono, H. I. Nurdin and N. Yamamoto. Quantum reservoir computing with repeated measurements on superconducting devices. In arXiv preprint, 2022.
[7] J. Chen and H. I. Nurdin. Learning Nonlinear Input-Output Maps with Dissipative Quantum Systems In Quantum Information Processing, 18(7), 2019.
[8] J. Chen, H. I. Nurdin and N. Yamamoto. Temporal information processing on noisy quantum computers In Phys. Rev. Applied, 14(2), 2020.
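For context, the classical template that quantum reservoir computing generalizes can be sketched as follows (our illustration): a fixed random recurrent map generates states, and only a polynomial readout, of the kind analysed in the talk, is trained.

```python
# Classical reservoir computing sketch: fixed random dynamics + trained readout.
import numpy as np

rng = np.random.default_rng(5)
T, N = 2000, 50
u = np.sin(np.linspace(0, 60, T + 1))            # input time series

W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state spectral scaling
w_in = rng.normal(size=N)

x = np.zeros(N); states = []
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])             # the reservoir itself is fixed
    states.append(x.copy())
S = np.array(states)

Phi = np.hstack([S, S**2, np.ones((T, 1))])      # degree-2 polynomial readout
w, *_ = np.linalg.lstsq(Phi, u[1:], rcond=None)  # only the readout is trained
rmse = np.sqrt(np.mean((Phi @ w - u[1:]) ** 2))
print("one-step prediction RMSE:", rmse)
```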
As quantum technology continues to rapidly advance, it is interesting to stop and ask what it has already taught us about how we do science. If we believe both that quantum computers may be able to do some computations exponentially faster than their classical counterparts and that we live in a quantum world, then our ability to learn from observational data as scientists may fundamentally change what we can do. I will first review some recent results in quantum machine learning that allow us to put ideas about learning from the physical world on a rigorous footing and show that quantum computers, and more specifically quantum memory, offer us an opportunity to learn from a quantum world with exponentially less data than traditional experiments. More recently, we have come to understand opportunities for advantage outside of just time complexity and consider what this means for the role of classical data in quantum learning. The connection of this work to new ways to learn quantum circuits could prove fruitful in understanding the role of quantum computers learning hard distributions in generative learning.