OCTOBER 2nd, 2024
13:30 → 13:45
WELCOME COFFEE
14:05 → 14:25
A brief history of Parallelisation in classical computing
Cyril BAUDRY, Scientific Information System Architect and HPC senior expert at EDF
14:25 → 14:45
Quantum Algorithms for Distributed Quantum Computing
Ioannis LAVDAS, Senior R&D Engineer Distributed Quantum Computing (DQC), WELINQ
14:45 → 15:15
Distributed Quantum Computing in an HPC environment
Andres GOMEZ, Applications and Projects Department Manager and head of Quantum research team at Galicia Supercomputing Center
15:15 → 15:45
A compiler for distributed quantum computing
Michele AMORETTI, Associate Professor of Computer Engineering at the University of Parma (Italy)
16:00 → 16:30
COFFEE BREAK
16:30 → 16:45
Smart-charging optimisation with neutral atoms
Constantin DALYAC, PASQAL
17:00 → 17:20
A Literature Review of Quantum Parallelization on the Algorithm Side
Christophe DURR, CNRS Researcher at Sorbonne University, LIP6 Laboratory
17:35 → 17:45
AQADOC Onboarding process
Andréa RALAMBOSON, TERATEC
17:45 → 17:55
Closing
WELINQ
After a short definition of parallel computing, we introduce the classification of computer architectures and its link with code development. Finally, we summarize current challenges and limits.
Distributed Quantum Computing (DQC) presents a promising approach to overcoming scalability challenges by leveraging a network of interconnected quantum processing units (QPUs). This talk will introduce the foundational principles of quantum algorithm distribution across multiple QPUs, highlighting the key communication protocols essential for coordinating operations between distant nodes. Emphasis will be given to the pivotal role of DQC in applications to use-cases of the energy sector in the context of the AQADOC project. This exploration underscores DQC’s potential as a scalable solution for future quantum systems.
Currently, quantum computers are at an early stage and are mainly provided through cloud environments. However, the world is hybrid, mixing classical and quantum algorithms. In this hybrid paradigm, the quantum computer becomes another accelerator, which we can coin the Quantum Processing Unit (QPU), to be integrated into hybrid computing infrastructures. We present the current work on, and requirements for, the integration of multiple QPUs in an HPC environment.
Most practical applications of quantum algorithms require many more qubits than current platforms provide. Merely increasing the number of physical qubits on a single device does not improve the quality of the computation, because of the increasing noise. Future devices will adopt quantum error correction to extract a few high-quality logical qubits from many noisy physical qubits. Therefore, to supply users with many logical qubits it will be necessary to adopt the Distributed Quantum Computing (DQC) paradigm, leveraging the functionalities provided by the Quantum Internet.
DQC efficiency and effectiveness will also depend on a well-designed quantum compiler, which is responsible for finding a suitable partitioning of the quantum algorithm and then appropriately scheduling remote operations to keep EPR-pair consumption to a minimum. Moreover, the quantum compiler must compute proper local transformations for each partition.
In this talk, we present a modular quantum compilation framework for DQC that takes into account both network and device constraints and characteristics. We also present a prototype quantum compiler and its evaluation on quantum circuits such as VQE and QFT, considering different network topologies and quantum processors characterised by heavy-hexagon coupling maps. For scheduling remote operations, we devised a strategy that exploits both TeleGate and TeleData operations and studied their impact.
In the last part of the talk, we briefly discuss the execution management of DQC jobs produced by the quantum compiler.
Neutral atom technology has steadily demonstrated significant theoretical and experimental advancements, positioning itself as a front-runner platform for running quantum algorithms. One unique advantage of this technology lies in the ability to reconfigure the geometry of the qubit register, from shot to shot.
This unique feature makes possible the native embedding of graph-structured problems at the hardware level.
We will show how a smart-charging problem is implemented on our latest device, Orion Alpha, using up to 100 qubits.
One of the most promising ways to scale up quantum computing is to interconnect several mid-sized quantum processing units (QPUs) in clusters. We will present the full-stack quantum link provided by WELINQ to interconnect QPUs. We will discuss the scalability and adaptability of this solution and its integration in HPC environments.
In this talk we will present the state of the art of algorithmic aspects of quantum parallelization, in particular for the maximum independent set problem.
As a global leader in quantum computing, IBM is deeply engaged, through its roadmap, in the scaling and industrialization of its quantum computing technology.
During this presentation, we will review our current vision of the future and share some challenges, from hardware and software perspectives as well as adoption.
This presentation will highlight the main challenges involved in scaling superconducting-qubit quantum computers. It will explain how IQM addresses the different aspects related to quality, quantity, and volume, and more specifically what role the future Quantum Factory in Grenoble will play in achieving these objectives.
The Qaptiva platform from Eviden enables the deployment of a complete quantum computing environment on an HPC cluster, with the integration of quantum processors (QPUs). We will review recent deployments in national computing centers and discuss the main challenges of scaling up.
Our presentation will address the challenge of building a SaaS platform that offers quantum algorithms. Simply put, we face three key challenges in scaling up:
The challenge of market relevance, in a landscape where hardware maturity has not yet been achieved.
The challenge of skills, both in theoretical aspects (mathematics, data science, and quantum) and in navigating the diverse technological offerings, each with its own development environment.
The challenge of industrialization, as we must provide an "as-a-service" platform capable of functioning in "production," even though most quantum capabilities currently do not offer service-level guarantees.
It is a difficult and ambitious but very exciting challenge, and we have strategies to address these different aspects.
Integration of QC within HPC compute centers is underway, making QC a new compute paradigm. QC offers ways to address previously unreachable problems, such as NP-hard problems. On the other hand, the integration of QC in HPC is quite challenging, requiring the development of new middleware layers.
When considering QC, challenges appear in several domains, involving system-oriented features as well as high-level libraries providing building blocks for end users to build HPC/QC-ready applications. To address this integration, the middleware should be structured and its interfaces should be defined.