This summer school offers an introduction to high-performance computing and its use in applied mathematics and physics. It covers several recent techniques, such as quantum computing and GPU programming.
The summer school consists of five lectures and three talks. Each lecture (2 × 1h30) will be complemented by 2 hours of tutorials. It is primarily intended for students of the graduate program "Mathematics and interactions: research and interactions" of the University of Strasbourg, but is also open to any interested PhD student or researcher. As space is limited, priority will be given to PhD students. This event is supported by the Interdisciplinary Thematic Institute IRMIA++.
It will take place from 26 to 30 August in the IRMA conference room at the University of Strasbourg.
Lectures
Introduction to quantum computing, Miriam Backens (INRIA, Université de Lorraine)
Quantum computers use effects from quantum physics to solve certain computational problems more efficiently. In this course, we will introduce relevant concepts from quantum physics such as entanglement and superposition, and explore why quantum mechanics seems to be difficult to simulate on a classical computer. We will look at some examples of quantum algorithms, as well as the challenges and limits of quantum computing.
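One way to see why classical simulation is hard: an n-qubit state requires 2^n complex amplitudes, so memory grows exponentially with the number of qubits. The tiny statevector sketch below (an illustration only, not course material) applies a Hadamard gate to each qubit of the all-zero state, producing a uniform superposition over all 2^n basis states.

```python
import numpy as np

def n_qubit_zero_state(n):
    # An n-qubit state is a vector of 2**n complex amplitudes.
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0  # the basis state |00...0>
    return psi

def apply_hadamard(psi, qubit, n):
    # Apply the Hadamard gate H to one qubit of an n-qubit statevector,
    # by contracting H with the corresponding tensor axis.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(H, np.moveaxis(psi, qubit, 0), axes=1), 0, qubit)
    return psi.reshape(-1)

# H on every qubit of |00...0> yields the uniform superposition:
# every one of the 2**n basis states has amplitude 1/sqrt(2**n).
n = 3
psi = n_qubit_zero_state(n)
for q in range(n):
    psi = apply_hadamard(psi, q, n)
```

Doubling n doubles nothing on a quantum device, but doubles the length of `psi` on a classical one, which is the crux of the simulation problem.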
Introduction to high-performance computing, Matthieu Boileau (CNRS, Université de Strasbourg)
High-performance computing has offered extraordinary capabilities to scientists and engineers for solving complex problems. This course provides an introduction to the basic concepts of parallel architecture, parallel models, and their programming techniques. Each concept will be illustrated with straightforward examples and practical exercises using Python.
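As a small taste of the kind of Python exercise this course involves (an illustrative sketch only; the actual tutorials may use different tools), the snippet below splits an embarrassingly parallel computation, a midpoint-rule estimate of pi, across several worker processes with the standard library's `multiprocessing` module.

```python
from multiprocessing import Pool

def partial_pi(args):
    """Midpoint-rule integration of 4/(1+x^2) on a sub-interval of [0, 1]."""
    start, end, n = args
    h = (end - start) / n
    return h * sum(4.0 / (1.0 + (start + (i + 0.5) * h) ** 2) for i in range(n))

def parallel_pi(n_points=1_000_000, n_workers=4):
    # Split [0, 1] into one sub-interval per worker, then sum the
    # partial integrals computed in parallel.
    chunks = [(w / n_workers, (w + 1) / n_workers, n_points // n_workers)
              for w in range(n_workers)]
    with Pool(n_workers) as pool:
        return sum(pool.map(partial_pi, chunks))

if __name__ == "__main__":
    print(parallel_pi())  # close to 3.14159...
```

The same decomposition idea carries over to distributed-memory models such as MPI, where each rank computes its own sub-interval.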
Kernel methods on GPUs and applications, Benjamin Charlier (Université de Montpellier)
Kernel methods are ubiquitous in many fields of data science, including statistics and machine learning. They can model datasets where particles interact, for example, through a distance or covariance matrix. However, computations involved in kernel methods can be heavy and challenging to scale to real data. In this session, I will present the KeOps library, which enables the efficient computation of arbitrary operations involving M×N pairwise interactions between M "source" and N "target" data points. KeOps leverages CPU or GPU parallelization and automatic differentiation to achieve this efficiency. It avoids creating unnecessary temporary quadratic matrices (M×N) for common operations such as kernel convolutions or nearest neighbor searches. KeOps can be used almost seamlessly through NumPy, PyTorch, or R bindings. Depending on the audience, we may use examples from machine learning, statistics, or physics to illustrate these methods.
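The memory problem that KeOps addresses can be illustrated in plain NumPy (a sketch of the underlying idea, not the KeOps API itself): a naive Gaussian kernel convolution materializes the full M×N distance matrix, whereas a blocked version only ever holds a small slice of it. KeOps pushes this further by fusing the kernel evaluation and the reduction into a single CPU or GPU routine.

```python
import numpy as np

def gauss_conv_naive(x, y, b):
    # Materializes the full (M, N) kernel matrix: O(M*N) memory.
    D2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-D2) @ b

def gauss_conv_blocked(x, y, b, block=256):
    # Processes "source" points in blocks, so only a (block, N)
    # slice of the kernel matrix exists at any time.
    out = np.empty((x.shape[0], b.shape[1]))
    for i in range(0, x.shape[0], block):
        D2 = ((x[i:i + block, None, :] - y[None, :, :]) ** 2).sum(-1)
        out[i:i + block] = np.exp(-D2) @ b
    return out
```

Both functions return the same result; the blocked one trades a Python-level loop for a memory footprint independent of M, which is what makes large-scale kernel computations feasible.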
Scheduling on heterogeneous machines, Lionel Eyraud-Dubois (INRIA, Université de Bordeaux)
This course will describe techniques and algorithms for scheduling applications on current heterogeneous machines, taking advantage of both CPUs and GPU accelerators. We will introduce a task-based programming paradigm to easily describe the structure of an application. On the theoretical side, we will present several popular scheduling algorithms and prove guarantees on their performance. On the practical side, we will discuss common heuristics and provide hands-on experiments to use them in scientific applications.
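A minimal example of the kind of algorithm such a course studies is Graham's greedy list scheduling for independent tasks: each task goes to the machine that currently finishes earliest. On identical machines this heuristic is a 2-approximation of the optimal makespan; heterogeneous variants refine the rule with per-machine task durations. The sketch below (illustrative only, for the identical-machine case) uses a heap to pick the least-loaded machine.

```python
import heapq

def list_schedule(durations, n_machines):
    """Greedy list scheduling: return (makespan, machine assigned to each task)."""
    # Heap of (current finish time, machine index).
    heap = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(heap)
    assignment = []
    for d in durations:
        finish, m = heapq.heappop(heap)  # least-loaded machine
        assignment.append(m)
        heapq.heappush(heap, (finish + d, m))
    makespan = max(t for t, _ in heap)
    return makespan, assignment
```

For example, scheduling tasks of durations [3, 2, 2] on 2 machines yields a makespan of 4 (machine 0 runs the 3, machine 1 runs both 2s), whereas the optimum here is also 4; the 2-approximation bound covers the worst case.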
Parallel-in-time numerical methods and applications, Sever Hirstoaga (INRIA, Paris)
Talks
Parallelization and optimization in the Computational Fluid Dynamics context, Philippe Pernaudeau (Université de Poitiers)
Urban building project, Christophe Prud'homme (Université de Strasbourg)