This summer school offers an introduction to high-performance computing and its use in applied mathematics and physics. It covers several recent techniques, such as quantum computing and GPU programming.
The summer school consists of five lectures and three talks. Each lecture (2 × 1h30) will be complemented by two hours of tutorials. It is primarily intended for the students of the graduate program "Mathematics and interactions: research and interactions" of the University of Strasbourg, but is also open to any interested PhD student or researcher. As space is limited, priority will be given to PhD students. This event is supported by the Interdisciplinary Thematic Institute IRMIA++.
It will take place from 26 to 30 August in the IRMA conference room at the University of Strasbourg.
Quantum computers use effects from quantum physics to solve certain computational problems more efficiently. In this course, we will introduce relevant concepts from quantum physics such as entanglement and superposition, and explore why quantum mechanics seems to be difficult to simulate on a non-quantum computer. We will look at some examples of quantum algorithms and also at the challenges and limits of quantum computing.
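As a minimal illustration of superposition (a sketch in NumPy, which is an assumption on our part and not part of the course materials): a single qubit can be represented as a two-component complex state vector, and the Hadamard gate sends the basis state |0⟩ to an equal superposition of |0⟩ and |1⟩:

```python
import numpy as np

# Basis state |0> as a two-component complex state vector
ket0 = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>) / sqrt(2)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0

# Measurement probabilities are the squared moduli of the amplitudes
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5]
```

Measuring this state yields 0 or 1 with equal probability, which is the simplest example of the quantum behaviour a classical simulation must reproduce.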
High-performance computing has offered extraordinary capabilities to scientists and engineers for solving complex problems. This course provides an introduction to the basic concepts of parallel architecture, parallel models, and their programming techniques. Each concept will be illustrated with straightforward examples and practical exercises using Python.
Kernel methods are ubiquitous in many fields of data science, including statistics and machine learning. They can model datasets where particles interact, for example, through a distance or covariance matrix. However, computations involved in kernel methods can be heavy and challenging to scale to real data. In this session, I will present the KeOps library, which enables the efficient computation of arbitrary operations involving M×N pairwise interactions between M "source" and N "target" data points. KeOps leverages CPU or GPU parallelization and automatic differentiation to achieve this efficiency. It avoids creating unnecessary temporary quadratic matrices (M×N) for common operations such as kernel convolutions or nearest neighbor searches. KeOps can be used almost seamlessly through NumPy, PyTorch, or R bindings. Depending on the audience, we may use examples from machine learning, statistics, or physics to illustrate these methods.
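To make the quadratic cost concrete, here is a plain-NumPy sketch of a Gaussian kernel convolution that materializes the full pairwise matrix, which is exactly the temporary buffer KeOps avoids building (the array names and sizes below are illustrative, not taken from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, D = 1000, 2000, 3
x = rng.random((M, D))   # M "source" points
y = rng.random((N, D))   # N "target" points
b = rng.random((M, 1))   # values carried by the sources

# Dense (N, M) matrix of squared distances -- the quadratic buffer
D2 = ((y[:, None, :] - x[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2)          # Gaussian kernel matrix, shape (N, M)

# Kernel convolution onto the targets: a_j = sum_i exp(-|y_j - x_i|^2) * b_i
a = K @ b                # shape (N, 1)
```

For large M and N this dense matrix dominates memory use; KeOps evaluates the same reduction on the fly, without ever storing K.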
This course will describe techniques and algorithms for scheduling applications on current heterogeneous machines, taking advantage of both CPUs and GPU accelerators. We will introduce a task-based programming paradigm to easily describe the structure of an application. On the theoretical side, we will present several popular scheduling algorithms and prove guarantees on their performance. On the practical side, we will discuss common heuristics and provide hands-on experiments using them in scientific applications.
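As a taste of such heuristics, here is a sketch (with illustrative task names and timings, not course code) of greedy earliest-finish-time list scheduling on a platform with one CPU and one GPU: each task is placed on whichever resource would finish it soonest, given per-resource running times:

```python
# Greedy earliest-finish-time list scheduling on a CPU + GPU platform.
# Each independent task has a (cpu_time, gpu_time) pair of running times.
tasks = {"t1": (4.0, 1.0), "t2": (2.0, 3.0), "t3": (5.0, 1.5), "t4": (1.0, 4.0)}

ready = {"cpu": 0.0, "gpu": 0.0}   # time at which each resource becomes free
schedule = {}

for name, (cpu_t, gpu_t) in tasks.items():
    # Finish time of this task if placed on each resource
    finish = {"cpu": ready["cpu"] + cpu_t, "gpu": ready["gpu"] + gpu_t}
    best = min(finish, key=finish.get)  # resource with the earliest finish
    schedule[name] = best
    ready[best] = finish[best]

makespan = max(ready.values())
print(schedule, makespan)
```

On this instance the heuristic sends t1 and t3 to the GPU and t2 and t4 to the CPU, for a makespan of 3.0; the lectures study when such greedy choices come with provable guarantees.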
Parallelization and optimization in the Computational Fluid Dynamics context, Philippe Pernaudeau (Université de Poitiers)
Urban building project, Christophe Prud'homme (Université de Strasbourg)