Deep learning is becoming a widely used tool in science. This summer school is dedicated to different mathematical aspects of neural networks, with a special focus on applications in computer science and astrophysics.
The summer school is composed of five lectures and three talks. Each lecture (2 x 1h30) will be complemented by 2 hours of tutorials.
It is primarily intended for the students of the graduate program "Mathematics and interactions: research and interactions" of the University of Strasbourg, but is also accessible to any interested PhD student or researcher. No machine learning background is required to attend the summer school. This event is supported by the Interdisciplinary Thematic Institute IRMIA++.
It will take place from 29 August to 2 September in the IRMA conference room at the University of Strasbourg.
Lectures
Introduction to Deep Learning, Léo Bois (Université de Strasbourg)
Convolutional Neural Networks for object detection: fast and accurate results with the YOLO (You Only Look Once) method, David Cornu (Observatoire de Paris)
The objective of this lecture series is to provide a theoretical and practical overview of the YOLO (You Only Look Once) object detection method. The first session will introduce the different Convolutional Neural Network (CNN) based methods for object detection, and then focus on the theoretical principles behind the regression-based YOLO approach. The hands-on sessions will be dedicated to the parametrization, training, and use of such networks on classical datasets (PASCAL VOC, COCO, ...). Finally, we will discuss how this method can be modified to predict additional parameters for each object, still in the form of a single standalone network, and illustrate this capability for galaxy detection and characterization in radio-astronomical images.
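As a rough illustration of the regression-based encoding mentioned above (an illustrative sketch assuming PyTorch, not course material), the network below outputs, in a single forward pass, an S x S grid in which each cell regresses B candidate boxes (position, size, confidence) together with C class scores; the values S = 7, B = 2 and C = 20 are arbitrary choices.

    # Minimal sketch of a YOLO-style regression head (illustrative values only).
    import torch
    import torch.nn as nn

    S, B, C = 7, 2, 20  # grid size, boxes per cell, number of classes

    class TinyYOLO(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(S),          # force an S x S feature map
            )
            self.head = nn.Conv2d(32, 5 * B + C, kernel_size=1)

        def forward(self, x):
            # Each grid cell predicts B boxes (x, y, w, h, confidence)
            # plus C class scores, all from one standalone network.
            return self.head(self.backbone(x)).permute(0, 2, 3, 1)

    pred = TinyYOLO()(torch.randn(1, 3, 224, 224))
    print(pred.shape)  # torch.Size([1, 7, 7, 30])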
Generative models for images, Bruno Galerne (Université d'Orléans)
The goal of this short course is to introduce the main deep generative models developed over the last decade. These models are practical solutions to the unsupervised learning problem of parametric modeling of an arbitrary data distribution. Advances in deep learning representations have led to generative models able to produce realistic synthetic data. The course will mainly focus on variational auto-encoders (VAEs) and generative adversarial networks (GANs). The mathematical modeling will be presented, and the basic properties of fundamental tools for comparing distributions, such as the Kullback-Leibler divergence and optimal transport metrics, will be recalled. Numerical examples will focus on image modeling, although the range of applications of these generic models is broader.
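As a pointer to the objects involved (an illustrative sketch assuming PyTorch, not course material), the function below writes down the negative evidence lower bound optimized by a VAE: a reconstruction term plus the closed-form Kullback-Leibler divergence between a diagonal Gaussian posterior and a standard normal prior; the mean-squared-error reconstruction term is an arbitrary choice.

    # Minimal sketch of the VAE objective (negative ELBO).
    import torch
    import torch.nn.functional as F

    def vae_loss(x, x_recon, mu, log_var):
        # Reconstruction error between the input x and the decoder output x_recon.
        recon = F.mse_loss(x_recon, x, reduction="sum")
        # Closed-form KL(q(z|x) || N(0, I)) for q(z|x) = N(mu, diag(exp(log_var))).
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        return recon + kl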
Deep Learning and dynamical systems: applications in neuroimaging, François Rousseau (IMT Atlantique)
This course is an introduction to deep learning viewed as dynamical systems. It will focus on the links between network architectures and the dynamical formulation of learning tasks. The hands-on sessions will address the implementation of image registration algorithms, seen as dynamical systems, for the analysis of neuroimaging data. The second part of the course will focus on continuous models used both in image registration and in normalizing flows.
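A minimal sketch of this viewpoint (illustrative only, assuming PyTorch): a residual block can be read as one explicit Euler step of an ordinary differential equation dx/dt = f(x), so that stacking blocks integrates a flow over time; the vector field f and the step size h below are arbitrary choices.

    # Residual block as one explicit Euler step of dx/dt = f(x).
    import torch
    import torch.nn as nn

    class EulerResidualBlock(nn.Module):
        def __init__(self, dim, h=0.1):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
            self.h = h

        def forward(self, x):
            return x + self.h * self.f(x)  # x_{k+1} = x_k + h * f(x_k)

    # Stacking K blocks integrates the flow from t = 0 to t = K * h.
    flow = nn.Sequential(*[EulerResidualBlock(2) for _ in range(10)])
    print(flow(torch.randn(5, 2)).shape)  # torch.Size([5, 2])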
Introduction to deep learning on graphs, Samuel Vaiter (CNRS, Université Côte d'Azur)
This course is an introduction to geometric deep learning, with a focus on graphs and the associated mathematical aspects. After presenting the main principles behind spectral and message passing graph neural networks, the students will implement simple models with the help of PyTorch Geometric. The last part of the course will be dedicated to recent advances in the mathematical analysis of the large, relatively sparse random graph regime.
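For a preview of the hands-on material, here is a minimal two-layer graph convolutional network written with PyTorch Geometric; the toy graph and layer sizes are illustrative choices, not course material.

    # Two-layer GCN with PyTorch Geometric on a toy 3-node graph.
    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv

    class GCN(torch.nn.Module):
        def __init__(self, in_dim, hidden_dim, num_classes):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden_dim)
            self.conv2 = GCNConv(hidden_dim, num_classes)

        def forward(self, x, edge_index):
            # Each layer aggregates normalized messages from neighbouring nodes.
            x = F.relu(self.conv1(x, edge_index))
            return self.conv2(x, edge_index)

    # Toy graph: 3 nodes, 2 undirected edges stored as directed pairs.
    edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]], dtype=torch.long)
    out = GCN(8, 16, 2)(torch.randn(3, 8), edge_index)
    print(out.shape)  # torch.Size([3, 2])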
Talks