Aug 25 – 29, 2025
IRMA
Europe/Paris timezone
Registration deadline is Monday June 30 (see link at bottom of page).

This summer school offers an introduction to deep learning and its various applications, such as physical modeling and the formalization of mathematics. It covers several recent techniques, such as transformers and Large Language Models, as well as mathematical methods for analyzing their learning process.

The summer school is composed of five lectures and three talks. Each lecture (2 × 1h30) will be complemented by 2 hours of tutorials. It is primarily intended for students of the graduate program "Mathematics and interactions: research and interactions" of the University of Strasbourg, but it is also open to any interested PhD student or researcher. As space is limited, priority will be given to PhD students. This event is supported by the Interdisciplinary Thematic Institute IRMIA++.

It will take place from 25 to 29 August in the IRMA conference room at the University of Strasbourg.   


Lectures 

Introduction to deep learning
Antoine Deleforge (INRIA, Université de Strasbourg)

Formalizing mathematics with Large Language Models
Marc Lelarge (INRIA, ENS)

Transformers and flows in the space of probability measures
Domenech Ruiz I Balet (Université Paris Dauphine)

In this series of lectures, we will investigate the mathematics of transformers. We will view them as partial differential equations and discuss properties such as universal approximation, clustering and other phenomena. The lectures are complemented by a coding session.
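One common continuous-time formulation of this viewpoint (a sketch, not necessarily the exact equations covered in the lectures) treats the tokens $x_1, \dots, x_n \in \mathbb{R}^d$ as interacting particles whose flow is driven by self-attention:

```latex
% Tokens evolve as an interacting particle system: each token x_i is
% pulled towards the others, weighted by normalized attention scores.
\dot{x}_i(t) = \frac{1}{Z_i(t)} \sum_{j=1}^{n}
  e^{\langle Q x_i(t),\, K x_j(t) \rangle}\, V x_j(t),
\qquad
Z_i(t) = \sum_{j=1}^{n} e^{\langle Q x_i(t),\, K x_j(t) \rangle},
```

where $Q$, $K$ and $V$ are the query, key and value matrices. Passing to the empirical measure of the tokens yields an evolution equation in the space of probability measures, and the long-time clustering of tokens is one of the phenomena studied in this setting.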

Deep Reinforcement Learning and AI for Symbolic Mathematics
Wassim Tenachi (MILA, Université de Montréal)

In this course, I will introduce deep reinforcement learning (RL), the machine learning approach behind many of the impressive videos we have all seen of agents mastering video games, while highlighting its much broader potential. The session is designed to be intuitive and accessible, aiming to give students a solid grasp of what RL is capable of and how it works. I will cover the fundamental principles and real-world use cases of RL, a paradigm where agents learn through trial and error in simulated or real environments, without necessarily having access to explicit gradients. Instead, they learn by approximating the gradients required to train neural networks.

To ground these ideas, I will present a case study from my own research in AI for symbolic mathematics, where machine learning models assist in tasks like theorem proving, symbolic equation discovery, and even replicating aspects of empirical sciences, such as physics or astrophysics, by uncovering analytical expressions that model observed data: a field known as symbolic regression. In particular, we will explore how numerical models like neural networks can be interfaced with symbolic mathematical structures, and how such hybrid systems can learn to achieve specific reasoning goals.

In the practical session, we will dive into some simple yet fun toy examples to better understand both reinforcement learning and symbolic regression. These hands-on activities are meant to give students a foundation they can build on in future projects or research.
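The "trial and error without explicit gradients" idea can be illustrated on the smallest possible example. The following sketch (an illustration of the general REINFORCE policy-gradient technique, not of the course's actual material) trains a softmax policy on a 3-armed bandit using only sampled rewards:

```python
import numpy as np

# Toy REINFORCE (policy gradient) on a 3-armed bandit. The agent never sees
# the gradient of the reward function itself; it estimates an update
# direction for its parameters from sampled rewards alone.
rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.9])  # hidden expected reward of each arm
logits = np.zeros(3)                    # policy parameters (softmax policy)
baseline = 0.0                          # running reward average, reduces variance
lr = 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(5000):
    probs = softmax(logits)
    arm = rng.choice(3, p=probs)               # act by sampling the policy
    reward = rng.normal(true_means[arm], 0.1)  # observe a noisy reward
    advantage = reward - baseline
    baseline += 0.05 * (reward - baseline)
    grad_logp = -probs                         # gradient of log pi(arm) w.r.t. logits
    grad_logp[arm] += 1.0
    logits += lr * advantage * grad_logp       # REINFORCE update

print(softmax(logits))  # probability mass should concentrate on the best arm
```

The same principle, estimating parameter updates from sampled returns rather than from differentiating through the environment, is what lets RL agents optimize non-differentiable objectives such as the fit of a discrete symbolic expression to data.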

Time series reasoning with language models
Svitlana Vyetrenko (Université de Strasbourg)

Talks 

Title to be announced

Title to be announced

Venue: Salle de Conférences, IRMA, Université de Strasbourg

Organizers

  • Jonathan Freundlich (Observatoire de Strasbourg)
  • Philippe Helluy (IRMA, Université de Strasbourg)
  • Nicolas Magaud (ICUBE, Université de Strasbourg)
  • Chloé Thibaudeau (ITI IRMIA++, Université de Strasbourg)
  • Laurent Navoret (IRMA, Université de Strasbourg)

 

Application
Application for this event is currently open.