Summer school EUR MINT 2024 "Random Matrices and Free Probability"

Amphithéâtre Schwartz, Institut de Mathématiques (Bâtiment 1R3)

Université Toulouse 3 Paul Sabatier, 118 Route de Narbonne, Toulouse
Guillaume Cébron
Description

This Summer School aims to introduce Master's and doctoral students to several topics of current research in random matrices. Applications from postdocs are also welcome and will receive proper consideration. If you are interested in this Summer School, you can register using the form.

The school will start on Monday 10 June and end on Tuesday 18 June, and will be followed by a conference on random matrices from 19 to 21 June that you can also attend (https://indico.math.cnrs.fr/event/10927/).

The school will feature 7 mini-courses of varying degrees of difficulty. Five courses take a slightly more advanced standpoint but should remain accessible to advanced Master's students (in the European sense, comparable to first-year graduate students in North America). Two basic courses require fewer prerequisites and are accessible to undergraduate students (3rd year of undergraduate studies in the European system).

Basic courses
Guillaume Cébron - Introduction to Free Probability Theory (1st week)
François Chapon - Introduction to Random Matrices (1st week)

Advanced courses
Gérard Ben Arous (2nd week)
Mireille Capitaine (1st week)
Reda Chhaibi (1st week)
Alice Guionnet (2nd week)
Jamal Najim (2nd week)

The timetable is now available.

Registration
Accommodation confirmation for non-local students
Participants
  • Aabhas Gulati
  • Ahmed Souabni
  • Andreas Malliaris
  • Arianna Piana
  • Ayush Bidlan
  • David Garcia Zelada
  • Ena Jahic
  • Helene Götz
  • Issa-Mbenard Dabo
  • Jason Beh
  • Jeong Yoonje
  • Marwa Banna
  • Michail Louvaris
  • Michel Pain
  • Mondher Chouikhi
  • Panagiotis Zografos
  • Peng Tian
  • Ronan Memin
  • Rémi Bonnin
  • Teodor Bucht
  • Thomas Buc--d'Alché
  • Vanessa Piccolo
  • Yanxing Chen
  • Zikun Ouyang
  • Zixin Ye
  • (15 more participants)

Timetable
    • 1
      Introduction to free probability 1/4

      The aim of this course is to present the concept of free independence, the related central limit theorem, the notion of free cumulants, and the use of free independence to study large random matrices.

      Speaker: Guillaume Cébron
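
      As a pointer to the last item, here is a minimal numerical sketch (not part of the course materials; it assumes only Python with numpy) of free independence at work: two independent GUE matrices are asymptotically free, so their normalized sum is again semicircular, with even moments given by the Catalan numbers.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 1000

        def gue(n):
            # GUE matrix normalized so its spectrum fills [-2, 2] for large n
            a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            return (a + a.conj().T) / (2 * np.sqrt(n))

        # Free CLT / asymptotic freeness: the normalized sum of two independent
        # GUE matrices is again approximately semicircular, so its even moments
        # approach the Catalan numbers 1, 2, 5, ...
        eig = np.linalg.eigvalsh((gue(N) + gue(N)) / np.sqrt(2))
        print(np.mean(eig**2), np.mean(eig**4))  # close to 1 and 2
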
    • 2
      Introduction to random matrices 1/4
      Speaker: François Chapon
    • 16:00
      Coffee break
    • 3
      Introduction to free probability 2/4
      Speaker: Guillaume Cébron
    • 4
      Introduction to random matrices 2/4
      Speaker: François Chapon
    • 5
      Introduction to free probability 3/4
      Speaker: Guillaume Cébron
    • 16:00
      Coffee break
    • 6
      Introduction to random matrices 3/4
      Speaker: François Chapon
    • 7
      Introduction to free probability 4/4
      Speaker: Guillaume Cébron
    • 12:30
      Buffet
    • 8
      Deformed matricial models and free probability theory 1/3

      Practical problems naturally lead one to wonder how the spectrum of a given random matrix reacts to a deterministic perturbation. For example, in signal theory, the deterministic perturbation is seen as the signal and the perturbed matrix is perceived as noise; the question is whether the observation of the spectral properties of "signal plus noise" can give access to significant parameters of the signal. A typical illustration is the so-called BBP phenomenon (after Baik, Ben Arous, Péché), which exhibits outliers (eigenvalues that move away from the rest of the spectrum) and their Gaussian fluctuations for spiked covariance matrices. The aim of this lecture is to show how free probability theory sheds light on the spectral properties of deformed matricial models and provides a unified understanding of various phenomena.

      Speaker: Mireille Capitaine
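
      As a minimal illustration of the outlier phenomenon described above, here is a sketch (assumed demo parameters; it uses an additive deformation of a GOE matrix rather than the spiked covariance model of BBP) in which the top eigenvalue of "signal plus noise" detaches from the bulk:

        import numpy as np

        rng = np.random.default_rng(1)
        N, theta = 2000, 2.0            # spike strength theta > 1 creates an outlier

        a = rng.normal(size=(N, N))
        W = (a + a.T) / np.sqrt(2 * N)  # GOE "noise", bulk spectrum in [-2, 2]
        v = np.ones(N) / np.sqrt(N)     # deterministic unit "signal" direction
        M = W + theta * np.outer(v, v)  # deformed (signal plus noise) matrix

        # For theta > 1 the largest eigenvalue separates from the bulk and
        # sits near theta + 1/theta; for theta <= 1 it sticks to the edge 2.
        print(np.linalg.eigvalsh(M)[-1], theta + 1 / theta)  # both close to 2.5
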
    • 9
      Introduction to random matrices 4/4
      Speaker: François Chapon
    • 16:00
      Coffee break
    • 10
      Deformed matricial models and free probability theory 2/3
      Speaker: Mireille Capitaine
    • 11
      Beta ensembles 1/3
      Speaker: Reda Chhaibi
    • 11:00
      Coffee break
    • 12
      Deformed matricial models and free probability theory 3/3
      Speaker: Mireille Capitaine
    • 13
      Beta ensembles 2/3
      Speaker: Reda Chhaibi
    • 15:30
      Coffee break
    • 14
      Beta ensembles 3/3
      Speaker: Reda Chhaibi
    • 15
      Large deviations for the largest eigenvalues of random matrices 1/3

      Estimating the probability of large deviations of the extreme eigenvalues of random matrices is necessary, for instance, to estimate the volume of the minima of random functions. In general this is a difficult question, as the law of these eigenvalues is not explicit. In this course, we will discuss the known results in this field and the different methods of obtaining them, as well as open problems. No knowledge of large deviation theory is required.

      Speaker: Alice Guionnet
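
      For orientation, the prototypical explicit case is the Gaussian one: for Gaussian beta-ensembles normalized so that the limiting spectral support is [-2, 2], the largest eigenvalue satisfies a large deviation principle with speed N and an explicit rate function,

        \[
          \mathbb{P}\bigl(\lambda_{\max} \ge x\bigr) \asymp e^{-N I_\beta(x)},
          \qquad
          I_\beta(x) = \frac{\beta}{2} \int_2^x \sqrt{t^2 - 4}\,\mathrm{d}t,
          \quad x \ge 2.
        \]

      For models whose eigenvalue law is not explicit, obtaining analogues of this formula is exactly the kind of question the course addresses.
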
    • 10:30
      Coffee break
    • 16
      Random matrices and dynamics of optimization in very high dimensions 1/3

      Machine learning and data science algorithms rely on the efficient optimization of topologically complex random functions in very high dimensions. Surprisingly, simple algorithms like Stochastic Gradient Descent (with small batches) are used very effectively. I will concentrate on trying to understand why these simple tools can still work in these complex and heavily over-parametrized regimes.

      I will first introduce the whole framework for non-experts, from the structure of the typical tasks to the natural structures of simple neural nets used in standard contexts. I will then briefly cover the classical and usual context of SGD in finite dimensions.
      I will then survey recent work with Reza Gheissari (Northwestern) and Aukosh Jagannath (Waterloo) giving a general view of the existence of projected "effective dynamics" for "summary statistics" in much smaller dimensions, which still govern the performance of very high dimensional systems. These effective dynamics define a dynamical system in finite dimensions which may be quite complex, and which rules the performance of the learning algorithm.
      The next step will be to understand how the system finds these low-dimensional "summary statistics". Random matrix theory enters the game at this step (in subsequent works with the same authors and with Jiaoyang Huang (Wharton, U-Penn)).
      This rests on a dynamical spectral transition: along the trajectory of the optimization path, the Gram matrix or the Hessian matrix develops BBP outliers which carry these effective dynamics.
      I will illustrate the use of this point of view on a few central examples of ML: multilayer neural nets for classification (of Gaussian mixtures), and the XOR example.

      Speaker: Gérard Ben Arous
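
      To fix ideas on "effective dynamics for summary statistics", here is a toy sketch (illustrative only, not one of the lecture's models, and with assumed parameters): online SGD for a linear teacher-student problem, where the one-dimensional overlap statistic follows an autonomous ODE in rescaled time.

        import numpy as np

        rng = np.random.default_rng(2)
        d, c = 2000, 1.0
        delta = c / d                          # step size in the 1/d scaling regime

        w_star = np.zeros(d); w_star[0] = 1.0  # teacher vector
        w = rng.normal(size=d) / np.sqrt(d)    # random start, overlap of order 1/sqrt(d)

        m = [w @ w_star]                       # summary statistic m_t = <w_t, w*>
        for t in range(4 * d):                 # 4 units of rescaled time s = t/d
            x = rng.normal(size=d)             # fresh sample at each step (online SGD)
            w -= delta * (w @ x - w_star @ x) * x  # SGD step on the squared loss
            m.append(w @ w_star)

        # Effective dynamics: in time s = t/d, m concentrates on the solution of
        # dm/ds = c (1 - m), i.e. m(s) = 1 - (1 - m(0)) exp(-c s), whatever the
        # remaining d-1 coordinates do.
        print(m[-1], 1 - (1 - m[0]) * np.exp(-c * 4.0))  # empirical vs. ODE value
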
    • 17
      Equilibria in large Lotka-Volterra systems of ODE coupled by large random matrices 1/2

      Large Lotka-Volterra (LV) systems of coupled ODEs are a popular model for complex systems in interaction, in particular large ecological systems. Since the "real" coupling between the differential equations is in general out of reach, a coupling based on the realization of a large random matrix is often used in practice. Within this framework, we shall discuss the existence of an equilibrium, its stability, and its statistical properties, such as the proportion of non-vanishing components of the equilibrium. We will focus on non-Hermitian random matrix models such as Ginibre and elliptic matrices, and will show how techniques borrowed from Approximate Message Passing (AMP) enable us to capture the statistical properties of the equilibria.

      Subjects we intend to cover during these lectures:

      • Basic properties of non-Hermitian matrix models (circular law, elliptic model)
      • Approximate Message Passing for elliptic matrix models
      • A specific AMP algorithm to compute the equilibrium of a large LV system

      Joint work with I. Akjouj, Y. Gueddari, W. Hachem, M. Maïda (and others!).

      https://arxiv.org/abs/2302.07820
      https://arxiv.org/abs/2402.08271
      https://arxiv.org/abs/2212.06136

      Speaker: Jamal Najim
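
      As a complement, here is a minimal simulation sketch (assumed demo parameters; plain Euler integration rather than the AMP machinery discussed in the course) of a Lotka-Volterra system coupled by a Ginibre matrix:

        import numpy as np

        rng = np.random.default_rng(3)
        N, kappa, dt = 500, 0.5, 0.05  # kappa small enough for the dynamics to settle

        A = rng.normal(size=(N, N))    # Ginibre coupling matrix
        x = np.full(N, 0.5)            # initial abundances

        # Euler integration of dx_i/dt = x_i (1 - x_i + kappa (A x)_i / sqrt(N))
        for _ in range(20_000):
            x += dt * x * (1 - x + kappa * (A @ x) / np.sqrt(N))
            x = np.maximum(x, 0.0)     # abundances stay non-negative

        # Two statistics of the equilibrium studied in the course:
        print("proportion of non-vanishing components:", np.mean(x > 1e-8))
        print("mean abundance:", x.mean())
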
    • 16:15
      Coffee break
    • 18
      Large deviations for the largest eigenvalues of random matrices 2/3
      Speaker: Alice Guionnet
    • 19
      Random matrices and dynamics of optimization in very high dimensions 2/3
      Speaker: Gérard Ben Arous
    • 10:30
      Coffee break
    • 20
      Large deviations for the largest eigenvalues of random matrices 3/3
      Speaker: Alice Guionnet
    • 21
      Equilibria in large Lotka-Volterra systems of ODE coupled by large random matrices 2/2
      Speaker: Jamal Najim
    • 16:15
      Coffee break
    • 22
      Random matrices and dynamics of optimization in very high dimensions 3/3
      Speaker: Gérard Ben Arous