Summer school EUR MINT 2024 "Random Matrices and Free Probability"
from Monday 10 June 2024 (09:00)
to Tuesday 18 June 2024 (18:15)
Monday 10 June 2024
10:00 - 12:00
Introduction to free probability 1/4
Guillaume Cébron
Room: Amphithéâtre Schwartz
The aim of this course is to present the concept of free independence, the related central limit theorem, the notion of free cumulants, and the use of free independence to study large random matrices.
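For orientation, a minimal numerical sketch (our illustration, not part of the course materials) of the last point: two independent, randomly rotated large Hermitian matrices are asymptotically free, so the spectrum of A + UBU* approximates the free convolution of the spectra of A and B. With both spectra symmetric Bernoulli (eigenvalues +1 and -1 in equal proportion), that free convolution is the arcsine law on (-2, 2). All parameter choices below are ours.

import numpy as np

n = 1000
rng = np.random.default_rng(0)

# Haar-distributed unitary from the QR decomposition of a complex Ginibre matrix
z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
q, r = np.linalg.qr(z)
u = q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# A and B both have symmetric Bernoulli spectrum (+1 and -1, half each)
a = np.diag(np.repeat([1.0, -1.0], n // 2))
s = a + u @ a @ u.conj().T   # asymptotically free sum

eigs = np.linalg.eigvalsh(s)
hist, edges = np.histogram(eigs, bins=40, range=(-2, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
arcsine = 1.0 / (np.pi * np.sqrt(4.0 - centers**2))  # free convolution density
print("mean |histogram - arcsine density|:", np.mean(np.abs(hist - arcsine)))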
14:00 - 16:00
Introduction to random matrices 1/4
François Chapon
Room: Amphithéâtre Schwartz
16:00 - 16:30
Coffee break
Room: Amphithéâtre Schwartz
16:30 - 18:00
Introduction to free probability 2/4
Guillaume Cébron
Room: Amphithéâtre Schwartz
Tuesday 11 June 2024
10:00 - 12:30
Introduction to random matrices 2/4
François Chapon
Room: Amphithéâtre Schwartz
14:00 - 16:00
Introduction to free probability 3/4
Guillaume Cébron
Room: Amphithéâtre Schwartz
16:00 - 16:30
Coffee break
Room: Amphithéâtre Schwartz
16:30 - 18:00
Introduction to random matrices 3/4
François Chapon
Room: Amphithéâtre Schwartz
Wednesday 12 June 2024
10:00 - 12:30
Introduction to free probability 4/4
Guillaume Cébron
Room: Amphithéâtre Schwartz
12:30 - 14:00
Buffet
Room: Amphithéâtre Schwartz
Thursday 13 June 2024
10:00 - 12:00
Deformed matricial models and free probability theory 1/3
Mireille Capitaine
Room: Amphithéâtre Schwartz
Practical problems naturally lead one to wonder how the spectrum of a given random matrix reacts to a deterministic perturbation. For example, in signal theory, the deterministic perturbation is seen as the signal, the perturbed matrix is perceived as noise, and the question is whether observing the spectral properties of "signal plus noise" gives access to significant parameters of the signal. A typical illustration is the so-called BBP phenomenon (after Baik, Ben Arous, Péché), which exhibits outliers (eigenvalues that move away from the rest of the spectrum) and their Gaussian fluctuations for spiked covariance matrices. The aim of these lectures is to show how free probability theory sheds light on the spectral properties of deformed matricial models and provides a unified understanding of various phenomena.
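A minimal numerical sketch (our illustration, not the lecturer's material) of the BBP phenomenon for a rank-one additive perturbation of a Wigner matrix: for spike strength theta > 1 the top eigenvalue detaches from the bulk edge at 2 and sits near theta + 1/theta, while for theta < 1 it sticks to the edge. The parameter choices are ours.

import numpy as np

n = 1000
rng = np.random.default_rng(1)
g = rng.standard_normal((n, n))
w = (g + g.T) / np.sqrt(2 * n)   # GOE matrix, bulk spectrum filling [-2, 2]
v = np.ones(n) / np.sqrt(n)      # unit-norm spike direction

for theta in (0.5, 1.5, 3.0):
    top = np.linalg.eigvalsh(w + theta * np.outer(v, v))[-1]
    prediction = theta + 1.0 / theta if theta > 1 else 2.0
    print(f"theta={theta}: top eigenvalue {top:.3f}, predicted {prediction:.3f}")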
14:00 - 16:00
Introduction to random matrices 4/4
François Chapon
Room: Amphithéâtre Schwartz
16:00 - 16:30
Coffee break
Room: Amphithéâtre Schwartz
16:30 - 18:00
Deformed matricial models and free probability theory 2/3
Mireille Capitaine
Room: Amphithéâtre Schwartz
Friday 14 June 2024
09:30 - 11:00
Beta ensembles 1/3
Reda Chhaibi
Room: Amphithéâtre Schwartz
11:00 - 11:30
Coffee break
Room: Amphithéâtre Schwartz
11:30 - 12:30
Deformed matricial models and free probability theory 3/3
Mireille Capitaine
Room: Amphithéâtre Schwartz
14:00 - 15:30
Beta ensembles 2/3
Reda Chhaibi
Room: Amphithéâtre Schwartz
15:30 - 16:00
Coffee break
Room: Amphithéâtre Schwartz
16:00 - 17:30
Beta ensembles 3/3
Reda Chhaibi
Room: Amphithéâtre Schwartz
Saturday 15 June 2024
Sunday 16 June 2024
Monday 17 June 2024
09:00 - 10:30
Large deviations for the largest eigenvalues of random matrices 1/3
Alice Guionnet
Room: Amphithéâtre Schwartz
Estimating the probabilities of large deviations of the extreme eigenvalues of random matrices is necessary to estimate the volume of the minima of random functions. In general this is a difficult question, as the law of these eigenvalues is not explicit. In this course, we will discuss the known results in this field and the different methods used to obtain them, as well as open problems. No knowledge of large deviation theory is required.
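A naive Monte Carlo sketch (ours, with all choices of sizes and thresholds ours) of the phenomenon under study: for fixed x > 2, the probability P(lambda_max >= x) for an n x n GOE matrix decays exponentially in n, so -log(P)/n gives a crude estimate of the rate. Direct sampling only reaches moderately rare events, which is one reason sharper large deviation techniques are needed.

import numpy as np

rng = np.random.default_rng(2)
x = 2.1          # fixed threshold above the bulk edge 2
trials = 4000

for n in (20, 40, 80):
    hits = 0
    for _ in range(trials):
        g = rng.standard_normal((n, n))
        w = (g + g.T) / np.sqrt(2 * n)   # GOE normalized so the bulk fills [-2, 2]
        hits += np.linalg.eigvalsh(w)[-1] >= x
    if hits:
        p = hits / trials
        print(f"n={n}: P(lambda_max >= {x}) ~ {p:.4f}, -log(P)/n ~ {-np.log(p)/n:.4f}")
    else:
        print(f"n={n}: no exceedances in {trials} trials")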
10:30 - 11:00
Coffee break
Room: Amphithéâtre Schwartz
11:00 - 12:30
Random matrices and dynamics of optimization in very high dimensions 1/3
Gérard Ben Arous
Room: Amphithéâtre Schwartz
Machine learning and data science algorithms require efficient optimization of topologically complex random functions in very high dimensions. Surprisingly, simple algorithms like Stochastic Gradient Descent (with small batches) are used very effectively. I will concentrate on trying to understand why these simple tools can still work in these complex and heavily over-parametrized regimes. I will first introduce the whole framework for non-experts, from the structure of the typical tasks to the natural structures of simple neural nets used in standard contexts. I will then briefly cover the classical and usual context of SGD in finite dimensions. I will then survey recent work with Reza Gheissari (Northwestern) and Aukosh Jagannath (Waterloo) giving a general view of the existence of projected "effective dynamics" for "summary statistics" in much smaller dimensions, which still rule the performance of very high-dimensional systems. These effective dynamics define a dynamical system in finite dimensions which may be quite complex, and which rules the performance of the learning algorithm. The next step is to understand how the system finds these low-dimensional "summary statistics". RMT enters the game for this next step (which is done in subsequent works with the same authors and with Jiaoyang Huang (Wharton, U-Penn)). This is based on a dynamical spectral transition: along the optimization path, the Gram matrix or the Hessian matrix develops BBP outliers which carry these effective dynamics. I will illustrate the use of this point of view on a few central examples of ML: multilayer neural nets for classification (of Gaussian mixtures) and the XOR example, for instance.
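A static illustration (ours, much simpler than the dynamical transition described above) of the underlying mechanism: for data drawn from a two-component Gaussian mixture +/-mu, the sample covariance (Gram) matrix develops a BBP outlier whose top eigenvector aligns with the hidden class direction mu, precisely the kind of low-dimensional summary statistic the dynamics must find. All parameters below are ours.

import numpy as np

d, m = 500, 2000
rng = np.random.default_rng(3)
mu = np.zeros(d)
mu[0] = 2.0                                   # hidden class direction
labels = rng.choice([-1.0, 1.0], size=m)
x = labels[:, None] * mu[None, :] + rng.standard_normal((m, d))

cov = x.T @ x / m                             # sample covariance / Gram matrix
vals, vecs = np.linalg.eigh(cov)
bulk_edge = (1 + np.sqrt(d / m)) ** 2         # Marchenko-Pastur upper edge
align = abs(vecs[:, -1] @ (mu / np.linalg.norm(mu)))
print(f"top eigenvalue {vals[-1]:.2f} vs bulk edge {bulk_edge:.2f}, alignment {align:.2f}")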
14:00 - 16:15
Equilibria in large Lotka-Volterra systems of ODE coupled by large random matrices 1/2
Jamal Najim
Room: Amphithéâtre Schwartz
Large Lotka-Volterra (LV) systems of coupled ODEs are a popular model for complex systems in interaction, in particular large ecological systems. Since the "real" coupling between the differential equations is in general out of reach, a coupling based on the realization of a large random matrix is often used in practice. Within this framework, we shall discuss the existence of an equilibrium, its stability, and its statistical properties, such as the proportion of non-vanishing components of the equilibrium. We will focus on non-Hermitian random matrix models such as Ginibre and elliptic matrices and will show how techniques borrowed from Approximate Message Passing (AMP) enable us to capture the statistical properties of the equilibria. Subjects we intend to cover during these lectures:
- Basic properties of non-Hermitian matrix models (circular law, elliptic model)
- Approximate Message Passing for elliptic matrix models
- A specific AMP algorithm to compute the equilibrium of a large LV system
Joint work with I. Akjouj, Y. Gueddari, W. Hachem, M. Maïda (and others!).
https://arxiv.org/abs/2302.07820
https://arxiv.org/abs/2402.08271
https://arxiv.org/abs/2212.06136
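A minimal sketch (assumptions entirely ours: logistic self-regulation, Ginibre coupling of strength alpha/sqrt(n), explicit Euler time-stepping) of such a random LV system; it integrates the ODE to equilibrium and reports the proportion of non-vanishing components.

import numpy as np

n, alpha = 200, 0.6
rng = np.random.default_rng(4)
a = rng.standard_normal((n, n)) / np.sqrt(n)   # Ginibre interaction matrix
np.fill_diagonal(a, 0.0)

x = np.full(n, 0.5)                            # initial abundances
dt = 0.05
for _ in range(20_000):                        # explicit Euler up to time 1000
    # dx_i/dt = x_i * (1 - x_i + alpha * (A x)_i); clamp guards against overshoot
    x = np.maximum(x + dt * x * (1.0 - x + alpha * (a @ x)), 0.0)

residual = np.max(np.abs(x * (1.0 - x + alpha * (a @ x))))
print("proportion of surviving species (x_i > 1e-6):", np.mean(x > 1e-6))
print("largest residual of the equilibrium equations:", residual)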
16:15 - 16:45
Coffee break
Room: Amphithéâtre Schwartz
16:45 - 18:15
Large deviations for the largest eigenvalues of random matrices 2/3
Alice Guionnet
Room: Amphithéâtre Schwartz
Tuesday 18 June 2024
09:00 - 10:30
Random matrices and dynamics of optimization in very high dimensions 2/3
Gérard Ben Arous
Room: Amphithéâtre Schwartz
10:30 - 11:00
Coffee break
Room: Amphithéâtre Schwartz
11:00 - 12:30
Large deviations for the largest eigenvalues of random matrices 3/3
Alice Guionnet
Room: Amphithéâtre Schwartz
14:00 - 16:15
Equilibria in large Lotka-Volterra systems of ODE coupled by large random matrices 2/2
Jamal Najim
Room: Amphithéâtre Schwartz
16:15 - 16:45
Coffee break
Room: Amphithéâtre Schwartz
16:45 - 18:15
Random matrices and dynamics of optimization in very high dimensions 3/3
Gérard Ben Arous
Room: Amphithéâtre Schwartz