
Date: December 1st, 2021

Place: Institut de Mathématiques de Toulouse (IMT), Amphithéâtre Schwartz



This conference will be part of a series of four similar sessions dedicated to the interactions of AI with other branches of mathematics.

The goal of the session will be to outline some interactions between dynamical systems and AI. On one side, we will see how AI performs in studying geometric or topological objects; on the other side, we will hear about techniques that use geometric or topological ideas to improve machine learning.

The speakers will be:

  • Peter Ashwin
  • Nathanaël Fijalkow
  • Panayotis Mertikopoulos
  • Juan-Pablo Ortega Lahuerta

A tentative schedule of the day is:

  • 9:30 am to 10:30 am: Juan-Pablo Ortega Lahuerta, "Reservoir Computing and the Learning of Dynamic Processes"
  • 10:30 am to 11:00 am: Coffee break
  • 11:00 am to 12:00 pm: Panayotis Mertikopoulos, "Optimization, games, and dynamical systems"
  • 12:00 pm to 2:00 pm: Lunch break
  • 2:00 pm to 3:00 pm: Nathanaël Fijalkow, "Program synthesis and learning"
  • 3:00 pm to 3:30 pm: Coffee break
  • 3:30 pm to 4:30 pm: Peter Ashwin, "Computational properties of network attractors"



Titles and abstracts

Computational properties of network attractors (Peter Ashwin)

Abstract:    In this talk I will explore models for the nonlinear dynamics of input-driven computation that make use of attractors consisting of excitable or heteroclinic connections between invariant sets that may themselves be attractors or saddles. We explore such structures in the abstract as well as in specific examples where they can arise. It is possible to embed approximations of arbitrarily complex finite state machine behaviour using network attractors. One can also use network attractors to understand function and malfunction of recurrent neural networks trained to perform simple tasks that require some level of memory.

Program synthesis and learning (Nathanaël Fijalkow)

Abstract:    For about a decade now, machine learning methods have been very much in vogue for program synthesis. I will discuss what the terms in the previous sentence cover, and then explain the challenges of putting these ideas into practice. Concretely, one has to explore a chaotic dynamical system, described by the set of programs and their predictions, with the help of a probabilistic context-free grammar.
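
The exploration of a space of programs via a probabilistic context-free grammar can be illustrated with a toy example: sampling candidate arithmetic programs top-down from a small PCFG. This is a minimal sketch, not the speaker's actual system; the grammar, its probabilities, and the evaluation at x = 2 are all invented for illustration.

```python
import random

random.seed(0)

# A toy probabilistic context-free grammar over arithmetic expressions:
# each non-terminal maps to a list of (probability, right-hand side) rules.
PCFG = {
    "E": [(0.4, ["x"]),
          (0.2, ["1"]),
          (0.2, ["(", "E", "+", "E", ")"]),
          (0.2, ["(", "E", "*", "E", ")"])],
}

def sample(symbol="E"):
    """Sample a program (as a string) top-down from the grammar."""
    rules = PCFG.get(symbol)
    if rules is None:
        return symbol                      # terminal symbol
    r = random.random()
    for p, rhs in rules:
        r -= p
        if r < 0:
            break
    return "".join(sample(s) for s in rhs)

# Draw a few candidate programs and evaluate them at x = 2.
for _ in range(5):
    prog = sample()
    print(prog, "=", eval(prog, {"x": 2}))
```

Since the expected number of non-terminals per expansion is below one, sampling terminates with probability one; a real synthesizer would bias these probabilities using learned predictions.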

Reservoir Computing and the Learning of Dynamic Processes (Juan-Pablo Ortega Lahuerta)

Abstract:    Dynamic processes regulate the behaviour of virtually any artificial and biological agent, from stock markets to epidemics, from driverless cars to healthcare robots. The problem of modeling, forecasting, and, generally speaking, learning dynamic processes is one of the most classical, sophisticated, and strategically significant problems in the natural and the social sciences. In this talk we shall discuss both classical and recent results on the modeling and learning of dynamical systems and input/output systems using an approach generically known as reservoir computing. This information processing framework is characterized by the use of cheap-to-train randomly generated state-space systems, for which promising high-performance physical realizations with dedicated hardware have been proposed in recent years. In our presentation we shall place special emphasis on the approximation properties of these constructions.
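
As a concrete illustration of the "cheap-to-train randomly generated state-space systems" mentioned in the abstract, here is a minimal echo state network sketch on a toy one-step-ahead sine prediction task. The reservoir size, scaling factors, and ridge parameter are illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a sine wave.
T = 500
u = np.sin(0.2 * np.arange(T + 1))
inputs, targets = u[:-1], u[1:]

# Randomly generated state-space system (echo state network):
# input weights and recurrent weights are fixed at random.
n = 100
W_in = rng.uniform(-0.5, 0.5, size=n)
W = rng.normal(size=(n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

# Drive the reservoir with the input and record its states.
x = np.zeros(n)
states = np.empty((T, n))
for t in range(T):
    x = np.tanh(W @ x + W_in * inputs[t])
    states[t] = x

# Only the linear readout is trained (ridge regression) -- this is
# what makes reservoir computers cheap to train.
washout = 100
X, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ y)

pred = states @ W_out
print(f"test MSE: {np.mean((pred[washout:] - y) ** 2):.2e}")
```

The washout discards the transient before the reservoir state has synchronized with the input, after which the trained readout tracks the target closely.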

Optimization, games, and dynamical systems (Panayotis Mertikopoulos)

Abstract:    This talk aims to survey the triple-point interface between optimization, game theory, and dynamical systems with a view towards their applications to machine learning and data science. We will begin by discussing how the ordinary differential equation (ODE) method of stochastic approximation can be used to analyze the trajectories of a wide array of stochastic first-order algorithms in non-convex minimization problems and games. The key notion here is that of an internally chain transitive (ICT) set: in minimization problems, ICT sets correspond to the problem's components of critical points, and we discuss a range of conditions guaranteeing convergence to minimizers while avoiding unstable saddle points. Similar results can also be derived for min-max problems and games: unstable stationary points are avoided and stable Nash equilibria are attracting with probability $1$ (or with arbitrarily high probability in the local case). However, despite these encouraging results, the overall situation can be considerably more involved, and the sequence of play may converge with arbitrarily high probability to lower-dimensional manifolds that are in no way unilaterally stable (or even stationary). "Spurious attractors" of this type can arise even in simple two-player zero-sum games with one-dimensional action sets and polynomial losses of degree 4, a fact which highlights the fundamental gap between minimization problems and games.
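
The gap between minimization and games noted at the end of the abstract shows up already in a simpler, classical phenomenon (not one of the speaker's degree-4 examples): on the bilinear problem min_x max_y xy, simultaneous gradient descent-ascent spirals away from the equilibrium at the origin, while an extragradient (lookahead) step restores convergence. A small numerical sketch, with step size and iteration count chosen arbitrarily:

```python
import numpy as np

# min-max objective f(x, y) = x * y; the unique equilibrium is (0, 0).
# grad_x f = y, grad_y f = x.
eta = 0.1

# Simultaneous gradient descent-ascent spirals away from the equilibrium...
x, y = 1.0, 1.0
for _ in range(200):
    x, y = x - eta * y, y + eta * x
gda_dist = np.hypot(x, y)

# ...while the extragradient method (a half-step lookahead) converges.
x, y = 1.0, 1.0
for _ in range(200):
    xh, yh = x - eta * y, y + eta * x    # lookahead step
    x, y = x - eta * yh, y + eta * xh    # update with lookahead gradients
eg_dist = np.hypot(x, y)

print(f"GDA distance from equilibrium:           {gda_dist:.3e}")
print(f"extragradient distance from equilibrium: {eg_dist:.3e}")
```

In continuous time both follow the same ODE, whose orbits circle the equilibrium; the discretization error pushes GDA outward and the lookahead pulls extragradient inward, which is exactly the kind of discrepancy the stochastic-approximation viewpoint in the talk is designed to analyze.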

Amphithéâtre Schwartz
IMT, Université Paul Sabatier
Registration for this event is currently open.