BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CERN//INDICO//EN
BEGIN:VEVENT
SUMMARY:TOUTELIA 2021 : Dynamics
DTSTART;VALUE=DATE-TIME:20211201T080000Z
DTEND;VALUE=DATE-TIME:20211201T164500Z
DTSTAMP;VALUE=DATE-TIME:20220924T162400Z
UID:indico-event-7133@indico.math.cnrs.fr
CONTACT:mathieu.sablik@math.univ-toulouse.fr
DESCRIPTION:Date: December 1st\, 2021\n\nPlace: Institut de Mathématiqu
es de Toulouse (IMT)\, Amphithéâtre Schwartz\n\nThis confe
rence will be part of a series of four similar sessions dedicated to inter
actions of AI with other branches of mathematics. \n\nThe goal of the sess
ion will be to outline some interactions between dynamical systems and AI.
On one side we will see how AI performs in studying geometric or topologi
cal objects. On the other side we will hear about techniques using geometri
c or topological ideas to improve machine learning.\n\nThe speakers will b
e :\n\n\n Peter Ashwin (University of Exeter)\n Nathanaël Fijalkow (Unive
rsité de Bordeaux)\n Juan-Pablo Ortega Lahuerta (Nanyang Technological Un
iversity)\n Panayotis Mertikopoulos (Université de Grenoble)\n\n\n \n\nA
tentative schedule of the day is:\n\n\n 9:30 am to 10:30 am: Juan-Pablo O
rtega Lahuerta Reservoir Computing and the Learning of Dynamic Processes\
n 10:30 am to 11:00 am: Coffee break\n 11:00 am to 12:00 pm: Panayotis Mer
tikopoulos Optimization\, games\, and dynamical systems\n 12:00 pm to 2:00
pm: Lunch break\n 2:00 pm to 3:00 pm: Nathanaël Fijalkow Program synthes
is and learning\n 3:00 pm to 3:30 pm: Coffee break\n 3:30 pm to
4:30 pm: Peter Ashwin Computational properties of network attractors\n\n\n
-----------------------------------------\n\n \nList of titles and abstra
cts.\n\n\nComputational properties of network attractors (Peter Ashwin)\n\
nAbstract: In this talk I will explore models for the nonlinear dyna
mics of input-driven computation that makes use of attractors consisting o
f excitable or heteroclinic connections between invariant sets that may th
emselves be attractors or saddles. We explore such structures in the abst
ract as well as in specific examples where they can arise. It is possible to embed
approximations of arbitrarily complex finite state machine behaviour usin
g network attractors. One can also use network attractors to understand fu
nction and malfunction of recurrent neural networks trained to perform sim
ple tasks that require some level of memory.\n\nProgram synthesis and le
arning (Nathanaël Fijalkow)\n\nAbstract: For about the last ten years\, lear
ning methods have been very much in vogue for program synthesis.\nI will dis
cuss what the terms of the previous sentence cover\, and then explain the ch
allenges involved in putting these ideas into practice. Concretely\, we wil
l have to explore a chaotic dynamical system\, described by the set of prog
rams and their predictions\, with the help of a probabilistic context-free g
rammar.\n
\nReservoir Computing and the Learning of Dynamic Processes (Juan-Pablo Or
tega Lahuerta)\n\nAbstract: Dynamic processes regulate the behaviour
of virtually any artificial and biological agent\, from stock markets to
epidemics\, from driverless cars to healthcare robots. The problem of mode
ling\, forecasting\, and generally speaking learning dynamic processes is
one of the most classical\, sophisticated\, and strategically significant
problems in the natural and the social sciences. In this talk we shall dis
cuss both classical and recent results on the modeling and learning of dyn
amical systems and input/output systems using an approach generically know
n as reservoir computing. This information processing framework is charact
erized by the use of cheap-to-train randomly generated state-space systems
for which promising high-performance physical realizations with dedicated
hardware have been proposed in recent years. In our presentation we shall
put special emphasis on the approximation properties of these construct
ions.\n\nOptimization\, games\, and dynamical systems (Panayotis Mertikopo
ulos)\n\nAbstract: This talk aims to survey the triple-point interfa
ce between optimization\, game theory\, and dynamical systems with a view
towards their applications to machine learning and data science. We will b
egin by discussing how the ordinary differential equation (ODE) method of
stochastic approximation can be used to analyze the trajectories of a wide
array of stochastic first-order algorithms in non-convex minimization pro
blems and games. The key notion here is that of an internally chain transi
tive (ICT) set: in minimization problems\, ICT sets correspond to the prob
lem's components of critical points\, and we discuss a range of conditions
guaranteeing convergence to minimizers while avoiding unstable saddle poi
nts. Similar results can also be derived for min-max problems and games: u
nstable stationary points are avoided and stable Nash equilibria are attra
cting with probability $1$ (or with arbitrarily high probability in the lo
cal case). However\, despite these encouraging results\, the overall situa
tion can be considerably more involved\, and the sequence of play may conv
erge with arbitrarily high probability to lower-dimensional manifolds that
are in no way unilaterally stable (or even stationary). "Spurious attract
ors" of this type can arise even in simple two-player zero-sum games with
one-dimensional action sets and polynomial losses of degree 4\, a fact whi
ch highlights the fundamental gap between minimization problems and games.
\n\nhttps://indico.math.cnrs.fr/event/7133/
LOCATION:Amphithéâtre Schwartz
URL:https://indico.math.cnrs.fr/event/7133/
END:VEVENT
END:VCALENDAR