BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CERN//INDICO//EN
BEGIN:VEVENT
SUMMARY:TOUTELIA 2021 : Dynamics
DTSTART:20211201T080000Z
DTEND:20211201T164500Z
DTSTAMP:20230925T131400Z
UID:indico-event-7133@indico.math.cnrs.fr
CONTACT:mathieu.sablik@math.univ-toulouse.fr
DESCRIPTION:Speakers: Mathieu Sablik (Institut de Mathématiques de Toulouse)\n\nDate: December 1st\, 2021\n\nPlace: Institut de Mathématiques de Toulouse (IMT)\, Amphithéâtre Schwartz\n\nThis conference will be part of a series of four similar sessions dedicated to interactions of AI with other branches of mathematics.\n\nThe goal of the session will be to outline some interactions between dynamical systems and AI. On one side we will see how AI performs in studying geometric or topological objects. On the other side we will hear about techniques using geometric or topological ideas to improve machine learning.\n\nThe speakers will be:\n
\n\n Peter Ashwin (University of Exeter)\n Nathanaël Fijalkow (Universit
é de Bordeaux)\n Juan-Pablo Ortega Lahuerta (Nanyang Technological Univer
 sity)\n Panayotis Mertikopoulos (Université de Grenoble)\n\nA tentative schedule of the day is:\n\n 9:30 am to 10:30 am: Juan-Pablo Ortega Lahuerta\, Reservoir Computing and the Learning of Dynamic Processes\n 10:30 am to 11:00 am: Coffee break\n 11:00 am to 12:00 pm: Panayotis Mertikopoulos\, Optimization\, games\, and dynamical systems\n 12:00 pm to 2:00 pm: Lunch break\n 2:00 pm to 3:00 pm: Nathanaël Fijalkow\, Program synthesis and learning\n 3:00 pm to 3:30 pm: Coffee break\n 3:30 pm to 4:30 pm: Peter Ashwin\, Computational properties of network attractors\n\n-----------------------------------------\n\nList of titles and abstracts.
 \n\nComputational properties of network attractors (Peter Ashwin)\n\nAbstract: In this talk I will explore models for the nonlinear dynamics of input-driven computation that makes use of attractors consisting of excitable or heteroclinic connections between invariant sets that may themselves be attractors or saddles. We explore such structures both in the abstract and in specific examples where they can arise. It is possible to embed approximations of arbitrarily complex finite-state machine behaviour using network attractors. One can also use network attractors to understand function and malfunction of recurrent neural networks trained to perform simple tasks that require some level of memory.\n\nProgram synthesis and learning (Nathanaël Fijalkow)\n\nAbstract: For about the past ten years\, learning methods have been very much in vogue for program synthesis. I will discuss what the terms of the previous sentence cover\, and then explain the challenges involved in putting these ideas into practice. Concretely\, this will require exploring a chaotic dynamical system\, described by the set of programs and their predictions\, with the help of a probabilistic context-free grammar.\n\nRe
 servoir Computing and the Learning of Dynamic Processes (Juan-Pablo Ortega Lahuerta)\n\nAbstract: Dynamic processes regulate the behaviour of virtually any artificial and biological agent\, from stock markets to epid
emics\, from driverless cars to healthcare robots. The problem of modeling
\, forecasting\, and generally speaking learning dynamic processes is one
of the most classical\, sophisticated\, and strategically significant prob
lems in the natural and the social sciences. In this talk we shall discuss
both classical and recent results on the modeling and learning of dynamic
al systems and input/output systems using an approach generically known as
reservoir computing. This information processing framework is characteriz
ed by the use of cheap-to-train randomly generated state-space systems for
which promising high-performance physical realizations with dedicated har
 dware have been proposed in recent years. In our presentation we shall place special emphasis on the approximation properties of these constructions
.\n\nOptimization\, games\, and dynamical systems (Panayotis Mertikopoulos
 )\n\nAbstract: This talk aims to survey the triple-point interface b
etween optimization\, game theory\, and dynamical systems with a view towa
rds their applications to machine learning and data science. We will begin
by discussing how the ordinary differential equation (ODE) method of stoc
hastic approximation can be used to analyze the trajectories of a wide arr
ay of stochastic first-order algorithms in non-convex minimization problem
s and games. The key notion here is that of an internally chain transitive
(ICT) set: in minimization problems\, ICT sets correspond to the problem'
s components of critical points\, and we discuss a range of conditions gua
ranteeing convergence to minimizers while avoiding unstable saddle points.
Similar results can also be derived for min-max problems and games: unsta
ble stationary points are avoided and stable Nash equilibria are attractin
g with probability $1$ (or with arbitrarily high probability in the local
case). However\, despite these encouraging results\, the overall situation
can be considerably more involved\, and the sequence of play may converge
with arbitrarily high probability to lower-dimensional manifolds that are
in no way unilaterally stable (or even stationary). "Spurious attractors"
of this type can arise even in simple two-player zero-sum games with one-
dimensional action sets and polynomial losses of degree 4\, a fact which h
ighlights the fundamental gap between minimization problems and games.\n\n
https://indico.math.cnrs.fr/event/7133/
LOCATION:Amphithéâtre Schwartz
URL:https://indico.math.cnrs.fr/event/7133/
END:VEVENT
END:VCALENDAR