GDR Mascot-Num : Workshop on Physics Informed Learning

Amphi Schwartz (IMT)



Université Paul Sabatier, 118 Route de Narbonne, 31000 Toulouse, France
Olivier Roustant (IMT), Sébastien Da Veiga

The workshop aims at presenting diverse and complementary viewpoints on how to integrate information from physics inside statistical models.

It is organized by Sébastien Da Veiga (ENSAI Rennes) and Olivier Roustant (INSA Toulouse) for the GdR MascotNum.

This is a face-to-face event. For people who cannot attend in person, remote attendance by video conference is possible using this zoom link

Schedule: see "ordre du jour" in the menu on the left.

Speakers' slides are now online (menu "ordre du jour", top right corners, or menu "Liste des contributions").

Confirmed speakers

  • Nathan DOUMECHE (Sorbonne Université)
  • Emmanuel FRANCK (INRIA)
  • Iain HENDERSON (Institut de Mathématiques de Toulouse)
  • Vincent LE GUEN (EDF R&D)
  • Lukas NOVAK (Brno University of Technology)
  • Paul NOVELLO (IRT Saint-Exupéry)
  • Philipp TRUNSCHKE (Université de Nantes)


  • Abdoul Razac Sané
  • Adama Barry
  • Amir Zoet
  • Anthony Nouy
  • Antoine Gomond
  • Babacar SOW
  • Baptiste Kerleguer
  • Ben-Mekki AYADI
  • Benjamin Larvaron
  • Chakir Tajani
  • Christian Gogu
  • Claire Cannamela
  • Clément Lejeune
  • Clément Targe
  • Clémentine PRIEUR
  • Corentin Friedrich
  • Cécile Haberstich
  • Damien Bonnet Eymard
  • Daniel Busby
  • Daouda PENE
  • David Gaudrie
  • David Métivier
  • Didier Lucor
  • Edern Menou
  • Edgar Jaber
  • Elodie NOELE
  • Eric FOCK
  • Eric Savin
  • Erwan Viala
  • Flore Molenda
  • Florian Gossard
  • Frederic Couderc
  • Frédéric Allaire
  • Gabriel Depaillat
  • Guillaume Perrin
  • Hadi NASSER
  • Hamza Belkarkor
  • Hugo BOULENC
  • Inês da Costa Cardoso
  • Jason Beh
  • Jean Demange
  • Jean-Marc Bourinet
  • Jean-Paul Travert
  • Jerome Monnier
  • Jerome Morio
  • Joel Soffo
  • Jules-Edouard Denis
  • Julien Demange-Chryst
  • Julien Pelamatti
  • Khadidja Sabri
  • Krisztina Sinkovics
  • Laurent Desmet
  • Laury-Ann Boucherit
  • Loïc Patigny
  • Lucia Clarotto
  • Ludovic BARRIERE
  • Marie Haghebaert
  • Marine Dumon
  • Marlon Botte
  • Matheus Gonçalves
  • Mathieu Ducros
  • Mathilde Mougeot
  • Matthias De Lozzo
  • Matthieu Degeiter
  • Melanie Ducoffe
  • Merveille Talla
  • Michaël Zamo
  • Michele Alessandro BUCCI
  • Mickael Binois
  • Morlier Joseph
  • Mouhamed DIOP
  • Mouhcine Mendil
  • Mustapha Allabou
  • Nabir Mamnun
  • Nassim Razaaly
  • Nathan Ricard
  • Nicolas Bousquet
  • Nicolas Jouvin
  • Nicolas Lamarque
  • Noura Dridi
  • Olivier Dupont
  • Olivier Flebus
  • Olivier Léon
  • Olivier Pannekoucke
  • Olivier Roustant
  • Olivier Sapin
  • Pascal Noble
  • Paul Mycek
  • Paul SAVES
  • Pauline Rotach
  • Pierre-Louis Antonsanti
  • Rachid El Montassir
  • Reza Allahvirdizadeh
  • Rodolphe Le Riche
  • Romain Jorge Do Marco
  • Rowan Kearney-Lunch
  • Sidonie Lefebvre
  • Sofiane Haddad
  • Sébastien Da Veiga
  • Tamadur Albaraghtheh
  • Thi Nguyen Khoa Nguyen
  • Thierry Gonon
  • Thomas Houret
  • Thomas Mullor
  • Tittarelli Roberta
  • Vincent ROCHER
  • Virgile Foy
  • Wilfried Genuist
  • William PIAT
  • Xavier Roynard
  • Yanfei Xiang
  • Yasmine Hawwari
  • Yuri SÁO
  • Zhongliang LI
  • Monday, December 4
    • 1
      Some statistical insights into PINNs

      Physics-informed neural networks (PINNs) combine the expressiveness of neural networks with the interpretability of physical modeling. Their good practical performance has been demonstrated both in the context of solving partial differential equations and in the context of hybrid modeling, which consists of combining an imperfect physical model with noisy observations. However, most of their theoretical properties remain to be established. We offer some statistical guidelines for the proper use of PINNs.

      Speaker: Nathan DOUMECHE (Sorbonne Université)
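The composite objective at the heart of hybrid PINN training can be sketched in a few lines. The toy example below (ours, not the speaker's) sums a data misfit and a PDE residual for the scalar ODE u'(x) = u(x), using a central finite difference in place of automatic differentiation so the sketch stays dependency-free:

```python
import numpy as np

def pinn_style_loss(u, x_obs, y_obs, x_col, h=1e-4):
    """Composite PINN-style loss: data misfit + PDE residual."""
    # Data misfit at (possibly noisy) observation points.
    data_loss = np.mean((u(x_obs) - y_obs) ** 2)
    # Residual of u'(x) - u(x) = 0 at collocation points; the central
    # finite difference stands in for automatic differentiation.
    du = (u(x_col + h) - u(x_col - h)) / (2 * h)
    pde_loss = np.mean((du - u(x_col)) ** 2)
    return data_loss + pde_loss

# The exact solution u(x) = exp(x) makes both terms (nearly) vanish.
x_obs = np.linspace(0.0, 1.0, 5)
x_col = np.linspace(0.0, 1.0, 50)
loss = pinn_style_loss(np.exp, x_obs, np.exp(x_obs), x_col)
```

In an actual PINN, `u` would be a neural network and the loss would be minimized over its weights; the relative weighting of the two terms is one of the points where statistical guidance matters.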
    • 2
      Towards instance-dependent approximation guarantees for scientific machine learning using Lipschitz neural networks

      Neural networks are increasingly used in scientific computing. Indeed, once trained, they can approximate highly complex, non-linear, high-dimensional functions with significantly reduced computational overhead compared to traditional simulation codes based on finite-difference methods. However, unlike conventional simulations, whose error can be controlled, neural networks are statistical, data-driven models for which no approximation error guarantee can be inherently provided. This limitation hinders the use of neural networks on par with finite-element simulation codes in scientific computing. In this presentation, we show how to leverage the Lipschitz property of Lipschitz neural networks to establish strict post-training, instance-dependent error bounds given a set of validation points. We show how to derive error bounds using Voronoï diagrams for a Lipschitz neural network approximating a K-Lipschitz function, taking advantage of recent parallel algorithms. Yet, in most scientific computing applications, the Lipschitz constant of the target function remains unknown. We therefore explore strategies to adapt and extend these bounds to the case of an unknown Lipschitz constant, and illustrate them on simple physical simulation test cases.

      Speaker: Paul NOVELLO (IRT Saint-Exupéry)
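The flavor of such post-training bounds can be made concrete with a minimal 1D construction (our own illustration, not the talk's algorithm): if the target f is K-Lipschitz and the network is L-Lipschitz, the error at any point x is bounded by the observed error at a validation point plus (K + L) times the distance to it, and taking the minimum over validation points tightens the bound; the Voronoï machinery mentioned above serves to find that nearest validation point efficiently in higher dimension.

```python
import numpy as np

def lipschitz_error_bound(x, x_val, err_val, K_target, L_net):
    """Pointwise post-training bound from validation errors.

    For each validation point x_v with observed error err_val(x_v):
        |f(x) - net(x)| <= err_val(x_v) + (K_target + L_net) * |x - x_v|,
    valid when f is K_target-Lipschitz and the net is L_net-Lipschitz.
    """
    bounds = err_val + (K_target + L_net) * np.abs(x - x_val)
    return bounds.min()

# Toy check: net(x) = 0.9 x approximating the 1-Lipschitz f(x) = x.
x_val = np.linspace(0.0, 1.0, 11)
err_val = np.abs(x_val - 0.9 * x_val)        # observed validation errors
b = lipschitz_error_bound(0.55, x_val, err_val, K_target=1.0, L_net=0.9)
true_err = abs(0.55 - 0.9 * 0.55)            # bound must dominate this
```

When the constant K_target is unknown, as in the last part of the abstract, it must itself be estimated or bounded, which is where the adaptation strategies come in.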
    • 3:30 PM – 3:45 PM
      Coffee break
    • 3
      Physics-informed Gaussian process regression: theory and applications

      Gaussian process regression (GPR) is the Bayesian formulation of kernel regression methods used in machine learning. This method may be used to treat regression problems stemming from physical models, the latter typically taking the form of partial differential equations (PDEs).

      In this presentation, we study the question of the design of GPR methods in relation to a target PDE model. We first provide several necessary and sufficient conditions describing how to rigorously impose certain physical constraints (explicitly, the distributional PDE constraint if the PDE is linear, and the control of the W^{m,p} Sobolev energy norm) on the realizations of a given Gaussian process. These results only involve the kernel of the Gaussian process.

      We then provide a simple application test case, with the estimation of the solution of the 3D wave equation (central in acoustics), as well as the estimation of the physical parameters attached to this PDE. We conclude with some outlooks concerning the design of finite difference schemes for solving PDEs, as well as the case of nonlinear PDEs.

      These results are joint work with Pascal Noble (IMT/INSA) and Olivier Roustant (IMT/INSA), funded by the Service Hydrographique et Océanographique de la Marine (SHOM).

      Speaker: Iain HENDERSON (Institut de Mathématiques de Toulouse)
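As a reminder of the baseline this talk builds on, a plain (unconstrained) GP regression posterior mean takes only a few lines; the physics-informed versions above then restrict the kernel so that the realizations satisfy the PDE constraint. The kernel and test problem below are our own illustrative choices, not the talk's:

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel; the kernel's smoothness governs
    which Sobolev norms of the GP realizations are controlled."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gpr_posterior_mean(x_train, y_train, x_test, noise=1e-6):
    """Zero-mean 1D GP regression posterior mean (the Bayesian form
    of kernel regression mentioned in the abstract)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_test, x_train)
    return k_star @ np.linalg.solve(K, y_train)

# Interpolation check on a smooth function: mean(0.25) ~ sin(pi/2) = 1.
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train)
mean = gpr_posterior_mean(x_train, y_train, np.array([0.25]))
```

Imposing a linear PDE constraint amounts to replacing `rbf` by a kernel built so that every sample path solves the PDE, which is exactly the kind of condition characterized in the talk.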
    • 4
      Physics-informed polynomial chaos expansion

      Surrogate modeling of costly mathematical models representing physical systems is challenging, since it is necessary to fulfill physical constraints over the whole design domain together with the specific boundary conditions of the investigated systems. Moreover, it is typically not possible to create a large experimental design covering the whole input space, due to the computational burden of the original models. There has therefore recently been considerable interest in developing surrogate models capable of satisfying physical constraints, spawning an entirely new field of physics-informed machine learning. This lecture presents a recently introduced methodology for the construction of a physics-informed polynomial chaos expansion (PC2) that combines the conventional experimental design with additional constraints from the physics of the model. Physical constraints in PC2 can be represented by a set of differential equations and specified boundary conditions, allowing the surrogate model to be constructed more accurately with fewer physics-based model evaluations. Although the main purpose of PC2 lies in combining data and physical constraints, it is also possible to construct the surrogate model from differential equations and boundary conditions alone, without requiring evaluations of the original model. A significant advantage of surrogate models in the form of polynomial chaos expansions is their suitability for uncertainty quantification, including statistical and sensitivity analysis. Efficient uncertainty quantification with PC2 can be performed through analytical post-processing of a reduced basis, filtering out the influence of all deterministic space-time variables. Various examples of PDEs with random parameters will be presented to show the efficiency and versatility of PC2 and its benefit for uncertainty quantification.

      Speaker: Lukas NOVAK (Brno University of Technology)
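The "physics only" mode mentioned above, where the expansion is fitted from the differential equation and boundary conditions without any model runs, can be sketched as a single least-squares problem. The sketch below uses a plain monomial basis instead of an orthogonal chaos basis, and the toy ODE u' = u with u(0) = 1 (all choices ours, for illustration only):

```python
import numpy as np

deg = 5
powers = np.arange(deg + 1)
x_c = np.linspace(0.0, 1.0, 20)              # virtual collocation points

# For u(x) = sum_j a_j x^j, build rows enforcing u'(x_c) - u(x_c) = 0.
V = x_c[:, None] ** powers                   # basis values -> u(x_c)
dV = np.zeros_like(V)
dV[:, 1:] = powers[1:] * x_c[:, None] ** (powers[1:] - 1)   # u'(x_c)

# Stack the ODE-residual rows with one boundary-condition row u(0) = 1
# and solve everything in the least-squares sense.
A = np.vstack([dV - V, (0.0 ** powers)[None, :]])
b = np.concatenate([np.zeros(len(x_c)), [1.0]])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

u_half = (0.5 ** powers) @ coef              # surrogate at x = 0.5
```

In PC2 proper, rows coming from actual model evaluations would be stacked into the same system, and the basis would be orthogonal with respect to the input distribution so that moments and Sobol' indices can be read off the coefficients analytically.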
    • 5:45 PM
  • Tuesday, December 5
    • 5
      Neural implicit representations for hybrid numerical methods

      In a first part, we will introduce the numerical methods based on neural implicit representations, namely PINNs and the Neural Galerkin method. We will attempt to show that these methods, although their properties differ markedly from those of the usual numerical methods for PDEs, remain close in spirit to the classical methods. After discussing the strengths and weaknesses of these new approaches, we will introduce hybrid methods combining PINNs on one side with Finite Element or Discontinuous Galerkin methods on the other. We will briefly discuss the convergence of these approaches and illustrate it numerically.

      Speaker: Emmanuel FRANCK (INRIA)
    • 6
      Deep augmented physical models: application to reinforcement learning and computer vision

      Modelling and forecasting complex physical systems with only partial knowledge of their dynamics is a major challenge across various scientific fields. Model-based (MB) approaches typically rely on ordinary or partial differential equations (ODE/PDE) and stem from a deep understanding of the underlying physical phenomena. Machine learning (ML) and deep learning are more prior-agnostic and have become state of the art for many prediction tasks; however, modeling complex physical dynamics is still beyond the scope of pure ML methods, which often cannot properly extrapolate to new conditions as MB approaches do. Combining the MB and ML paradigms is an emerging trend that develops the interplay between the two. In this talk, we will present a principled training scheme called APHYNITY [1] for augmenting incomplete physical models with machine learning, with uniqueness guarantees. We will also present an application of augmented models to model-based reinforcement learning [2], where we show performance gains compared to simplified physical models and better data efficiency compared to purely data-driven models. Finally, we will present an application to optical flow estimation [3], where we leverage the classical brightness constancy assumption.

      [1] Yuan Yin, Vincent Le Guen, Jérémie Dona, Emmanuel de Bézenac, Ibrahim Ayed, Nicolas Thome & Patrick Gallinari. Augmenting physical models with deep networks for complex dynamics forecasting. International Conference on Learning Representations (ICLR) 2021.

      [2] Zakariae El Asri, Clément Rambour, Vincent Le Guen and Nicolas Thome, "Residual Model-Based Reinforcement Learning for Physical Dynamics", NeurIPS 2022 Offline RL Workshop.

      [3] Vincent Le Guen, Nicolas Thome, Clément Rambour, "Complementing Brightness Constancy with Deep Networks for Optical Flow Prediction", European Conference on Computer Vision (ECCV) 2022.

      Speaker: Vincent LE GUEN (EDF R&D)
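The decomposition behind augmented physical models is F = F_phys + F_aug: a known-but-incomplete physical term plus a learned residual. The following toy sketch (our own construction; APHYNITY's actual scheme trains a neural augmentation jointly with the physical parameters and penalizes the augmentation's norm) recovers a missing damping term by linear regression:

```python
import numpy as np

# True dynamics: dx/dt = -1.5 x.  The physical prior only knows
# dx/dt = -x, so the augmentation must account for the extra -0.5 x.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
dxdt_observed = -1.5 * x                  # measured derivatives (noise-free)

f_phys = -x                               # incomplete physical model
residual = dxdt_observed - f_phys         # what F_aug must explain

# Fit F_aug(x) = w * x by least squares; a neural network would play
# this role for dynamics the physics cannot express in closed form.
w = np.linalg.lstsq(x[:, None], residual, rcond=None)[0][0]
```

The uniqueness guarantee discussed in the talk concerns precisely this split: without a penalty keeping F_aug minimal, infinitely many (F_phys, F_aug) pairs reproduce the same trajectories.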
    • 10:30 AM – 10:45 AM
      Coffee break
    • 7
      Tensor networks and optimal sampling in physics informed machine learning

      Many parametric PDEs have solutions that possess a high degree of regularity with respect to their parameters. Low-rank tensor formats can leverage this regularity to overcome the curse of dimensionality and achieve optimal convergence rates in a wide range of approximation spaces. A particular advantage of these formats is their highly structured nature, which enables us to control the approximation error and to derive sample complexity bounds. In this presentation, we will explore how to take advantage of these benefits to effectively learn the solutions of parametric PDEs.

      Speaker: Philipp TRUNSCHKE (Université de Nantes)
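The regularity-to-rank connection above has a familiar two-dimensional special case: a matrix of solution snapshots of a parametric problem that is smooth in the parameter has rapidly decaying singular values, so a truncated SVD (the rank-2 ancestor of the tensor formats in the talk) captures it with few terms. The snapshot family below is our own toy choice:

```python
import numpy as np

# Snapshots u(x; mu) = exp(-mu x), smooth in the parameter mu, collected
# as rows of a matrix: one row per parameter value, one column per x.
x = np.linspace(0.0, 1.0, 50)
mu = np.linspace(1.0, 2.0, 40)
U_snap = np.exp(-np.outer(mu, x))

# Truncated SVD = best rank-r approximation; smoothness in mu makes the
# singular values decay fast, so a small rank already suffices.
u_mat, s, vt = np.linalg.svd(U_snap, full_matrices=False)
rank = 5
approx = (u_mat[:, :rank] * s[:rank]) @ vt[:rank]
rel_err = np.linalg.norm(U_snap - approx) / np.linalg.norm(U_snap)
```

Higher-dimensional parameter spaces replace the SVD with tensor-train or similar networks, where the same decay of "ranks" is what defeats the curse of dimensionality.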
    • 12:00 PM