The aim of this 3-day workshop is to bring together experts and young researchers in Calculus of Variations with applications in different areas of physics, mechanics and image processing.
This conference features 12 invited plenary lectures and 10 contributing talks.
Registration is open from November 1st, 2021 until June 1st, 2022.
Funding is no longer available for new participants.
Let
The curvature functionals (such as the Willmore functional) are usually defined under
In this talk I will consider the following problem of isoperimetric type:
Given a set E in
We can show that the answer is positive if the dimension
(However we know that the answer is positive even for
This is a work in progress with Alan Chang (Princeton University).
In the limit of vanishing but moderate external magnetic field, we derived a few years ago together with S. Conti, F. Otto and S. Serfaty a branched transport problem from the full Ginzburg–Landau model. In this regime, the irrigated measure is the Lebesgue measure and, at least in a simplified 2d setting, it is possible to prove that the minimizer is a self-similar branching tree. In the regime of even smaller magnetic fields, a similar limit problem is expected but this time the irrigation of the Lebesgue measure is not imposed as a hard constraint but rather as a penalization. While an explicit computation of the minimizers seems here out of reach, I will present some ongoing project with G. De Philippis and B. Ruffini relating local energy bounds to dimensional estimates for the irrigated measure.
We define a new rearrangement, called rearrangement by tamping, for non-negative measurable functions defined on
Contrary to the Schwarz rearrangement, the tamping also preserves the homogeneous Dirichlet boundary condition of a function. This presentation aims at presenting the construction of the rearrangement by tamping (with an algorithmic approach) and some recent developments around this idea.
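The tamping construction itself follows the algorithmic details of the talk, which are not reproduced here; for contrast, a minimal discrete sketch of the classical Schwarz (symmetric-decreasing) rearrangement that the abstract compares against can be written as follows (the sampling-based discretization is an illustrative assumption):

```python
import numpy as np

def schwarz_rearrange(f):
    """Discrete symmetric-decreasing (Schwarz) rearrangement of samples."""
    s = np.sort(np.asarray(f))[::-1]   # values in decreasing order
    out = np.empty_like(s)
    mid = (len(s) - 1) // 2
    out[mid::-1] = s[0::2]             # place values outward from the centre
    out[mid + 1:] = s[1::2]
    return out
```

The output has the same values as the input but is unimodal about the centre; note that, unlike tamping, this rearrangement does not in general preserve the boundary values of the original function.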
We consider an incompressible Stokes fluid contained in a box
Together with Felix Otto, Richard Schubert, and other collaborators, we have developed two different energy-based methods to capture convergence rates and metastability of gradient flows. We will present the methods and their application to the two model problems that drove their development: the 1-d Cahn–Hilliard equation and the Mullins–Sekerka evolution. Both methods can be viewed as quantifying “how nonconvex” or “how nonlinear” a problem can be while still retaining the optimal convergence rates, i.e., the rates for the convex or linear problem. Our focus is on fairly large (ill-prepared) initial data.
Inverse problems are about the reconstruction of an unknown physical quantity from indirect measurements. Most inverse problems of interest are ill-posed and require appropriate mathematical treatment for recovering meaningful solutions. Variational regularization is one of the main mechanisms to turn inverse problems into well-posed ones by adding prior information about the unknown quantity to the problem, often in the form of assumed regularity of solutions. Classically, such regularization approaches are handcrafted. Examples include Tikhonov regularization, the total variation and several sparsity-promoting regularizers such as the L1 norm of Wavelet coefficients of the solution. While such handcrafted approaches deliver mathematically and computationally robust solutions to inverse problems, providing a universal approach to their solution, they are also limited by our ability to model solution properties and to realise these regularization approaches computationally. Recently, a new paradigm has been introduced to the regularization of inverse problems, which derives regularization approaches for inverse problems in a data driven way. Here, regularization is not mathematically modelled in the classical sense, but modelled by highly over-parametrised models, typically deep neural networks, that are adapted to the inverse problems at hand by appropriately selected (and usually plenty of) training data. In this talk, I will review some machine learning based regularization techniques, present some work on unsupervised and deeply learned convex regularisers and their application to image reconstruction from tomographic and blurred measurements, and finish by discussing some open mathematical problems.
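As a concrete illustration of the handcrafted end of this spectrum (not the learned regularizers discussed in the talk), a minimal Tikhonov-regularized least-squares solve might look as follows; the synthetic forward operator and data are illustrative assumptions:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned forward operator: regularization stabilizes the inversion.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ np.diag(1.0 / np.arange(1, 11) ** 2)
x_true = rng.standard_normal(10)
b = A @ x_true + 1e-3 * rng.standard_normal(20)
x_reg = tikhonov(A, b, lam=1e-4)
```

The regularization parameter `lam` trades data fidelity against the assumed regularity of the solution, exactly the prior-information mechanism described above.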
Discrete-to-continuum convergence results for graph-based learning have seen increased interest in recent years. In particular, the connections between discrete machine learning and continuum partial differential equations or variational problems lead to new insights and better algorithms.
This talk considers Lipschitz learning, which is the limit of
We consider an imaging inverse problem which consists in recovering a “simple” function from a set of noisy linear measurements. Our approach is variational: we produce an approximation of the unknown function by solving a least squares problem with a total variation regularization term. Our aim is to prove that this approximation converges to the unknown function in a low-noise regime. Specifically, we are interested in convergence of a “geometric” type: convergence of the level sets, of the number of non-trivial level sets, etc. This result is closely related to stability questions for solutions of the prescribed curvature problem. This is a joint work with Vincent Duval and Yohann De Castro.
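The total variation regularization mentioned above can be illustrated, in a much simplified 1-d denoising form, by gradient descent on a smoothed TV term; the smoothing parameter, step-size rule and test signal are illustrative assumptions, not the talk's method:

```python
import numpy as np

def tv_denoise_1d(f, lam, eps=1e-4, iters=3000):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum_i sqrt((u_{i+1}-u_i)^2 + eps)."""
    u = f.astype(float).copy()
    step = 1.0 / (1.0 + 4.0 * lam / np.sqrt(eps))   # 1/L for the smoothed objective
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + eps)             # derivative of the smoothed |du|
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        u -= step * ((u - f) + lam * grad_tv)
    return u

# Noisy piecewise-constant signal: TV denoising flattens the oscillations
# while (approximately) preserving the jumps, i.e. the level-set geometry.
rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.0], 30)
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = tv_denoise_1d(noisy, lam=0.2)
```

The preference of TV for piecewise-constant reconstructions is precisely what makes the level-set (“geometric”) notion of convergence natural for this regularizer.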
The question of producing a foliation of the
We consider the Ginzburg–Landau energy
We also discuss the problem of vortex sheet
The aim of this talk is to present results on the asymptotic analysis of a fractional version of the vectorial Allen–Cahn equation with a multiple-well potential in arbitrary dimension. In contrast to the usual Allen–Cahn equation, the Laplace operator is replaced by the fractional Laplacian, as defined in Fourier space. Our results concern the singular limit
Let
where
The aim of the talk is to illustrate the following
We consider the behaviour as
Part of the work is done in collaboration with Xavier Lamy.
Inspired by a recent result of Lauteri and Luckhaus, we derive, via Gamma convergence, a surface tension model for polycrystals in dimension two. The starting point is a semi-discrete model accounting for the possibility of having crystal defects. The presence of defects is modelled by incompatible strain fields with quantised curl. In the limit as the lattice spacing tends to zero we obtain an energy for grain boundaries that depends on the relative angle of the orientations of the two neighbouring grains. The energy density is defined through an asymptotic cell problem formula. By means of the bounds obtained by Lauteri and Luckhaus we also show that the energy density exhibits a logarithmic behaviour for small angle grain boundaries in agreement with the classical Read and Shockley formula.
The talk is based on a paper in preparation in collaboration with Emanuele Spadaro.
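For context, the classical Read–Shockley small-angle law referred to above is usually written as (this standard form is recalled here for background; $\gamma_m$ and $\theta_m$ denote the material-dependent maximum boundary energy and the angle at which it is attained):

```latex
\gamma(\theta) \;=\; \gamma_m \,\frac{\theta}{\theta_m}
\left(1 - \ln\frac{\theta}{\theta_m}\right),
\qquad 0 < \theta \le \theta_m ,
```

which exhibits the logarithmic behaviour $\gamma(\theta) \sim \theta\,(A - \ln\theta)$ for small misorientation angles $\theta$, matching the asymptotics obtained from the cell problem formula.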
This joint work with Jean-François Babadjian is devoted to showing a discrete adaptive finite element approximation result for the isotropic two-dimensional Griffith energy arising in fracture mechanics. The problem is addressed in the geometric measure theoretic framework of generalized special functions of bounded deformation, which corresponds to the natural energy space for this functional. It is proved to be approximated in the sense of
Motivated by the crystallization issue, we focus on the minimization of Heitmann–Radin potential energies for configurations of
Iteratively reweighted least squares (IRLS) is a popular approach to solving sparsity-enforcing regression problems in machine learning. State-of-the-art approaches are more efficient but typically rely on specific coordinate pruning schemes. In this work, we show how a surprisingly simple reparametrization of IRLS, coupled with a bilevel resolution (instead of an alternating scheme), is able to achieve top performance on a wide range of sparsity regularizations (such as the Lasso, group Lasso and trace norm), regularization strengths (including hard constraints), and design matrices (ranging from correlated designs to differential operators). Similarly to IRLS, our method only involves linear system resolutions, but in sharp contrast, it corresponds to the minimization of a smooth function. Despite being non-convex, we show that there are no spurious minima and that saddle points are “ridable”, so that there always exists a descent direction. We thus advocate the use of a BFGS quasi-Newton solver, which makes our approach simple, robust and efficient. At the end of the talk, I will discuss the associated gradient flows as well as the connection with Hessian geometry and mirror descent. This is a joint work with Clarice Poon (Bath Univ.). The corresponding article is available at https://arxiv.org/abs/2106.01429, and a Python notebook introducing the method is available at https://nbviewer.org/github/gpeyre/numerical-tours/blob/master/python/optim_7_noncvx_pro.ipynb
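As background (a textbook sketch, not the reparametrized bilevel method of the talk), classic IRLS for the Lasso replaces the non-smooth penalty by a weighted quadratic surrogate and alternates between a linear solve and a weight update; the smoothing parameter `eps` and the least-squares initialization are illustrative assumptions:

```python
import numpy as np

def irls_lasso(A, b, lam, eps=1e-6, iters=100):
    """Classic IRLS for min_x 0.5*||A x - b||^2 + lam*||x||_1.

    Each step minimizes the surrogate
    0.5*||A x - b||^2 + (lam/2) * sum_i x_i^2 / w_i  with  w_i = |x_i| + eps,
    which amounts to one linear system resolution.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # least-squares initialization
    for _ in range(iters):
        w = np.abs(x) + eps
        x = np.linalg.solve(A.T @ A + lam * np.diag(1.0 / w), A.T @ b)
    return x

# Sparse recovery example: 3-sparse signal, noiseless measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[3, 7, 12]] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = irls_lasso(A, b, lam=1e-3)
```

Note that each iteration is exactly a linear system resolution, which is the feature the abstract's smooth reparametrization retains while avoiding the alternating scheme.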
In this talk, we discuss a data-driven approach to viscous fluid mechanics. Typically, in order to describe the behaviour of fluids, two different kinds of modelling assumptions are used. On the one hand, there are first principles like the balance of forces or the incompressibility condition. On the other hand there are material specific constitutive laws that describe the relation between the strain and the viscous stress of the fluid. Combining both, one obtains the partial differential equations of fluid mechanics like the Stokes or Navier–Stokes equations. The constitutive laws are obtained by fitting a law from a certain class (for example linear, power law, etc.) to experimental data. This leads to modelling errors.
Instead of using a constitutive relation, we introduce a data-driven formulation that has previously been examined in the context of solid mechanics and directly draws on material data. This leads to a variational solution concept that incorporates differential constraints coming from first principles and produces fields that are optimal in terms of closeness to the data. In order to derive this formulation, we recast the differential constraints of fluid mechanics in the language of constant-rank differential operators. We show a
Furthermore, we will see that the data-driven solutions are consistent with PDE solutions if the data are given by a constitutive law and discuss advantages of this new solution concept.
I will recall the classical theory of convex duality and explain how this can be used to obtain regularity statements in the study of minimisers of the problem
Entropic optimal transport (EOT) has received a lot of attention in recent years because it admits efficient numerical solvers. In this talk, I will address the rate of convergence of the entropic value to the optimal transport cost as the noise parameter vanishes. This is a joint work with Paul Pegon and Luca Tamanini.
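For context (a standard textbook sketch, not the talk's proof technique), the discrete EOT problem is typically solved by Sinkhorn iterations, whose value approaches the optimal transport cost as the regularization parameter tends to zero; the discrete measures and cost below are illustrative assumptions:

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, iters=1000):
    """Sinkhorn iterations for entropy-regularized OT: min_P <P, C> - eps * H(P)."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(iters):               # alternating marginal projections
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan P = diag(u) K diag(v)

# Two uniform measures on a grid with quadratic cost.
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2
mu = np.full(5, 0.2)
nu = np.full(5, 0.2)
P = sinkhorn(mu, nu, C, eps=0.05)
cost = np.sum(P * C)                     # entropic transport cost <P, C>
```

Running this for a decreasing sequence of `eps` values makes the convergence of `cost` to the unregularized optimal transport cost, whose rate is the subject of the talk, visible numerically.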