Prof. Sarah Cohen-Boulakia (LRI, Paris-Sud), 27/01/2020 10:00
With the development of new experimental technologies, biologists are faced with an avalanche of data that must be analyzed computationally for scientific advances and discoveries to emerge. Given the complexity of analysis pipelines, the large number of computational tools, and the enormous amount of data to manage, there is compelling evidence that many (if not most) scientific...
Prof. Victor-Emmanuel Brunel (ENSEA/CREST), 27/01/2020 11:20
Determinantal point processes are a very powerful tool in probability theory, especially for integrable systems, because they yield very concise closed-form formulas and greatly simplify many computations. This is one reason why they have become very attractive in machine learning. Another reason is that, when parametrized by a symmetric matrix, they make it possible to model repulsive interactions...
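As a concrete illustration of the repulsion mentioned in the abstract, here is a minimal numpy sketch of an L-ensemble DPP on a three-item ground set (the kernel matrix values are illustrative, not from the talk): a subset S is drawn with probability det(L_S) / det(L + I), so subsets of mutually similar items are penalized.

```python
import numpy as np

# Symmetric PSD kernel L: entry L[i, j] encodes similarity between items i and j.
# Items 0 and 1 are nearly identical; item 2 is unrelated to both.
L = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

def dpp_prob(L, subset):
    """P(X = subset) = det(L_subset) / det(L + I) for an L-ensemble DPP."""
    sub = L[np.ix_(subset, subset)]
    return np.linalg.det(sub) / np.linalg.det(L + np.eye(len(L)))

# Repulsion: the similar pair {0, 1} is less likely than the dissimilar pair {0, 2},
# because det(L_{0,1}) = 1 - 0.9**2 = 0.19 while det(L_{0,2}) = 1.
p_similar = dpp_prob(L, [0, 1])
p_dissimilar = dpp_prob(L, [0, 2])
```

The normalization by det(L + I) is exact: summing det(L_S) over all subsets S of the ground set recovers det(L + I), so the probabilities sum to one.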
Prof. Steve Oudot (INRIA), 27/01/2020 12:10
This talk will review the efforts of the Topological Data Analysis (TDA) community to tackle the preimage problem. After a general introduction to TDA, the main focus will be on recent attempts to invert the TDA operator. While this line of work is still in its infancy, the hope in the long run is to use such inverses for feature interpretation. The mathematical tools involved in the...
Prof. Charles Soussen (CentraleSupélec), 27/01/2020 14:20
The past decade has seen tremendous interest in sparse representations in signal and image processing. Inverse problems involving sparsity arise in many application fields, such as nondestructive evaluation of materials, electroencephalography for brain activity analysis, biological imaging, and fluid mechanics, to name a few. In this lecture, I will introduce well-known...
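One standard greedy method for such sparse inverse problems is Orthogonal Matching Pursuit; whether it is among the algorithms covered in the lecture is an assumption here, and the problem sizes and sparsity level below are purely illustrative. The sketch recovers a 3-sparse vector from 20 linear measurements of a 50-dimensional signal.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)        # dictionary with unit-norm atoms (columns)
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [1.5, -2.0, 1.0]  # 3-sparse ground truth
y = A @ x_true                          # noiseless observations

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select the atom most correlated
    with the current residual, then re-fit all selected atoms by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, y, 3)
```

With a well-conditioned random dictionary and few enough nonzeros, the greedy selection identifies the true support and the least-squares re-fit makes the residual vanish.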
Prof. Gilles Blanchard (IHES), 27/01/2020 15:30
In the setting of supervised learning with kernel methods, the least-squares (prediction) error is classically the performance measure of interest. However, if the true target function is assumed to be an element of a Hilbert space, one can also be interested in the norm, in that space, of an estimator's error (the reconstruction error); this is of particular relevance in inverse problems, where...
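The distinction between the two error measures can be sketched in the simplest case of a linear kernel, where the RKHS is R^d and the reconstruction error reduces to the Euclidean distance between weight vectors (the data, noise level, and regularization parameter below are illustrative assumptions, not from the talk).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -0.5, 0.0, 2.0, 0.3])   # target function f(x) = <w_true, x>
y = X @ w_true + 0.1 * rng.standard_normal(n)   # noisy labels

# Ridge (kernel ridge regression with a linear kernel):
lam = 0.1
w_hat = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

# Least-squares (prediction) error: average squared discrepancy of predictions.
prediction_error = np.mean((X @ (w_hat - w_true)) ** 2)

# Reconstruction error: norm of the error in the RKHS (here, Euclidean norm).
reconstruction_error = np.linalg.norm(w_hat - w_true)
```

The two quantities coincide only when the design is perfectly isotropic; in general the prediction error weights the error vector by the data covariance, while the reconstruction error does not, which is why controlling the latter is the harder, inverse-problem-flavored question.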
Prof. Quentin Merigot (Paris-Sud), 27/01/2020 16:20
This work studies an explicit embedding of the set of probability measures into a Hilbert space, defined using optimal transport maps from a reference probability density. This embedding linearizes to some extent the $2$-Wasserstein space, and enables the direct use of generic supervised and unsupervised learning algorithms on measure data. Our main result is that the embedding is (bi-)Hölder...
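In one dimension the optimal transport map from a reference density has a closed form (the target quantile function composed with the reference CDF), so the embedding can be sketched directly. Note the caveats: taking Uniform[0,1] as the reference and Gaussian samples are illustrative assumptions, and in 1-D this embedding is an exact isometry for the 2-Wasserstein distance, whereas the (bi-)Hölder result of the talk concerns the general, genuinely harder setting.

```python
import numpy as np

def embed(samples, n_quantiles=1000):
    """Embed an empirical measure mu as its optimal transport map from
    Uniform[0,1]: T = F_mu^{-1} o F_rho, i.e. the quantile function of mu
    evaluated on a uniform grid (F_rho is the identity on [0,1])."""
    u = (np.arange(n_quantiles) + 0.5) / n_quantiles
    return np.quantile(samples, u)

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, 5000)   # samples from N(0, 1)
nu = rng.normal(2.0, 1.0, 5000)   # samples from N(2, 1)

# Linearized W2 distance = L2(rho) distance between the two embeddings.
# For these two Gaussians the true W2 distance is exactly 2.
w2 = np.sqrt(np.mean((embed(mu) - embed(nu)) ** 2))
```

Once measures are represented by these fixed-length vectors, generic Hilbert-space algorithms (k-means, PCA, kernel regression) apply verbatim to measure data, which is the practical point of the embedding.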