With the development of new experimental technologies, biologists face an avalanche of data that must be analyzed computationally before scientific advances and discoveries can emerge. Given the complexity of analysis pipelines, the large number of computational tools, and the enormous amount of data to manage, there is compelling evidence that many (if not most) scientific...
Determinantal point processes are a very powerful tool in probability theory, especially for integrable systems, because they yield very concise closed-form formulas and greatly simplify many computations. This is one reason why they have become very attractive in machine learning. Another reason is that, when parametrized by a symmetric matrix, they make it possible to model repulsive interactions...
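For readers unfamiliar with the standard definition (it is not restated in the abstract), a determinantal point process $Y$ on a finite ground set, parametrized by a symmetric marginal kernel $K$ with eigenvalues in $[0,1]$, satisfies

$$\Pr(A \subseteq Y) = \det(K_A),$$

where $K_A$ is the submatrix of $K$ indexed by $A$. In particular, for two items $i \neq j$,

$$\Pr(\{i,j\} \subseteq Y) = K_{ii}K_{jj} - K_{ij}^2 \;\le\; \Pr(i \in Y)\,\Pr(j \in Y),$$

which makes the repulsive behavior explicit: the more similar items $i$ and $j$ are (the larger $|K_{ij}|$), the less likely they are to co-occur.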
The past decade has witnessed tremendous interest in sparse representations in signal and image processing. Inverse problems involving sparsity arise in many application fields such as nondestructive evaluation of materials, electroencephalography for brain activity analysis, biological imaging, or fluid mechanics, to name a few. In this lecture, I will introduce well-known...
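As one concrete instance of the kind of sparse inverse problem alluded to (the abstract does not specify which formulations the lecture covers), a standard choice is the $\ell_1$-penalized least-squares problem

$$\min_{x \in \mathbb{R}^n} \; \frac{1}{2}\,\|y - Ax\|_2^2 + \lambda \|x\|_1,$$

where $y$ holds the observations, $A$ models the acquisition process, and the penalty weight $\lambda > 0$ trades data fidelity against the sparsity of the recovered signal $x$.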
In the setting of supervised learning with kernel methods, the least-squares (prediction) error is classically the performance measure of interest. However, if the true target function is assumed to be an element of a Hilbert space, one can also be interested in the norm of the estimator's error in that space (the reconstruction error); this is of particular relevance in inverse problems where...
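In standard notation (not fixed by the truncated abstract), with target function $f^\star$ in a reproducing kernel Hilbert space $\mathcal{H}$ and estimator $\hat f$, the two error measures being contrasted are

$$\mathbb{E}_X\big[(\hat f(X) - f^\star(X))^2\big] = \|\hat f - f^\star\|_{L^2(\rho_X)}^2 \quad \text{(prediction error)}, \qquad \|\hat f - f^\star\|_{\mathcal{H}}^2 \quad \text{(reconstruction error)},$$

where $\rho_X$ is the input distribution. The $\mathcal{H}$-norm is the stronger of the two: for a bounded kernel it dominates the $L^2(\rho_X)$-norm and, via the reproducing property, also controls pointwise errors.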
This work studies an explicit embedding of the set of probability measures into a Hilbert space, defined using optimal transport maps from a reference probability density. This embedding linearizes the $2$-Wasserstein space to some extent and enables the direct use of generic supervised and unsupervised learning algorithms on measure data. Our main result is that the embedding is (bi-)Hölder...
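Concretely (in notation consistent with, but not taken verbatim from, the truncated abstract), given a reference density $\rho$ on $\mathbb{R}^d$, such an embedding sends a measure $\mu$ to the optimal transport map $T_\mu$, viewed as an element of the Hilbert space $L^2(\rho;\mathbb{R}^d)$:

$$\mu \;\longmapsto\; T_\mu, \qquad (T_\mu)_\#\rho = \mu, \qquad W_2(\mu,\nu) \;\le\; \|T_\mu - T_\nu\|_{L^2(\rho)},$$

where the inequality holds because $(T_\mu, T_\nu)_\#\rho$ is a coupling of $\mu$ and $\nu$; the (bi-)Hölder property then quantifies how far the Hilbert-space distance can exceed $W_2$, i.e., how faithfully the embedding linearizes the Wasserstein geometry.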