Long talk: Inference techniques for the analysis of Brownian image textures
- Frédéric Richard (Aix-Marseille University)
Long talk: Unbiased estimation of smooth functions. Applications in statistics and machine learning
- Nicolas Chopin (ENSAE, Institut Polytechnique de Paris)
Long talk: A new preconditioned stochastic gradient algorithm for estimation in latent variable models
- Maud Delattre (INRAE MaIAGE, Jouy-en-Josas)
Long talk: Gaussian processes with inequality constraints: Theory and computation
- François Bachoc (Institut de Mathématiques de Toulouse)
Long talk: Building explainable and robust neural networks by using Lipschitz constraints and Optimal Transport
- Mathieu Serrurier (IRIT, Toulouse)
Long talk: Nonparametric Bayesian mixture models for identifying clusters from longitudinal and cross-sectional data
- Anaïs Rouanet (ISPED, Université de Bordeaux)
Long talk: Polya urn models for multivariate species abundance data and neutral theory of biodiversity
- Jean Peyhardi (Institut Montpelliérain Alexander Grothendieck (IMAG))
Long talk: Conformal prediction for object detection
- Sébastien Gerchinovitz (IRT Saint Exupéry)
In this talk, I will present techniques for estimating the functional parameters of anisotropic fractional Brownian fields, and their application to the analysis of image textures. I will focus on a first approach, based on solving inverse problems, which leads to a complete estimation of the parameters. These inverse problems are formulated by fitting the empirical...
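The abstract is truncated, but the underlying task (recovering fractional-Brownian parameters from data) has a classical one-dimensional analogue that is easy to sketch. The toy code below is my illustration, not the speaker's method: it simulates an exact fractional Brownian motion by Cholesky factorization of its covariance and recovers the Hurst index H from the log-log slope of its quadratic variations. All sizes and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, H = 512, 0.7                      # grid size and true Hurst index (toy values)
t = np.arange(1, n + 1) / n

# Exact fBm covariance C(s,t) = (s^2H + t^2H - |s-t|^2H) / 2, sampled by Cholesky
C = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
           - np.abs(t[:, None] - t[None, :])**(2*H))
X = np.linalg.cholesky(C + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

# Quadratic variations: E|X(t+d) - X(t)|^2 = d^(2H), so a log-log regression
# of mean squared increments against the lag recovers H as slope / 2.
lags = np.array([1, 2, 4, 8, 16])
v = np.array([np.mean((X[l:] - X[:-l])**2) for l in lags])
H_hat = np.polyfit(np.log(lags / n), np.log(v), 1)[0] / 2
print(round(H_hat, 2))
```

The inverse-problem approach of the talk goes well beyond this, handling anisotropy and functional parameters rather than a single scalar H.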
Given a smooth function f, we develop a general approach to turn Monte Carlo samples with expectation m into an unbiased estimate of f(m). Specifically, we develop estimators that are based on randomly truncating the Taylor series expansion of f and estimating the coefficients of the truncated series. We derive their properties and propose a strategy to set their tuning parameters -- which...
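As a toy illustration of the randomized-truncation idea (not the talk's general construction), the sketch below builds a single-term estimator of f(m) = exp(m) with m = E[X]: a geometric level K selects one Taylor term around a fixed expansion point a, each power (m - a)^k is estimated without bias by a product of k independent samples, and reweighting by 1 / P(K = k) restores unbiasedness. The expansion point, truncation rate, and Gaussian toy distribution are all my choices.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, a, r = 0.3, 0.5, 0.0, 0.5   # target mean, noise, expansion point, truncation rate
reps = 200_000

# Random truncation level K with P(K = k) = (1 - r) r^k, k = 0, 1, 2, ...
K = rng.geometric(1 - r, size=reps) - 1
Kmax = K.max()

# One independent Monte Carlo sample per retained Taylor term:
# E[prod_{i<=k} (X_i - a)] = (m - a)^k, so dividing by P(K = k) makes Z
# unbiased for f(m) = exp(m) (not for E[exp(X)], which is the plug-in bias).
X = rng.normal(mu, sigma, size=(reps, max(Kmax, 1)))
cp = np.cumprod(X - a, axis=1)
prod = np.where(K == 0, 1.0, cp[np.arange(reps), np.maximum(K - 1, 0)])
fact = np.array([math.factorial(k) for k in range(Kmax + 1)], dtype=float)
Z = math.exp(a) * prod / (fact[K] * (1 - r) * r**K)

print(Z.mean())   # close to exp(0.3), about 1.35
```

The talk's estimators additionally address how to set the tuning parameters (here r and a), which this sketch simply fixes by hand.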
Latent variable models are powerful tools for modeling complex phenomena involving, in particular, partially observed data, unobserved variables, or underlying complex unknown structures. Inference is often difficult due to the latent structure of the model. To deal with parameter estimation in the presence of latent variables, well-known efficient methods exist, such as gradient-based and...
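The talk's algorithm targets latent variable models; as a minimal, hypothetical sketch of the preconditioning idea itself, the code below runs a Robbins-Monro stochastic gradient scheme on a linear-Gaussian toy model, preconditioned by a running estimate of the Fisher information (here the online average of x x^T). The model, step sizes, and dimensions are my toy choices, not the speaker's.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 10000
theta_true = np.array([1.0, -2.0, 0.5])
# Anisotropic design: the Fisher information is badly conditioned, which is
# exactly the situation where preconditioning pays off.
X = rng.standard_normal((n, d)) * np.array([1.0, 5.0, 0.2])
y = X @ theta_true + 0.1 * rng.standard_normal(n)

theta = np.zeros(d)
P = np.eye(d)                                  # running Fisher-information estimate
for t in range(n):
    x, yt = X[t], y[t]
    P += (np.outer(x, x) - P) / (t + 2)        # online average of x x^T
    g = (yt - x @ theta) * x                   # score of observation t
    theta += np.linalg.solve(P, g) / (t + 1)   # preconditioned Robbins-Monro step

print(theta)   # close to theta_true
```

With a plain 1/(t+1) step and no preconditioner, the weakly informative third coordinate would converge far more slowly; the Fisher preconditioner rescales each direction appropriately.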
In Gaussian process modeling, inequality constraints make it possible to take expert knowledge into account and thus to improve prediction and uncertainty quantification. Typical examples are when a black-box function is bounded or monotonic with respect to some of its input variables. We will show how inequality constraints impact the Gaussian process model, the computation of its posterior...
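As a naive illustration of how an inequality constraint reshapes a Gaussian process posterior (and not the computational methods of the talk), the sketch below samples unconstrained posterior paths on a grid and keeps only those satisfying a boundedness constraint 0 <= f <= 1; the kept paths define a constrained posterior mean. Kernel, data, and grid are invented toy values.

```python
import numpy as np

rng = np.random.default_rng(3)

def k(a, b, ls=0.3):
    """Squared-exponential kernel with lengthscale ls."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

xo = np.array([0.1, 0.3, 0.5, 0.7, 0.9])    # design points
yo = np.array([0.2, 0.6, 0.9, 0.95, 0.8])   # observations, known a priori in [0, 1]
g = np.linspace(0, 1, 40)                   # prediction grid
s2 = 1e-4                                   # observation noise variance

Koo = k(xo, xo) + s2 * np.eye(len(xo))
Kgo = k(g, xo)
A = np.linalg.solve(Koo, Kgo.T).T
mu = A @ yo                                 # unconstrained posterior mean
cov = k(g, g) - A @ Kgo.T                   # unconstrained posterior covariance

# Rejection step: sample unconstrained paths, keep those obeying the bounds.
L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(g)))
samples = mu + rng.standard_normal((5000, len(g))) @ L.T
mask = np.all((samples >= 0.0) & (samples <= 1.0), axis=1)
cmu = samples[mask].mean(axis=0)            # constrained posterior mean
```

Rejection sampling scales poorly with grid size and constraint tightness, which is precisely why the dedicated theory and computation discussed in the talk are needed.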
The lack of robustness and explainability in neural networks is directly linked to the arbitrarily high Lipschitz constant of deep models. Although constraining the Lipschitz constant has been shown to improve these properties, it can make it challenging to learn with classical loss functions. In this presentation, we explain how to control this constant, and demonstrate that training such...
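One standard way to control the Lipschitz constant (common in this literature, though the talk may rely on other constructions) is to normalize each weight matrix by its spectral norm, so that every layer, and hence the whole network with 1-Lipschitz activations, is 1-Lipschitz. A minimal sketch with invented layer sizes:

```python
import numpy as np

rng = np.random.default_rng(4)

def spectrally_normalize(W):
    """Rescale W so its largest singular value is 1 (the layer becomes 1-Lipschitz).
    In practice this is approximated cheaply with power iteration; the exact
    spectral norm is used here for clarity."""
    return W / np.linalg.norm(W, 2)

W1 = spectrally_normalize(rng.standard_normal((64, 10)))
W2 = spectrally_normalize(rng.standard_normal((1, 64)))

def f(x):
    # ReLU is 1-Lipschitz, so the composition is 1-Lipschitz end to end.
    return W2 @ np.maximum(W1 @ x, 0.0)

# Empirical check of the guarantee: |f(x) - f(y)| <= |x - y|.
x, y = rng.standard_normal((2, 10))
print(np.linalg.norm(f(x) - f(y)) <= np.linalg.norm(x - y))
```

Constraining every layer this way is exactly what makes classical losses harder to train with, which motivates the loss functions discussed in the talk.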
The identification of sets of co-regulated genes that share a common function is a key question of modern genomics. Bayesian profile regression is a semi-supervised mixture modelling approach that makes use of a response to guide inference toward relevant clusterings. Previous applications of profile regression have considered univariate continuous, categorical, and count outcomes. In this...
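Profile regression is Bayesian and nonparametric; as a bare-bones, hypothetical illustration of the underlying idea only (letting a response guide which clustering is found), the sketch below fits a two-component Gaussian mixture jointly over a covariate and an outcome with EM, so that groups that overlap in the covariate are separated by the response. Data and model are toy stand-ins, not the talk's model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: two groups that overlap in the covariate x but are well
# separated by the outcome y, so the response drives the clustering.
n = 200
lab = rng.integers(0, 2, n)
x = rng.normal(0.5 * lab, 1.0)
y = rng.normal(3.0 * lab, 0.3)
Z = np.column_stack([x, y])

# EM for a 2-component Gaussian mixture with diagonal covariances,
# initialized by a median split on the outcome.
mu = np.array([Z[y < np.median(y)].mean(0), Z[y >= np.median(y)].mean(0)])
var = np.ones((2, 2))
w = np.array([0.5, 0.5])
for _ in range(50):
    logp = (np.log(w)[None, :]
            - 0.5 * ((Z[:, None, :] - mu[None])**2 / var[None]
                     + np.log(2 * np.pi * var[None])).sum(-1))
    r = np.exp(logp - logp.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)              # responsibilities (E-step)
    nk = r.sum(0)                             # M-step updates follow
    w = nk / n
    mu = (r.T @ Z) / nk[:, None]
    var = (r.T @ Z**2) / nk[:, None] - mu**2 + 1e-6

pred = r.argmax(1)
acc = max((pred == lab).mean(), (pred != lab).mean())
print(acc)
```

The profile regression of the talk replaces this fixed two-component mixture with a nonparametric Bayesian prior over partitions and handles longitudinal outcomes.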
This talk focuses on models for multivariate count data, with emphasis on species abundance data. Two approaches emerge in this framework: the Poisson log-normal (PLN) and the Tree Dirichlet multinomial (TDM) models. The first uses a latent Gaussian vector to model dependencies between species, whereas the second models dependencies directly on the observed abundances. The TDM model makes the...
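A quick way to see the connection between the urn and the Dirichlet-multinomial families is that Polya urn sampling and the Dirichlet-multinomial distribution produce the same counts in law. The toy check below (my illustration; urn parameters are arbitrary) compares mean proportions from the two samplers:

```python
import numpy as np

rng = np.random.default_rng(6)
alpha = np.array([2.0, 1.0, 0.5])     # initial urn composition (toy values)
n, reps = 50, 2000

def polya_counts(alpha, n, rng):
    """Draw n balls from a Polya urn: each drawn colour is reinforced by one ball."""
    counts = np.zeros(len(alpha))
    for _ in range(n):
        p = (alpha + counts) / (alpha.sum() + counts.sum())
        counts[rng.choice(len(alpha), p=p)] += 1
    return counts

# Equivalent Dirichlet-multinomial sampling: draw a latent Dirichlet(alpha)
# proportion vector, then multinomial counts given those proportions.
urn = np.mean([polya_counts(alpha, n, rng) for _ in range(reps)], 0) / n
dm = np.mean([rng.multinomial(n, rng.dirichlet(alpha)) for _ in range(reps)], 0) / n
print(urn, dm)   # both close to alpha / alpha.sum()
```

The reinforcement in the urn induces the over-dispersion and positive dependence that make these models natural for abundance data.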
We address the problem of constructing reliable uncertainty estimates for object detection. We build upon classical tools from Conformal Prediction, which offer (marginal) risk guarantees when the predictive uncertainty can be reduced to a one-dimensional parameter. In this talk, we will first recall standard algorithms and theoretical guarantees in conformal prediction and beyond. We will...
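Before the object-detection extensions, the standard split-conformal recipe that the talk builds on can be sketched in a few lines for scalar regression (the toy data and point predictor are mine):

```python
import numpy as np

rng = np.random.default_rng(7)

def make(n):
    x = rng.uniform(-3, 3, n)
    return x, np.sin(x) + 0.2 * rng.standard_normal(n)

# A deliberately imperfect point predictor stands in for any black-box model.
predict = lambda x: 0.9 * np.sin(x)

x_cal, y_cal = make(500)
x_test, y_test = make(2000)
alpha = 0.1

# Split conformal: the empirical quantile of calibration residuals (with the
# finite-sample +1 correction) yields intervals with marginal coverage >= 90%.
scores = np.abs(y_cal - predict(x_cal))
n_cal = len(scores)
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal, method="higher")

covered = np.abs(y_test - predict(x_test)) <= q
print(covered.mean())   # close to 0.90
```

The difficulty addressed in the talk is that an object detector's uncertainty (boxes, classes, multiple objects) does not reduce to a single one-dimensional score like the residual used here.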
In numerous fields, understanding the characteristics of a large-scale marked point process (called the ground process) is crucial, particularly when the marks are latent and can only be inferred through sampling. To achieve this, we need to gather an efficient sample, which involves thinning the ground process. To address this challenge, we introduce a sequential thinning approach tailored...
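The sequential thinning of the talk is tailored to latent marks; the classical independent-thinning fact it builds on, that keeping each point of a homogeneous Poisson process with probability p(t) yields an inhomogeneous Poisson process of rate lambda * p(t), can be sketched as follows (rate, horizon, and retention function are toy choices):

```python
import numpy as np

rng = np.random.default_rng(8)

lam, T = 200.0, 1.0
n = rng.poisson(lam * T)
t = np.sort(rng.uniform(0, T, n))   # homogeneous Poisson process on [0, T]

# Independent thinning: keep the point at time t with probability p(t).
# The retained points form an inhomogeneous Poisson process of rate lam * p(t).
p = lambda s: s / T
keep = rng.uniform(size=n) < p(t)
kept = t[keep]

print(len(kept))   # mean is lam * integral of p over [0, T] = 100
```

Sequential approaches refine this by choosing the retention probabilities adaptively as the process is observed, rather than fixing p(t) in advance.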