BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CERN//INDICO//EN
BEGIN:VEVENT
SUMMARY:Maximum Entropy Distributions for Image Synthesis under Statistica
l Constraints
DTSTART;VALUE=DATE-TIME:20210205T130000Z
DTEND;VALUE=DATE-TIME:20210205T134000Z
DTSTAMP;VALUE=DATE-TIME:20211204T013913Z
UID:indico-contribution-6351-4938@indico.math.cnrs.fr
DESCRIPTION:Speakers: Agnès Desolneux (Centre Borelli)\nThe question of t
exture synthesis in image processing is a very challenging problem that ca
n be stated as follows: given an exemplar image\, sample a new image that
has the same statistical features (empirical mean\, empirical covariance\
, filter responses\, neural network responses\, etc.). Exponential models
then naturally arise as distributions satisfying these constraints in expe
ctation while being of maximum entropy. Now the parameters of these expone
ntial models need to be estimated and samples have to be drawn. I will ex
plain how these can be done simultaneously through the SOUL (Stochastic Op
timization with Unadjusted Langevin) algorithm. This is based on a joint w
ork with Valentin de Bortoli\, Alain Durmus\, Bruno Galerne and Arthur Lec
laire.\n\nhttps://indico.math.cnrs.fr/event/6351/contributions/4938/
LOCATION:Le Bois-Marie
URL:https://indico.math.cnrs.fr/event/6351/contributions/4938/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Deep Neural Network for Audio and Music Transformations
DTSTART;VALUE=DATE-TIME:20210205T143000Z
DTEND;VALUE=DATE-TIME:20210205T151000Z
DTSTAMP;VALUE=DATE-TIME:20211204T013913Z
UID:indico-contribution-6351-4941@indico.math.cnrs.fr
DESCRIPTION:Speakers: Gaël Richard (Télécom Paris)\nWe will first discu
ss how deep learning techniques can be used for audio signals. To that aim
\, we will recall some of the important characteristics of an audio signal
and review some of the main deep learning architectures and concepts used
in audio signal analysis. We will then illustrate some of these concepts
in more detail with two applications\, namely informed singing voice sour
ce separation and music style transfer.\n\nhttps://indico.math.cnrs.fr/eve
nt/6351/contributions/4941/
LOCATION:Le Bois-Marie
URL:https://indico.math.cnrs.fr/event/6351/contributions/4941/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Analysis of Gradient Descent on Wide Two-Layer Neural Networks
DTSTART;VALUE=DATE-TIME:20210205T102000Z
DTEND;VALUE=DATE-TIME:20210205T110000Z
DTSTAMP;VALUE=DATE-TIME:20211204T013913Z
UID:indico-contribution-6351-4937@indico.math.cnrs.fr
DESCRIPTION:Speakers: Lenaïc Chizat (LMO)\nArtificial neural networks are
a class of "prediction" functions parameterized by a large number of para
meters -- called weights -- that are used in various machine learning task
s (classification\, regression\, etc.). Given a learning task\, the weights
are adjusted via a gradient-based algorithm so that the corresponding pre
dictor achieves a good performance on a given training set. In this talk\,
we propose an analysis of gradient descent on wide two-layer ReLU neural
networks for supervised machine learning tasks\, that leads to sharp chara
cterizations of the learned predictor. The main idea is to study the dynam
ics when the width of the hidden layer goes to infinity\, which is a Wasse
rstein gradient flow. While this dynamics evolves on a non-convex landscap
e\, we show that its limit is a global minimizer if initialized properly.
We also study the "implicit bias" of this algorithm when the objective is
the unregularized logistic loss: among the many global minimizers\, we sho
w that it selects a specific one which is a max-margin classifier in a cer
tain functional space. We finally discuss what these results tell us about
the generalization performance and the adaptivity to low dimensional stru
ctures of neural networks. This is based on joint work with Francis Bach.\
n\nhttps://indico.math.cnrs.fr/event/6351/contributions/4937/
LOCATION:Le Bois-Marie
URL:https://indico.math.cnrs.fr/event/6351/contributions/4937/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Supervised Learning with Missing Values
DTSTART;VALUE=DATE-TIME:20210205T093000Z
DTEND;VALUE=DATE-TIME:20210205T101000Z
DTSTAMP;VALUE=DATE-TIME:20211204T013913Z
UID:indico-contribution-6351-4936@indico.math.cnrs.fr
DESCRIPTION:Speakers: Gaël Varoquaux (INRIA Parietal)\nSome data come wi
th missing values. For instance\, a survey participant may skip some
questions. There is an abundant statistical literature on this topic\, es
tablishing for instance how to fit models without biases due to the missing
ness\, and imputation strategies to provide practical solutions to the ana
lyst. In machine learning\, to build models that minimize a prediction ris
k\, most work defaults to these practices. As we will see\, these different
settings lead to different theoretical and practical solutions.\n\nI will
outline some conditions under which machine-learning models yield the bes
t-possible predictions in the presence of missing values. A striking resul
t is that naive imputation strategies can be optimal\, as the supervised-l
earning model does the hard work [1]. A challenge to fitting a machine-lea
rning model is that there is a combinatorial explosion of possible missing
-values patterns such that even when the output is a linear function of th
e fully-observed data\, the optimal predictor is complex [2]. I will show
how the same dedicated neural architecture can approximate well the optima
l predictor for multiple missing-values mechanisms\, including difficult m
issing-not-at-random settings [3].\n\n[1] Josse\, J.\, Prost\, N.\, Scorne
t\, E.\, & Varoquaux\, G. (2019). On the consistency of supervised learnin
g with missing values. arXiv preprint arXiv:1902.06931.\n\n[2] Le Morvan\,
M.\, Prost\, N.\, Josse\, J.\, Scornet\, E.\, & Varoquaux\, G. (2020). Li
near predictor on linearly-generated data with missing values: non consist
ency and solutions. AISTATS 2020.\n\nhttps://indico.math.cnrs.fr/event/635
1/contributions/4936/
LOCATION:Le Bois-Marie
URL:https://indico.math.cnrs.fr/event/6351/contributions/4936/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Input Similarity from the Neural Network Perspective
DTSTART;VALUE=DATE-TIME:20210205T135000Z
DTEND;VALUE=DATE-TIME:20210205T143000Z
DTSTAMP;VALUE=DATE-TIME:20211204T013913Z
UID:indico-contribution-6351-4940@indico.math.cnrs.fr
DESCRIPTION:Speakers: Guillaume Charpiat (LRI)\nGiven a trained neural net
work\, we aim at understanding how similar it considers any two samples. F
or this\, we express a proper definition of similarity from the neural net
work perspective (i.e. we quantify how indissociable two inputs A and B ar
e)\, by taking a machine learning viewpoint: how much a parameter variatio
n designed to change the output for A would impact the output for B as wel
l?\n\nWe study the mathematical properties of this similarity measure\, an
d show how to estimate sample density with it\, in low complexity\, enabli
ng new types of statistical analysis for neural networks. We also propose
to use it during training\, to enforce that examples known to be similar s
hould also be seen as similar by the network.\n\nWe then study the self-de
noising phenomenon encountered in regression tasks when training neural ne
tworks on datasets with noisy labels. We exhibit a multimodal image regist
ration task where almost perfect accuracy is reached\, far beyond label no
ise variance. Such an impressive self-denoising phenomenon can be explaine
d as a noise averaging effect over the labels of similar examples. We anal
yze data by retrieving samples perceived as similar by the network\, and a
re able to quantify the denoising effect without requiring true labels.\n\
nhttps://indico.math.cnrs.fr/event/6351/contributions/4940/
LOCATION:Le Bois-Marie
URL:https://indico.math.cnrs.fr/event/6351/contributions/4940/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Deep Unfolding of a Proximal Interior Point Method for Image Resto
ration
DTSTART;VALUE=DATE-TIME:20210205T110000Z
DTEND;VALUE=DATE-TIME:20210205T114000Z
DTSTAMP;VALUE=DATE-TIME:20211204T013913Z
UID:indico-contribution-6351-4939@indico.math.cnrs.fr
DESCRIPTION:Speakers: Emilie Chouzenoux (CVN)\nVariational methods have st
arted to be widely applied to ill-posed inverse problems since they have t
he ability to embed prior knowledge about the solution. However\, the leve
l of performance of these methods significantly depends on a set of parame
ters\, which can be estimated through computationally expensive and time-c
onsuming processes. In contrast\, deep learning offers very generic and ef
ficient architectures\, at the expense of explainability\, since it is oft
en used as a black-box\, without any fine control over its output. Deep un
folding provides a convenient approach to combine variational-based and de
ep learning approaches. Starting from a variational formulation for image
restoration\, we develop iRestNet [1]\, a neural network architecture obta
ined by unfolding an interior point proximal algorithm. Hard constraints\,
encoding desirable properties for the restored image\, are incorporated i
nto the network thanks to a logarithmic barrier\, while the barrier parame
ter\, the stepsize\, and the penalization weight are learned by the networ
k. We derive explicit expressions for the gradient of the proximity operat
or for various choices of constraints\, which allows training iRestNet wit
h gradient descent and backpropagation. In addition\, we provide theoretic
al results regarding the stability of the network. Numerical experiments o
n image deblurring problems show that the proposed approach outperforms bo
th state-of-the-art variational and machine learning methods in terms of i
mage quality.\n\n[1] C. Bertocchi\, E. Chouzenoux\, M.-C. Corbineau\, J.-C
. Pesquet and M. Prato. Deep Unfolding of a Proximal Interior Point Method
for Image Restoration. Inverse Problems\, vol. 36\, pp. 034005\, 2020.\n\
nhttps://indico.math.cnrs.fr/event/6351/contributions/4939/
LOCATION:Le Bois-Marie
URL:https://indico.math.cnrs.fr/event/6351/contributions/4939/
END:VEVENT
END:VCALENDAR