Speaker
Description
In supervised learning with kernel methods, the least-squares (prediction) error is classically the performance measure of interest. If the true target function is assumed to be an element of a Hilbert space, however, one can also be interested in the norm of the estimation error in that space (reconstruction error); this is of particular relevance in inverse problems, where the observed signal is the target after passing through a known linear operator. When the regularity of the target (in a suitable sense) is known, a common regularization parameter can achieve minimax-optimal error rates in both norms. When the regularity is unknown, which is usually the case, we address the question of a data-dependent selection rule for the regularization parameter that is adaptive to the unknown regularity of the target function and optimal both for the prediction error and for the reproducing kernel Hilbert space (reconstruction) norm error. We propose a modified Lepskii balancing principle using a varying family of norms. (Based on joint work with P. Mathé and N. Mücke.)
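To make the selection rule concrete, here is a minimal sketch (in Python, not from the talk) of a Lepskii-type balancing principle for choosing the ridge parameter in kernel ridge regression. The variance proxy sigma(), the constant kappa, and the use of a single empirical norm are illustrative assumptions; the work described above instead balances a varying family of norms to obtain adaptivity in both the prediction and the RKHS norm simultaneously.

# A minimal illustrative sketch of a Lepskii-type balancing principle,
# under placeholder assumptions (sigma, kappa, single empirical norm);
# not the calibrated procedure of the work described above.
import numpy as np

def gaussian_kernel(X, Y, bandwidth=0.5):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * bandwidth**2))

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(3.0 * X[:, 0]) + 0.3 * rng.standard_normal(n)

K = gaussian_kernel(X, X)
lams = np.logspace(-6, 0, 25)  # grid of regularization parameters, increasing

# Kernel ridge estimators f_lam, evaluated at the sample points.
fits = [K @ np.linalg.solve(K + n * lam * np.eye(n), y) for lam in lams]

def sigma(lam, noise=0.3):
    # Placeholder variance proxy, decreasing in lam (an assumption).
    return noise / np.sqrt(n * lam)

# Balancing principle: keep the largest lam whose estimator stays within a
# multiple of the variance level of every less-regularized estimator.
kappa = 4.0
chosen = 0
for i in range(len(lams)):
    balanced = all(
        np.sqrt(np.mean((fits[i] - fits[j]) ** 2)) <= kappa * sigma(lams[j])
        for j in range(i)
    )
    if balanced:
        chosen = i
    else:
        break
print(f"balancing principle selects lambda = {lams[chosen]:.2e}")

The early break mirrors the usual Lepskii rule: once the pairwise discrepancies exceed the variance level, bias is taken to dominate, and all more strongly regularized parameters are rejected.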