28 July 2025 to 1 August 2025
Europe/Paris timezone

On adaptive kernel learning

28 Jul 2025, 17:30
30m
F207

Contributed talk: Machine learning (ML)

Speaker

Vladimir Norkin (V.M.Glushkov Institute of Cybernetics)

Description

The study proposes and explores a broad area of application for machine learning methods in applied mathematics, namely the parametric analysis of mathematical models by machine learning methods, in particular by adaptive kernel support vector machines. The kernel support vector machine is extended with the ability to use a continuum (multivariate parametric) family of kernels to approximate the dependence under study; for example, variable-width kernels and variable anisotropic kernels are proposed to reduce the approximation error. The Reproducing Kernel Hilbert Space (RKHS), standard in kernel learning, is thus replaced by a much broader kernel subspace of square-integrable functions. Whereas kernels in an RKHS act on functions (via the inner product) as Dirac delta functions, kernels in L_2 act on functions as smoothing (mollifying) operators. An essential problem in machine learning is the selection of regularization, in particular the scale of the regularization parameter; in the paper, the regularization parameter is selected by minimizing the approximation error on training and/or test data. To train the adaptive support vector machine, a combination of traditional kernel-weight optimization methods (quadratic, as in SVM) and global optimization methods for the kernel parameters is proposed.
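The two-level scheme described above (a quadratic solve for the kernel weights nested inside a global search over kernel parameters and the regularization scale) can be sketched as follows. This is a minimal illustrative assumption, not the paper's implementation: it uses a 1-D regression problem, per-center variable-width Gaussian kernels, a ridge-type quadratic subproblem in place of the full SVM quadratic program, and SciPy's `differential_evolution` as the global optimizer; all function names and the toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

def gaussian_features(x, centers, widths):
    """Variable-width Gaussian kernels: column j has its own width widths[j]."""
    d = x[:, None] - centers[None, :]
    return np.exp(-(d ** 2) / (2.0 * widths[None, :] ** 2))

def fit_weights(K, y, lam):
    """Inner quadratic subproblem: ridge-regularized least squares for weights."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

def validation_error(params, x_tr, y_tr, x_val, y_val, centers):
    """Outer objective: kernel widths + log10(lambda) -> validation MSE."""
    widths, lam = params[:-1], 10.0 ** params[-1]
    w = fit_weights(gaussian_features(x_tr, centers, widths), y_tr, lam)
    pred = gaussian_features(x_val, centers, widths) @ w
    return np.mean((pred - y_val) ** 2)

# Toy data (illustrative): noisy samples of sin(x) for training,
# noiseless samples for validation.
rng = np.random.default_rng(0)
x_tr = np.sort(rng.uniform(-3, 3, 80))
y_tr = np.sin(x_tr) + 0.05 * rng.standard_normal(80)
x_val = np.sort(rng.uniform(-3, 3, 40))
y_val = np.sin(x_val)
centers = np.linspace(-3, 3, 8)

# Global optimization over the per-center widths and the regularization
# scale, with the quadratic weight solve nested inside the objective.
bounds = [(0.1, 3.0)] * len(centers) + [(-8, 0)]  # widths, log10(lambda)
res = differential_evolution(
    validation_error, bounds,
    args=(x_tr, y_tr, x_val, y_val, centers),
    seed=0, maxiter=30, tol=1e-6,
)
print(f"validation MSE: {res.fun:.4f}")
```

The nesting keeps the easy part of the problem (weights, convex and quadratic) inside the hard part (kernel parameters, non-convex), so the global optimizer only searches the low-dimensional parameter space; treating the regularization scale as one more coordinate of that search implements the error-minimizing selection mentioned in the abstract.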

Author

Vladimir Norkin (V.M.Glushkov Institute of Cybernetics)

Presentation materials

No documents.