28 July 2025 to 1 August 2025
Timezone: Europe/Paris

Condition Number Shrinkage by Joint Distributionally Robust Covariance-Precision Estimation

1 August 2025, 10:45
30m
F102

Contributed talk: (Distributionally) Robust Optimization

Speaker

Mr. Renjie Chen (The Chinese University of Hong Kong)

Description

We propose a distributionally robust formulation for the simultaneous estimation of the covariance and precision matrix of a random vector. The proposed model minimizes the worst-case weighted sum of Stein's loss of the precision matrix estimator and the Frobenius loss of the covariance estimator against all distributions from an ambiguity set centered at the empirical distribution. The radius of the ambiguity set is measured via convex spectral divergences. We show that the proposed distributionally robust estimation model reduces to convex optimization problems and thus gives rise to computationally tractable estimators. The estimators, as an outcome of distributional robustness, are shown to be nonlinear shrinkage estimators. The eigenvalues of the estimators are shrunk nonlinearly towards a scalar matrix, where the scalar is determined by the weight coefficient of the loss terms. We show that the shrinkage effect improves the condition number of the estimator. We provide explicit expressions for the shrinkage estimators induced by the Kullback-Leibler divergence, Wasserstein divergence, symmetrized Stein divergence, and Frobenius divergence. We also provide a parameter-tuning scheme that adjusts the shrinkage target and intensity and is asymptotically optimal. Numerical experiments on synthetic and real data show that our shrinkage estimators perform competitively against state-of-the-art estimators in practical applications.
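To illustrate the general idea of eigenvalue shrinkage toward a scalar matrix and its effect on the condition number, here is a minimal sketch. It uses simple linear shrinkage toward the mean eigenvalue, not the paper's distributionally robust estimators; the function name `shrink_covariance` and the intensity parameter `alpha` are illustrative assumptions standing in for the weight coefficient of the loss terms.

```python
import numpy as np

def shrink_covariance(S, alpha=0.5):
    """Shrink the eigenvalues of a sample covariance matrix S toward a
    scalar target (the mean eigenvalue times the identity).

    This is a generic linear-shrinkage illustration, not the paper's
    divergence-induced nonlinear shrinkage; `alpha` controls intensity.
    """
    eigvals, eigvecs = np.linalg.eigh(S)
    target = eigvals.mean()  # scalar matrix target mu * I
    shrunk = (1 - alpha) * eigvals + alpha * target
    return eigvecs @ np.diag(shrunk) @ eigvecs.T

# Ill-conditioned sample covariance from relatively few samples
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 10))   # n = 20 samples, p = 10 dimensions
S = np.cov(X, rowvar=False)
S_hat = shrink_covariance(S, alpha=0.5)

cond_before = np.linalg.cond(S)
cond_after = np.linalg.cond(S_hat)  # shrinkage compresses the spectrum
```

Pulling the extreme eigenvalues toward the common target reduces the ratio of largest to smallest eigenvalue, which is exactly the condition-number improvement the abstract refers to.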

Author

Mr. Renjie Chen (The Chinese University of Hong Kong)

Co-authors

Prof. Huifu Xu (The Chinese University of Hong Kong)
Prof. Viet Anh Nguyen (The Chinese University of Hong Kong)

Presentation materials