Speaker
Description
This paper revisits the theory of \textit{exact regularization}, where optimal solutions of a regularized convex optimization problem exhibit a phase-transition phenomenon and, under certain conditions, eventually coincide with those of the original unregularized problem. We examine this phenomenon from a norm-free perspective: instead of adopting norm-related assumptions, our results are established under conditions involving only Bregman divergence and convexity. We prove two key results: (1) a norm-free version of Lipschitz continuity of the regularized optimal solution, and (2) a phase-transition threshold for exact regularization to hold that depends solely on intrinsic problem parameters. Notably, our norm-free framework generalizes classical norm-dependent conditions, such as strong convexity of the regularization function, and thereby broadens applicability. Our theoretical results have applications in many data-driven optimization problems, such as integrated prediction-optimization, inverse optimization, and decentralized optimization. In these settings, exact regularization can potentially lead to faster convergence or tighter error bounds.
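For reference, the standard exact-regularization setup can be sketched as follows; the symbols $f$, $r$, $X$, $\delta$, and $\phi$ below are illustrative placeholders and are not taken from the paper's own notation:
\begin{align*}
&\text{original problem:} && \min_{x \in X} \; f(x), \\
&\text{regularized problem:} && \min_{x \in X} \; f(x) + \delta\, r(x), \qquad \delta > 0, \\
&\text{Bregman divergence of a convex } \phi: && D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle.
\end{align*}
Exact regularization is said to hold when there exists a threshold $\bar{\delta} > 0$ such that, for every $\delta \in (0, \bar{\delta}]$, any solution of the regularized problem also solves the original problem; the phase transition mentioned above is the passage of $\delta$ across this threshold.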