Speaker
Description
We propose a novel Wasserstein distributionally robust optimization (DRO) framework with regularization control that naturally leads to a family of regularized problems with user-controllable penalization mechanisms. Our approach bridges the gap between conventional DRO formulations and practical decision-making by explicitly incorporating adverse-scenario information into the optimization process, thereby enhancing robustness under unfavorable data conditions. In many stochastic programming applications, standard DRO methods based on Wasserstein distances can fail to capture decision-dependent perturbations or to adequately account for known out-of-sample adverse events. In contrast, our framework integrates adverse-scenario information to yield solutions that remain resilient when conditions deteriorate. We provide rigorous finite-sample guarantees and show that, as the sample size increases, the regularization parameter can be scaled so that both the optimal value and the optimal decisions converge to those of the true stochastic program; consequently, every accumulation point of the solution sequence is optimal. Extensive numerical experiments on applications including the newsvendor problem and portfolio optimization illustrate that incorporating adverse scenarios into the regularization term achieves an advantageous balance between out-of-sample performance and robustness without sacrificing efficiency in stable environments.
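As a rough illustration of the idea of penalizing adverse scenarios, the following sketch applies it to the newsvendor problem mentioned above. Everything here is hypothetical: the functions `newsvendor_loss` and `regularized_objective`, the weight `lam`, the demand distribution, and the specific adverse scenarios are illustrative choices, not the speaker's formulation, whose regularization arises from the Wasserstein DRO framework rather than an ad hoc penalty.

```python
import numpy as np

def newsvendor_loss(q, demand, c=1.0, p=2.0):
    """Per-scenario newsvendor cost: purchase cost minus revenue on units sold."""
    return c * q - p * np.minimum(q, demand)

def regularized_objective(q, samples, adverse, lam, c=1.0, p=2.0):
    """Empirical cost plus a user-weighted penalty on known adverse scenarios.

    lam controls how strongly the adverse demand scenarios are penalized;
    lam = 0 recovers the plain sample-average approximation (SAA).
    """
    saa = newsvendor_loss(q, samples, c, p).mean()
    penalty = newsvendor_loss(q, adverse, c, p).mean()
    return saa + lam * penalty

rng = np.random.default_rng(0)
samples = rng.gamma(shape=5.0, scale=4.0, size=200)  # historical demand (illustrative)
adverse = np.array([2.0, 4.0, 6.0])                  # hypothetical low-demand scenarios

# Grid search over order quantities, with and without the adverse-scenario penalty.
grid = np.linspace(0.0, 60.0, 601)
q_saa = grid[np.argmin([regularized_objective(q, samples, adverse, 0.0) for q in grid])]
q_reg = grid[np.argmin([regularized_objective(q, samples, adverse, 0.5) for q in grid])]
# Penalizing low-demand scenarios pushes the order quantity toward a more
# conservative (smaller) value than the plain SAA solution.
```

In this toy setting the penalty shifts the optimal order from roughly the median of the demand distribution toward a lower quantile, trading some average revenue for resilience when demand collapses.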