Session chairs
Plenary session: Francis Bach
- Guanghui Lan (Georgia Tech)
Plenary session: Alois Pichler
- Andrzej Ruszczynski (Rutgers University)
Plenary session: Huifu Xu
- Daniel Kuhn (EPFL)
Plenary session: Jim Luedtke
- David Morton
Plenary session: Francesca Maggioni
- Guzin Bayraksan (The Ohio State University)
Plenary session: Erick Delage
- Claudia Sagastizábal
We consider distances between probability measures from varying angles. We discuss balanced and unbalanced transport, entropic regularization, and the maximum mean discrepancy distance.
Quantization is the approximation of probability measures by simple, discrete measures. Quantized measures behave differently under these metrics, an aspect which the talk addresses as well.
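For orientation, standard textbook forms of two of the distances mentioned above, the entropic-regularized transport cost and the maximum mean discrepancy, read (in generic notation; the precise variants treated in the talk may differ)
\[
W_\varepsilon(\mu,\nu) = \inf_{\pi \in \Pi(\mu,\nu)} \int c(x,y)\, \mathrm{d}\pi(x,y) + \varepsilon\, \mathrm{KL}\bigl(\pi \,\|\, \mu \otimes \nu\bigr),
\qquad
\mathrm{MMD}_k(\mu,\nu) = \Bigl\| \int k(x,\cdot)\, \mathrm{d}\mu(x) - \int k(y,\cdot)\, \mathrm{d}\nu(y) \Bigr\|_{\mathcal H_k},
\]
where \(\Pi(\mu,\nu)\) is the set of couplings of \(\mu\) and \(\nu\), \(c\) is a cost function, and \(\mathcal H_k\) is the reproducing kernel Hilbert space of the kernel \(k\). Unbalanced transport relaxes the marginal constraints defining \(\Pi(\mu,\nu)\) by penalizing deviations from \(\mu\) and \(\nu\) instead of enforcing them exactly.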
Preference robust optimization (PRO) is a relatively new area of robust optimization. In this talk, I give an overview of recent research on utility-based PRO models and computational methods, primarily conducted by my collaborators and myself over the past few years. I begin with a description of the one-stage maximin utility PRO model, where the true utility function representing the decision...
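In generic notation (the talk's exact assumptions on the ambiguity set may differ), a one-stage maximin utility PRO model takes the form
\[
\max_{x \in X} \; \min_{u \in \mathcal U} \; \mathbb{E}_{\xi}\bigl[ u\bigl( f(x,\xi) \bigr) \bigr],
\]
where \(x\) is the decision, \(\xi\) the random outcome, \(f(x,\xi)\) the resulting reward, and \(\mathcal U\) an ambiguity set of utility functions consistent with the preference information elicited from the decision maker (for instance, pairwise comparisons of lotteries).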
Stochastic integer programs model problems where discrete decisions must be made under uncertainty. This combination provides significant modeling power, leading to a wide variety of applications such as supply chain network design, power systems design and operations, and service systems design and operations. This combination also leads to computational challenges due to the need to...
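As a minimal illustration of this model class (generic notation, not the specific formulations of the talk), a two-stage stochastic integer program can be written as
\[
\min_{x \in X} \; c^\top x + \mathbb{E}_{\xi}\bigl[ Q(x,\xi) \bigr],
\qquad
Q(x,\xi) = \min_{y} \Bigl\{ q(\xi)^\top y : W(\xi)\, y \ge h(\xi) - T(\xi)\, x,\; y \in \mathbb{Z}_{+}^{p} \times \mathbb{R}_{+}^{n-p} \Bigr\},
\]
where integrality may be imposed on the first-stage variables \(x\), the second-stage variables \(y\), or both; the nonconvexity of the recourse function \(Q(\cdot,\xi)\) caused by integrality is a principal source of the computational difficulty alluded to above.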
Many real-world decision problems are dynamic and affected by uncertainty. Stochastic Programming provides a powerful approach to handle this uncertainty within a multi-period decision framework. However, as the number of stages increases, the computational complexity of these problems grows exponentially, posing significant challenges. To tackle this, approximation techniques are often used...
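In nested form (under standard assumptions, with stagewise costs \(f_t\) and feasible sets \(X_t\) as generic placeholders), a \(T\)-stage problem reads
\[
\min_{x_1 \in X_1} f_1(x_1) + \mathbb{E}\Bigl[\, \min_{x_2 \in X_2(x_1,\xi_2)} f_2(x_2,\xi_2) + \mathbb{E}\bigl[\, \cdots + \mathbb{E}\bigl[ \min_{x_T \in X_T(x_{T-1},\xi_T)} f_T(x_T,\xi_T) \bigr] \cdots \bigr] \Bigr].
\]
On a scenario tree with \(b\) realizations per stage, the number of scenarios is \(b^{T-1}\), which makes the exponential growth in the number of stages explicit and motivates bounding and approximation techniques.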
This talk surveys recent developments in reinforcement learning (RL) methods for risk-aware model-free decision-making in Markov decision processes (MDPs). In the discounted setting, we adapt two popular risk-neutral RL methods to account for risk aversion. The first approach minimizes a dynamic utility-based shortfall risk measure, while the other optimizes a specific quantile of the total...
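As background, static versions of the two risk criteria named above (the dynamic formulations used in the talk may differ) are the utility-based shortfall risk with loss function \(\ell\) and threshold \(\lambda\), and the \(\alpha\)-quantile of the discounted return \(G = \sum_{t \ge 0} \gamma^t r_t\):
\[
\mathrm{SR}_{\ell,\lambda}(G) = \inf\bigl\{ m \in \mathbb{R} : \mathbb{E}\bigl[ \ell(-G - m) \bigr] \le \lambda \bigr\},
\qquad
q_\alpha(G) = \inf\bigl\{ t \in \mathbb{R} : \mathbb{P}(G \le t) \ge \alpha \bigr\},
\]
where \(\ell\) is increasing and convex. Optimizing such a criterion in place of the expected return is what makes the adapted RL methods risk averse.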