Exact converging bounds for Stochastic Dual Dynamic Programming via Fenchel duality
Published in SIAM Journal on Optimization, 2020
Recommended citation: Leclere, Vincent, et al. "Exact converging bounds for stochastic dual dynamic programming via Fenchel duality." SIAM Journal on Optimization 30.2 (2020): 1223-1250. https://epubs.siam.org/doi/abs/10.1137/19M1258876
The stochastic dual dynamic programming (SDDP) algorithm has become one of the main tools used to address convex multistage stochastic optimal control problems. Recently, a large amount of work has been devoted to improving the convergence speed of the algorithm through cut selection and regularization, and to extending its field of application to nonlinear, integer, or risk-averse problems. However, one of the main downsides of the algorithm remains the difficulty of computing an upper bound on the optimal value: this bound is usually estimated through Monte Carlo methods and is therefore hard to use in a stopping criterion for the algorithm. In this paper we present a dual SDDP algorithm that yields a converging, exact upper bound on the optimal value of the optimization problem. As an easy consequence of our approach, we show how to compute an alternative control policy based on an inner approximation of the Bellman value functions instead of the outer approximation given by the standard SDDP algorithm. We illustrate the approach on an energy production problem involving production zones and transportation links between them. The numerical experiments we carry out on this example show the effectiveness of the method.
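To give intuition for the outer vs. inner approximations mentioned in the abstract, here is a minimal, hypothetical sketch (not code from the paper): for a convex value function, supporting hyperplanes ("cuts") at sample points form an outer approximation that lower-bounds the function, while convex combinations of known function values form an inner approximation that upper-bounds it. The toy function `V` and the sample points below are illustrative assumptions only.

```python
def V(x):
    # Toy convex "value function" standing in for a Bellman function
    return x * x

# Sample points at which V and its slope are assumed known
points = [-2.0, -1.0, 0.0, 1.0, 2.0]

def outer(x):
    # Outer (lower) approximation: max over supporting hyperplanes (cuts)
    # V(p) + V'(p) * (x - p), with V'(p) = 2 * p for this toy V
    return max(V(p) + 2 * p * (x - p) for p in points)

def inner(x):
    # Inner (upper) approximation: linear interpolation between neighboring
    # sample points, i.e., the best convex combination of known values
    best = float("inf")
    for a, b in zip(points, points[1:]):
        if a <= x <= b:
            t = (x - a) / (b - a)
            best = min(best, (1 - t) * V(a) + t * V(b))
    return best

x = 0.5
# By convexity, the true value is sandwiched between the two approximations
assert outer(x) <= V(x) <= inner(x)
```

Standard SDDP builds only the cut-based outer approximation (hence a lower bound); the paper's dual algorithm and inner-approximation policy supply the matching converging upper bound.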