Accelerated variance-reduced methods for saddle-point problems

Abstract

We consider composite minimax optimization problems, where the goal is to find a saddle point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms with complexity bounds that are optimal up to logarithmic factors. In particular, we consider strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. To the best of our knowledge, these are the first nearly optimal algorithms for this setting.
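A minimal sketch of the problem class described above, using notation introduced here for illustration (the paper's own symbols and assumptions may differ):

\[
  \min_{x} \max_{y} \; \frac{1}{n} \sum_{i=1}^{n} f_i(x, y) \; + \; g(x) \; - \; h(y),
\]

where each $f_i$ is smooth (on average over $i$) and possibly non-bilinear in $(x, y)$, and $g$ and $h$ are simple composite regularizers for the primal variable $x$ and the dual variable $y$, respectively.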

Description
Keywords
Accelerated algorithms, Composite optimization, Minimax optimization, Saddle-point problem, Stochastic variance-reduced algorithms
Citation
Borodich, E., Tominin, V., Tominin, Y., Kovalev, D., Gasnikov, A., & Dvurechensky, P. (2022). Accelerated variance-reduced methods for saddle-point problems. EURO Journal on Computational Optimization, 10, 100048. https://doi.org/10.1016/j.ejco.2022.100048
License
CC BY 4.0 International