Accelerated variance-reduced methods for saddle-point problems

Date
2022
Volume
10
Issue
Journal
EURO Journal on Computational Optimization
Series Title
Book Title
Publisher
Amsterdam : Elsevier
Abstract

We consider composite minimax optimization problems in which the goal is to find a saddle point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms with complexity bounds that are optimal up to logarithmic factors. In particular, we consider strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. To the best of our knowledge, these are the first nearly optimal algorithms for this setting.
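
For a concrete picture of the setting, the problem class above can be written as min_x max_y (1/n) Σ_{i=1}^n f_i(x, y) + g(x) − h(y), with smooth non-bilinear components f_i and simple regularizers g, h handled via their proximal maps. The sketch below is purely illustrative and is not the algorithm from the paper: it shows one proximal extragradient step driven by an SVRG-style variance-reduced estimate of the saddle-point operator, assuming hypothetical user-supplied helpers grad_fi, prox_g, and prox_h.

```python
# Illustrative sketch only (not the authors' method): one extragradient step
# with an SVRG-type variance-reduced estimate of the saddle-point operator
#   F(x, y) = ( (1/n) sum_i grad_x f_i(x, y), -(1/n) sum_i grad_y f_i(x, y) ).
# grad_fi, prox_g, prox_h and their signatures are assumed placeholders.
import numpy as np

def svrg_extragradient_step(x, y, w_x, w_y, full_op, grad_fi, n,
                            prox_g, prox_h, eta, rng):
    """One proximal extragradient step with an SVRG gradient estimator.

    w_x, w_y -- snapshot (reference) point
    full_op  -- full operator (F_x, F_y) precomputed at the snapshot
    grad_fi  -- grad_fi(i, x, y) -> (grad_x f_i, grad_y f_i)
    prox_g/h -- proximal maps of the composite regularizers
    """
    i = rng.integers(n)                       # sample one component index
    gx_z, gy_z = grad_fi(i, x, y)             # component gradient at current point
    gx_w, gy_w = grad_fi(i, w_x, w_y)         # component gradient at snapshot
    Fx = gx_z - gx_w + full_op[0]             # variance-reduced primal estimate
    Fy = gy_z - gy_w + full_op[1]             # variance-reduced dual estimate

    # Leading (extrapolation) point: gradient/ascent step plus prox.
    x_half = prox_g(x - eta * Fx, eta)
    y_half = prox_h(y + eta * Fy, eta)

    # Re-estimate the operator at the leading point with the same sample.
    gx_h, gy_h = grad_fi(i, x_half, y_half)
    Fx2 = gx_h - gx_w + full_op[0]
    Fy2 = gy_h - gy_w + full_op[1]

    # Update from the original point using the leading-point estimate.
    x_new = prox_g(x - eta * Fx2, eta)
    y_new = prox_h(y + eta * Fy2, eta)
    return x_new, y_new
```

In variance-reduced schemes of this kind, the snapshot (w_x, w_y) and the full operator at it are refreshed periodically (or with a small probability per step in loopless variants), which is what drives the variance of the stochastic estimate to zero and enables accelerated rates.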

Description
Keywords
Citation
Borodich, E., Tominin, V., Tominin, Y., Kovalev, D., Gasnikov, A., & Dvurechensky, P. (2022). Accelerated variance-reduced methods for saddle-point problems. EURO Journal on Computational Optimization, 10, 100048. Amsterdam: Elsevier. https://doi.org/10.1016/j.ejco.2022.100048
Collections
License
CC BY 4.0