Search Results

  • Item
    Gradient methods for problems with inexact model of the objective
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2020) Stonyakin, Fedor; Dvinskikh, Darina; Dvurechensky, Pavel; Kroshnin, Alexey; Kuznetsova, Olesya; Agafonov, Artem; Gasnikov, Alexander; Tyurin, Alexander; Uribe, Cesar A.; Pasechnyuk, Dmitry; Artamonov, Sergei
    We consider optimization methods for convex minimization problems under inexact information on the objective function. We introduce an inexact model of the objective, which includes as particular cases the inexact oracle [19] and the relative smoothness condition [43]. We analyze a gradient method that uses this inexact model and obtain convergence rates for convex and strongly convex problems. To show potential applications of our general framework we consider three particular problems. The first is clustering by the electoral model introduced in [49]. The second is approximating the optimal transport distance, for which we propose a Proximal Sinkhorn algorithm. The third is approximating the optimal transport barycenter, for which we propose a Proximal Iterative Bregman Projections algorithm. We also illustrate the practical performance of our algorithms by numerical experiments.
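    As a minimal sketch of the inexact-oracle setting (a toy quadratic with a norm-bounded gradient error; all names and constants below are illustrative assumptions, not the paper's algorithm), a standard 1/L gradient step converges to a noise-sized neighborhood of the minimizer:

        import numpy as np

        L, delta = 2.0, 1e-3          # gradient Lipschitz constant, oracle error bound
        rng = np.random.default_rng(0)

        def inexact_grad(x):          # exact gradient of f(x) = ||x||^2 plus a
            noise = rng.normal(size=x.shape)          # perturbation of norm delta
            return 2.0 * x + delta * noise / np.linalg.norm(noise)

        x = np.ones(5)
        for _ in range(200):
            x = x - inexact_grad(x) / L               # standard 1/L gradient step

        print(np.linalg.norm(x))      # stagnates in a delta-sized neighborhood of 0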
  • Item
    On the optimal combination of tensor optimization methods
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2020) Kamzolov, Dmitry; Gasnikov, Alexander; Dvurechensky, Pavel
    We consider the problem of minimizing a sum of functions, each having Lipschitz p-th order derivatives with a different Lipschitz constant. To accelerate optimization in this case, we propose a general framework that yields near-optimal oracle complexity for each function in the sum separately, meaning, in particular, that the oracle for a function with a smaller Lipschitz constant is called fewer times. As a building block, we extend the current theory of tensor methods and show how to generalize near-optimal tensor methods to work with an inexact tensor step. Further, we investigate the situation when the functions in the sum have Lipschitz derivatives of different orders. For this situation, we propose a generic way to split the oracle complexity between the parts of the sum. Our method is not optimal, which leads to the open problem of the optimal combination of oracles of different orders.
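    A crude sketch of the oracle-separation idea under assumed toy quadratics (a gradient-sliding-style simplification, not the paper's near-optimal tensor scheme): the expensive gradient of f1 is queried once per outer step and frozen, while the cheap oracle of f2 is called many times inside a proximal inner loop.

        import numpy as np

        A = np.diag([10.0, 1.0])      # f1(x) = 0.5 x^T A x, expensive oracle
        grad_f1 = lambda x: A @ x
        grad_f2 = lambda x: x         # f2(x) = 0.5 ||x||^2, cheap oracle

        x = np.array([1.0, 1.0])
        gamma, step, K = 0.1, 0.05, 20
        for _ in range(100):          # 100 expensive oracle calls in total ...
            anchor, g1 = x, grad_f1(x)
            for _ in range(K):        # ... versus 2000 cheap oracle calls
                x = x - step * (g1 + grad_f2(x) + (x - anchor) / gamma)

        print(x)                      # approaches the joint minimizer 0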
  • Item
    Inexact tensor methods and their application to stochastic convex optimization
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Agafonov, Artem; Kamzolov, Dmitry; Dvurechensky, Pavel; Gasnikov, Alexander
    We propose a general non-accelerated tensor method under inexact information on higher-order derivatives, analyze its convergence rate, and provide sufficient conditions for this method to have complexity similar to that of the exact tensor method. As a corollary, we propose the first stochastic tensor method for convex optimization and obtain sufficient mini-batch sizes for each derivative.
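    A sketch of the stochastic second-order case (p = 2) under an assumed toy least-squares objective, with the tensor step replaced by a simple regularized Newton solve and illustrative mini-batch sizes; this is a simplification, not the paper's method:

        import numpy as np

        rng = np.random.default_rng(1)
        d, n = 3, 1000
        a = rng.normal(size=(n, d))        # data rows for f_i(x) = 0.5 * (a_i^T x)^2

        def batch_grad(x, m):              # mini-batch gradient estimate
            idx = rng.choice(n, size=m, replace=False)
            return (a[idx] * (a[idx] @ x)[:, None]).mean(axis=0)

        def batch_hess(x, m):              # mini-batch Hessian estimate
            idx = rng.choice(n, size=m, replace=False)
            return a[idx].T @ a[idx] / m

        x, M = np.ones(d), 1.0
        for _ in range(30):
            g = batch_grad(x, m=400)       # larger batch for the gradient
            H = batch_hess(x, m=100)       # smaller batch suffices for the Hessian
            x = x + np.linalg.solve(H + M * np.eye(d), -g)   # regularized Newton step

        print(np.linalg.norm(x))           # approaches the minimizer 0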
  • Item
    Tensor methods for strongly convex strongly concave saddle point problems and strongly monotone variational inequalities
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Ostroukhov, Petr; Kamalov, Rinat; Dvurechensky, Pavel; Gasnikov, Alexander
    In this paper we propose three tensor methods for strongly-convex-strongly-concave saddle-point problems (SPP). The first method is based on the assumption of higher-order smoothness (the derivative of order higher than two is Lipschitz continuous) and achieves a linear convergence rate. Under additional assumptions of first- and second-order smoothness of the objective, we connect the first method with a locally superlinearly convergent algorithm from the literature and develop a second method with global convergence and local superlinear convergence. The third method is a modified version of the second, with the focus on making the gradient of the objective small. Since we treat SPP as a particular case of variational inequalities, we also propose two methods for strongly monotone variational inequalities with the same complexity as those described above.
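    For orientation, a classical first-order extragradient sketch on the variational-inequality formulation of an assumed toy strongly-convex-strongly-concave objective; this illustrates the VI viewpoint only and is not one of the higher-order methods proposed here:

        import numpy as np

        # f(x, y) = 0.5 ||x||^2 + x^T y - 0.5 ||y||^2 has its saddle at (0, 0);
        # the VI operator stacks grad_x f and -grad_y f.
        def operator(z):
            x, y = z[:2], z[2:]
            return np.concatenate([x + y, -(x - y)])

        z, eta = np.ones(4), 0.3
        for _ in range(100):
            z_half = z - eta * operator(z)       # extrapolation step
            z = z - eta * operator(z_half)       # update using the leading point

        print(np.linalg.norm(z))                 # linear convergence to 0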
  • Item
    Inexact model: A framework for optimization and variational inequalities
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2020) Stonyakin, Fedor; Gasnikov, Alexander; Tyurin, Alexander; Pasechnyuk, Dmitry; Agafonov, Artem; Dvurechensky, Pavel; Dvinskikh, Darina; Piskunova, Victorya
    In this paper we propose a general algorithmic framework for first-order methods in optimization in a broad sense, including minimization problems, saddle-point problems, and variational inequalities. This framework allows one to obtain many known methods as special cases, including the accelerated gradient method, composite optimization methods, level-set methods, and proximal methods. The idea of the framework is to construct an inexact model of the main problem component, i.e., the objective function in optimization or the operator in variational inequalities. Besides reproducing known results, our framework allows one to construct new methods, which we illustrate by constructing a universal method for variational inequalities with composite structure. This method works for smooth and non-smooth problems with optimal complexity without a priori knowledge of the problem's smoothness. We also generalize our framework to strongly convex objectives and strongly monotone variational inequalities.
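    One concrete instance of a model step, assuming the composite case f + h with h(x) = lam * ||x||_1 (an illustrative special case, not the general framework): the minimizer of the model is exactly a proximal gradient step.

        import numpy as np

        L, lam = 2.0, 0.1

        def soft_threshold(v, t):               # prox of t * ||.||_1
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def model_step(x, grad_f):
            # argmin_y <grad_f(x), y - x> + lam*||y||_1 + (L/2)*||y - x||^2
            return soft_threshold(x - grad_f(x) / L, lam / L)

        b = np.array([1.0, -2.0, 0.0])
        grad_f = lambda x: x - b                 # f(x) = 0.5 * ||x - b||^2
        x = np.zeros(3)
        for _ in range(100):
            x = model_step(x, grad_f)

        print(x)    # soft-thresholded b: approximately [0.9, -1.9, 0.0]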
  • Item
    Distributed optimization with quantization for computing Wasserstein barycenters
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2020) Krawchenko, Roman; Uribe, César A.; Gasnikov, Alexander; Dvurechensky, Pavel
    We study the problem of the decentralized computation of entropy-regularized semi-discrete Wasserstein barycenters over a network. Building upon recent primal-dual approaches, we propose a sampling gradient quantization scheme that allows efficient communication and computation of approximate barycenters when the factor distributions are stored distributedly on arbitrary networks. We derive the communication and algorithmic complexity of the proposed algorithm, with explicit dependence on the size of the support, the number of distributions, and the desired accuracy. Numerical results validate our algorithmic analysis.
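    A sketch of an unbiased stochastic quantizer of the kind used for communication-efficient gradient exchange; the level grid and constants are assumptions for illustration, not the paper's exact scheme:

        import numpy as np

        rng = np.random.default_rng(2)

        def quantize(g, levels=16):
            scale = np.max(np.abs(g))
            if scale == 0.0:
                return g
            u = np.abs(g) / scale * levels          # position on the level grid
            low = np.floor(u)
            up = rng.random(g.shape) < (u - low)    # round up with prob. u - low
            return np.sign(g) * (low + up) * scale / levels   # E[quantize(g)] = g

        g = rng.normal(size=5)
        print(g)
        print(np.mean([quantize(g) for _ in range(20000)], axis=0))  # close to g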
  • Item
    Stopping rules for accelerated gradient methods with additive noise in gradient
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Vasin, Artem; Gasnikov, Alexander; Spokoiny, Vladimir
    In this article, we investigate an accelerated first-order method, namely the method of similar triangles, which is optimal for the class of convex (strongly convex) problems with a Lipschitz gradient. The paper considers a model of additive noise in the gradient and a Euclidean prox-structure for not necessarily bounded sets. Convergence estimates are obtained both with and without strong convexity, and a stopping criterion is proposed for non-strongly convex problems.
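    A sketch of the method of similar triangles with an additively noisy gradient and a naive stopping rule (toy quadratic objective; the noise model, threshold, and constants are illustrative assumptions, not the paper's exact criterion):

        import numpy as np

        L, delta, eps = 2.0, 1e-4, 1e-2
        rng = np.random.default_rng(3)
        b = np.array([1.0, -2.0, 3.0])

        def noisy_grad(x):                       # exact gradient of L/2 * ||x - b||^2
            noise = rng.normal(size=x.shape)     # plus an error of norm delta
            return L * (x - b) + delta * noise / np.linalg.norm(noise)

        x = u = np.zeros(3)
        A = 0.0
        for k in range(5000):
            alpha = (1.0 + np.sqrt(1.0 + 4.0 * L * A)) / (2.0 * L)
            A_new = A + alpha
            y = (alpha * u + A * x) / A_new      # "similar triangles" point
            g = noisy_grad(y)
            if np.linalg.norm(g) <= eps:         # stopping rule: target accuracy
                break                            # must dominate the noise level
            u = u - alpha * g
            x = (alpha * u + A * x) / A_new
            A = A_new

        print(k, np.linalg.norm(x - b))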
  • Item
    Alternating minimization methods for strongly convex optimization
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2020) Tupitsa, Nazarii; Dvurechensky, Pavel; Gasnikov, Alexander
    We consider alternating minimization procedures for convex optimization problems whose variables are divided into many blocks, each block being amenable to minimization with respect to its variables while the other blocks are fixed. In the case of two blocks, we prove a linear convergence rate for the alternating minimization procedure under the Polyak-Łojasiewicz condition, which can be seen as a relaxation of the strong convexity assumption. Under the strong convexity assumption in the many-block setting, we provide an accelerated alternating minimization procedure whose linear rate depends on the square root of the condition number, as opposed to the condition number itself for the non-accelerated method.
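    A minimal two-block sketch, assuming a strongly convex quadratic in (x, y): each block is minimized exactly while the other is held fixed, giving linear convergence.

        import numpy as np

        # f(x, y) = 0.5 x^T A x + x^T C y + 0.5 y^T B y + p^T x + q^T y
        A, B, C = 4.0 * np.eye(2), 3.0 * np.eye(2), 0.5 * np.eye(2)
        p, q = np.array([1.0, 2.0]), np.array([-1.0, 0.5])

        x, y = np.zeros(2), np.zeros(2)
        for _ in range(50):
            x = np.linalg.solve(A, -(C @ y + p))     # exact minimization in x
            y = np.linalg.solve(B, -(C.T @ x + q))   # exact minimization in y

        print(x, y)    # linear convergence to the unique minimizer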
  • Item
    Flexible modification of Gauss-Newton method and its stochastic extension
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Yudin, Nikita; Gasnikov, Alexander
    This work presents a novel version of the recently developed Gauss-Newton method for solving systems of nonlinear equations, based on an upper bound on the solution residual and on quadratic regularization ideas. We obtain global convergence bounds for this method and, under natural non-degeneracy assumptions, present local quadratic convergence results. We develop stochastic optimization algorithms for the presented Gauss-Newton method and justify sublinear and linear convergence rates for these algorithms using the weak growth condition (WGC) and the Polyak-Łojasiewicz (PL) inequality. We show that the Gauss-Newton method in the stochastic setting can effectively find a solution under the WGC and the PL condition, matching the convergence rate of the deterministic method. The suggested method unifies most practically used modifications of the Gauss-Newton method and can easily interpolate between them, providing a flexible and convenient method that is easily implementable using standard techniques of convex optimization.
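    For orientation, a sketch of a quadratically regularized Gauss-Newton step in the classical Levenberg-Marquardt style (the toy system and damping schedule are assumptions; the paper's adaptive scheme differs):

        import numpy as np

        def F(x):        # toy nonlinear system with a root at (1, 1)
            return np.array([x[0]**2 - 1.0, x[0] * x[1] - 1.0])

        def jac(x):      # Jacobian of F
            return np.array([[2.0 * x[0], 0.0],
                             [x[1],       x[0]]])

        x, mu = np.array([2.0, 0.5]), 1.0
        for _ in range(30):
            J, r = jac(x), F(x)
            # damped normal equations: (J^T J + mu I) h = -J^T r
            h = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
            x = x + h
            mu = max(0.5 * mu, 1e-8)     # crude decrease of the damping

        print(x, np.linalg.norm(F(x)))   # fast local convergence to (1, 1)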
  • Item
    Zeroth-order algorithms for smooth saddle-point problems
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Sadiev, Abdurakhmon; Beznosikov, Aleksandr; Dvurechensky, Pavel; Gasnikov, Alexander
    Saddle-point problems have recently gained increased attention from the machine learning community, mainly due to applications in training Generative Adversarial Networks using stochastic gradients. At the same time, in some applications only a zeroth-order oracle is available. In this paper, we propose several algorithms to solve stochastic smooth (strongly) convex-concave saddle-point problems using zeroth-order oracles and estimate their convergence rate and its dependence on the dimension n of the variable. In particular, our analysis shows that when the feasible set is a direct product of two simplices, our convergence rate for the stochastic term is only a log n factor worse than that of first-order methods. We also consider a mixed setup and develop 1/2th-order methods that use a zeroth-order oracle for the minimization part and a first-order oracle for the maximization part. Finally, we demonstrate the practical performance of our zeroth-order and 1/2th-order methods on practical problems.
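    A sketch of a two-point zeroth-order gradient estimator plugged into plain gradient descent-ascent on an assumed toy convex-concave objective; the smoothing radius, step sizes, and the descent-ascent scheme are illustrative simplifications, not the paper's exact algorithms:

        import numpy as np

        rng = np.random.default_rng(4)
        f = lambda x, y: 0.5 * x @ x + x @ y - 0.5 * y @ y   # saddle at (0, 0)

        def zo_grad(phi, z, tau=1e-5):
            # randomized two-point estimator of grad phi(z):
            # E[d * (phi(z + tau e) - phi(z - tau e)) / (2 tau) * e] ~ grad phi(z)
            e = rng.normal(size=z.shape)
            e /= np.linalg.norm(e)
            return z.size * (phi(z + tau * e) - phi(z - tau * e)) / (2.0 * tau) * e

        x, y, eta = np.ones(2), np.ones(2), 0.05
        for _ in range(5000):
            gx = zo_grad(lambda v: f(v, y), x)      # zeroth-order grad_x estimate
            gy = zo_grad(lambda v: f(x, v), y)      # zeroth-order grad_y estimate
            x, y = x - eta * gx, y + eta * gy       # descent in x, ascent in y

        print(np.linalg.norm(x), np.linalg.norm(y))  # near the saddle point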