Search Results

  • Item
    Stochastic approximation versus sample average approximation for Wasserstein barycenters
    (London [etc.] : Taylor & Francis, 2022) Dvinskikh, Darina
    In the machine learning and optimization community, there are two main approaches to the convex risk minimization problem, namely Stochastic Approximation (SA) and Sample Average Approximation (SAA). In terms of oracle complexity (the required number of stochastic gradient evaluations), both approaches are considered equivalent on average (up to a logarithmic factor). The total complexity depends on the specific problem; however, starting from the work [A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro, Robust stochastic approximation approach to stochastic programming, SIAM J. Optim. 19 (2009), pp. 1574–1609], it has been generally accepted that SA is better than SAA. We show that for the Wasserstein barycenter problem this superiority can be inverted. We provide a detailed comparison by stating the complexity bounds for SA and SAA implementations that calculate barycenters defined with respect to optimal transport distances and entropy-regularized optimal transport distances. As a byproduct, we also construct confidence intervals in the ℓ2-norm for the barycenter defined with respect to entropy-regularized optimal transport distances. The preliminary results are derived for a general convex optimization problem given by an expectation, so they have applications beyond the Wasserstein barycenter problem. (A schematic comparison of the SA and SAA schemes is sketched after this listing.)
  • Item
    On the complexity of approximating Wasserstein barycenter
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2019) Kroshnin, Alexey; Dvinskikh, Darina; Dvurechensky, Pavel; Gasnikov, Alexander; Tupitsa, Nazarii; Uribe, César A.
    We study the complexity of approximating the Wasserstein barycenter of discrete measures, or histograms, by contrasting two alternative approaches that both use entropic regularization. We provide a novel analysis for our approach based on the Iterative Bregman Projections (IBP) algorithm to approximate the original non-regularized barycenter. We also obtain the complexity bound for an alternative accelerated-gradient-descent-based approach and compare it with the bound obtained for IBP. As a byproduct, we show that the regularization parameter in both approaches has to be proportional to the desired accuracy ε, which causes instability of both algorithms when the desired accuracy is high. To overcome this issue, we propose a novel proximal-IBP algorithm, which can be seen as a proximal gradient method that uses IBP at each iteration to make the proximal step. We also consider the question of scalability of these algorithms using approaches from distributed optimization, and show that the first algorithm can be implemented in a centralized distributed setting (master/slave), while the second one is amenable to a more general decentralized distributed setting with an arbitrary network topology. (A sketch of the standard IBP iterations is given after this listing.)
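    As a rough illustration of the SA/SAA dichotomy discussed in the first result above, the sketch below contrasts the two schemes on a generic stochastic problem min_x E[f(x, ξ)]; the quadratic loss, step sizes, and sample sizes are illustrative assumptions and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    dim, lam = 5, 0.1                      # dimension and strong-convexity parameter (assumed)

    def stoch_grad(x, xi):
        # gradient of the illustrative loss f(x, xi) = 0.5*||x - xi||^2 + 0.5*lam*||x||^2
        return (x - xi) + lam * x

    def sample():
        # stream of noisy observations, standing in for the stochastic oracle
        return np.ones(dim) + rng.normal(scale=0.5, size=dim)

    # SA: one stochastic gradient step per fresh sample, with decreasing step sizes
    x_sa = np.zeros(dim)
    for k in range(1, 2001):
        x_sa -= stoch_grad(x_sa, sample()) / (lam * k)

    # SAA: draw N samples once, then minimize the empirical average (here by plain gradient descent)
    xis = np.stack([sample() for _ in range(2000)])
    x_saa = np.zeros(dim)
    for _ in range(500):
        full_grad = (x_saa - xis).mean(axis=0) + lam * x_saa
        x_saa -= 0.5 * full_grad

    Both iterates approximate the minimizer of the expected loss; which scheme is cheaper in terms of oracle calls is exactly the question the paper revisits for the barycenter setting.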
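    For the second result, the sketch below shows the classical fixed-support Iterative Bregman Projections (Sinkhorn-like) iterations for the entropy-regularized barycenter of histograms; the grid, cost matrix, regularization value, and iteration count are illustrative assumptions, and the code follows the standard IBP scheme rather than the paper's proximal-IBP variant.

    import numpy as np

    def ibp_barycenter(P, C, gamma=0.01, weights=None, n_iters=500):
        # P: n x m matrix whose columns are histograms on a common support
        # C: n x n cost matrix on that support; gamma: entropic regularization (assumed value)
        n, m = P.shape
        w = np.full(m, 1.0 / m) if weights is None else weights
        K = np.exp(-C / gamma)                              # Gibbs kernel
        U, V = np.ones((n, m)), np.ones((n, m))
        for _ in range(n_iters):
            U = P / (K @ V)                                 # row scaling: match the marginals p_k
            KtU = K.T @ U
            q = np.exp((w * np.log(V * KtU)).sum(axis=1))   # weighted geometric mean of column marginals
            V = q[:, None] / KtU                            # column scaling: match the common barycenter q
        return q

    # toy usage: barycenter of two Gaussian-like histograms on a 1-D grid
    grid = np.linspace(0.0, 1.0, 50)
    C = (grid[:, None] - grid[None, :]) ** 2
    P = np.stack([np.exp(-(grid - 0.25) ** 2 / 0.01),
                  np.exp(-(grid - 0.75) ** 2 / 0.01)], axis=1)
    P /= P.sum(axis=0)
    q = ibp_barycenter(P, C, gamma=0.01)

    As the abstract notes, the regularization parameter has to scale with the target accuracy ε; shrinking gamma makes the Gibbs kernel exp(-C/gamma) underflow and the ratios above degrade numerically, which is the instability the proposed proximal-IBP is designed to avoid.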