Search Results

  • Item
    Convergence bounds for empirical nonlinear least-squares
    (Les Ulis : EDP Sciences, 2022) Eigel, Martin; Schneider, Reinhold; Trunschke, Philipp
    We consider best approximation problems in a nonlinear subset ℳ of a Banach space of functions (𝒱, ∥•∥). The norm is assumed to be a generalization of the L2-norm for which only a weighted Monte Carlo estimate ∥•∥n can be computed. The objective is to obtain an approximation v ∈ ℳ of an unknown function u ∈ 𝒱 by minimizing the empirical norm ∥u − v∥n. We consider this problem for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and is independent of the specified nonlinear least squares setting. Several model classes are examined and the analytical statements about the RIP are compared to existing sample complexity bounds from the literature. We find that for well-studied model classes our general bound is weaker but exhibits many of the same properties as these specialized bounds. Notably, we demonstrate the advantage of an optimal sampling density (as known for linear spaces) for sets of functions with sparse representations.
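    To make the setup concrete, here is a minimal sketch of the idea (a toy example of ours, not code from the paper): the empirical norm is a weighted Monte Carlo estimate of the L2 norm, samples are drawn from a non-uniform density (Chebyshev, a common choice for polynomial model classes), and importance weights keep the estimate unbiased. The target function, the Legendre basis, and all parameters are illustrative assumptions.

      import numpy as np

      # Toy setting (our assumptions): approximate u on [-1, 1] in a linear
      # polynomial space by minimizing the empirical norm ||u - v||_n, a weighted
      # Monte Carlo estimate of the L2 norm w.r.t. the uniform measure.
      rng = np.random.default_rng(0)
      u = lambda x: np.exp(x) * np.sin(3 * x)   # unknown target (toy choice)
      d, n = 6, 200                             # polynomial degree, sample count

      # Sample from a non-uniform density rho (Chebyshev/arcsine, heavier at the
      # endpoints) and reweight so that the empirical norm is unbiased:
      # ||v||_n^2 = (1/n) * sum_i v(x_i)^2 * w_i with w_i = (1/2) / rho(x_i).
      x = np.cos(np.pi * rng.random(n))             # arcsine-distributed samples
      rho = 1.0 / (np.pi * np.sqrt(1.0 - x**2))     # their density on [-1, 1]
      w = 0.5 / rho                                 # importance weights

      # Minimizing ||u - v||_n over the linear space is a weighted least squares
      # problem in the coefficients c of a Legendre expansion.
      V = np.polynomial.legendre.legvander(x, d)
      sw = np.sqrt(w)
      c, *_ = np.linalg.lstsq(sw[:, None] * V, sw * u(x), rcond=None)

      xe = np.linspace(-1, 1, 1000)
      err = np.max(np.abs(u(xe) - np.polynomial.legendre.legval(xe, c)))
      print(f"max error of the empirical best approximation: {err:.2e}")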
  • Item
    Adaptive stochastic Galerkin FEM for lognormal coefficients in hierarchical tensor representations
    (Berlin ; Heidelberg : Springer, 2020) Eigel, Martin; Marschall, Manuel; Pfeffer, Max; Schneider, Reinhold
    Stochastic Galerkin methods for non-affine coefficient representations are known to cause major difficulties from theoretical and numerical points of view. In this work, an adaptive Galerkin FE method for linear parametric PDEs with lognormal coefficients discretized in Hermite chaos polynomials is derived. It employs problem-adapted function spaces to ensure solvability of the variational formulation. The inherently high computational complexity of the parametric operator is made tractable by using hierarchical tensor representations. For this, a new tensor train format of the lognormal coefficient is derived and verified numerically. The central novelty is the derivation of a reliable residual-based a posteriori error estimator. This can be regarded as a unique feature of stochastic Galerkin methods. It allows for an adaptive algorithm to steer the refinements of the physical mesh and the anisotropic Wiener chaos polynomial degrees. For the evaluation of the error estimator to become feasible, a numerically efficient tensor format discretization is developed. Benchmark examples with unbounded lognormal coefficient fields illustrate the performance of the proposed Galerkin discretization and the fully adaptive algorithm.
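    The hierarchical tensor representations that make the parametric operator tractable can be illustrated with a toy tensor train (TT) compression. The sketch below (our own minimal example, not the paper's construction for the lognormal coefficient) builds TT cores by sequential truncated SVDs:

      import numpy as np

      def tt_svd(a, eps=1e-10):
          """Toy TT-SVD: compress a full tensor into tensor train cores by
          sequential truncated SVDs (not the code used in the paper)."""
          dims = a.shape
          cores, r = [], 1
          mat = a.reshape(r * dims[0], -1)
          for k in range(len(dims) - 1):
              u, s, vt = np.linalg.svd(mat, full_matrices=False)
              rank = max(1, int(np.sum(s > eps * s[0])))  # drop tiny singular values
              cores.append(u[:, :rank].reshape(r, dims[k], rank))
              r = rank
              mat = (s[:rank, None] * vt[:rank]).reshape(r * dims[k + 1], -1)
          cores.append(mat.reshape(r, dims[-1], 1))
          return cores

      # A tensor with low TT ranks: f(i, j, k) = sin(i) + cos(j) + k.
      i, j, k = np.meshgrid(*(np.arange(10),) * 3, indexing="ij")
      cores = tt_svd(np.sin(i) + np.cos(j) + k)
      print("TT ranks:", [c.shape[2] for c in cores[:-1]])  # -> [2, 2]

    The point of the format is that storage and evaluation cost scale with the TT ranks rather than exponentially in the number of tensor dimensions.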
  • Item
    Dynamical low-rank approximations of solutions to the Hamilton-Jacobi-Bellman equation
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Eigel, Martin; Schneider, Reinhold; Sommer, David
    We present a novel method to approximate optimal feedback laws for nonlinear optimal control based on low-rank tensor train (TT) decompositions. The approach builds on the Dirac-Frenkel variational principle with the modification that the optimisation uses an empirical risk. Compared to current state-of-the-art TT methods, our approach exhibits a greatly reduced computational burden while achieving comparable results. A rigorous description of the numerical scheme and demonstrations of its performance are provided.
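    The computational core can be caricatured as follows (a heavily simplified sketch under our own assumptions, not the paper's HJB setting): at each time step, the Dirac-Frenkel tangent-space projection is replaced by a least-squares problem over sampled states, i.e. an empirical risk. The toy below evolves a polynomial ansatz under a simple transport equation, where the exact solution is known for comparison.

      import numpy as np

      # Toy dynamics (our assumption): d/dt v = -x * dv/dx, whose exact solution
      # is v(t, x) = v(0, x * exp(-t)). Ansatz: polynomial coefficients theta.
      rng = np.random.default_rng(1)
      theta = np.array([0.0, 0.0, 1.0])   # v(0, x) = x^2
      dt, n_steps, n_samples = 1e-3, 1000, 50

      for _ in range(n_steps):
          x = rng.uniform(-1.0, 1.0, n_samples)          # sampled states
          A = np.vander(x, len(theta), increasing=True)  # tangent basis dv/dtheta
          dv = np.polynomial.polynomial.polyval(
              x, np.polynomial.polynomial.polyder(theta))
          b = -x * dv                                    # right-hand side at x_i
          # Dirac-Frenkel condition, imposed in empirical least-squares form:
          theta_dot, *_ = np.linalg.lstsq(A, b, rcond=None)
          theta = theta + dt * theta_dot                 # explicit Euler step

      # v(1, x) should be close to exp(-2) * x^2:
      print(theta[2], np.exp(-2.0))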
  • Item
    Convergence bounds for empirical nonlinear least-squares
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2020) Eigel, Martin; Trunschke, Philipp; Schneider, Reinhold
    We consider best approximation problems in a nonlinear subset of a Banach space of functions. The norm is assumed to be a generalization of the L2 norm for which only a weighted Monte Carlo estimate can be computed. The objective is to obtain an approximation of an unknown target function by minimizing the empirical norm. In the case of linear subspaces it is well known that such least squares approximations can become inaccurate and unstable when the number of samples is too close to the number of parameters. We review this statement for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) that holds in probability, and we show sufficient conditions for the RIP to be satisfied with high probability. Several model classes are examined where analytical statements can be made about the RIP. Numerical experiments illustrate some of the obtained stability bounds.
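    A small numerical illustration of the instability statement (our own experiment, not one from the paper): for an L2-orthonormal polynomial basis, the smallest singular value of the normalized design matrix measures how well the empirical norm approximates the true norm (the role played by the RIP constant), and it degrades as the sample count n approaches the dimension m of the space.

      import numpy as np

      rng = np.random.default_rng(2)
      m = 10                                        # dimension of the linear space
      for n in (10, 20, 50, 200, 1000):
          smin = np.mean([
              np.linalg.svd(
                  np.polynomial.legendre.legvander(rng.uniform(-1, 1, n), m - 1)
                  * np.sqrt(2 * np.arange(m) + 1)   # L2-orthonormal Legendre basis
                  / np.sqrt(n),                     # normalized design matrix
                  compute_uv=False,
              )[-1]                                 # smallest singular value
              for _ in range(20)                    # average over 20 sample draws
          ])
          print(f"n = {n:5d}: avg smallest singular value = {smin:.3f}")

    Values near 1 mean the empirical norm is a faithful surrogate for the true norm; for n = m = 10 the matrix is close to singular and the least squares fit is unstable.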