Search Results

Now showing 1–5 of 5
  • Item
    Convergence bounds for empirical nonlinear least-squares
    (Les Ulis : EDP Sciences, 2022) Eigel, Martin; Schneider, Reinhold; Trunschke, Philipp
We consider best approximation problems in a nonlinear subset ℳ of a Banach space of functions (𝒱,∥•∥). The norm is assumed to be a generalization of the L2-norm for which only a weighted Monte Carlo estimate ∥•∥n can be computed. The objective is to obtain an approximation v ∈ ℳ of an unknown function u ∈ 𝒱 by minimizing the empirical norm ∥u − v∥n. We consider this problem for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and is independent of the specified nonlinear least squares setting. Several model classes are examined and the analytical statements about the RIP are compared to existing sample complexity bounds from the literature. We find that for well-studied model classes our general bound is weaker but exhibits many of the same properties as these specialized bounds. Notably, we demonstrate the advantage of an optimal sampling density (as known for linear spaces) for sets of functions with sparse representations.
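The core setup of the abstract, minimizing the weighted Monte Carlo estimate ∥u − v∥n over a model class, can be sketched for the simplest case of a linear model class (the paper treats general nonlinear subsets). The target function, sampling density, and monomial basis below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch: empirical (weighted Monte Carlo) least squares in a
# LINEAR model class. The target u, uniform sampling density, and monomial
# basis are illustrative assumptions.

rng = np.random.default_rng(0)

def u(x):
    return np.sin(np.pi * x)            # "unknown" target function (assumed)

n = 200                                  # number of Monte Carlo samples
x = rng.uniform(-1.0, 1.0, size=n)       # i.i.d. samples from the density
w = np.ones(n)                           # weights (uniform sampling => w = 1)

# Model class: span of the first few monomials, a linear subset of V.
degree = 5
B = np.vander(x, degree + 1, increasing=True)   # design matrix B[i, j] = x_i**j

# Minimize the empirical norm
#   ||u - v||_n^2 = (1/n) * sum_i w_i * (u(x_i) - v(x_i))^2
# over the coefficients c: a weighted linear least squares problem.
sqrt_w = np.sqrt(w / n)
c, *_ = np.linalg.lstsq(sqrt_w[:, None] * B, sqrt_w * u(x), rcond=None)

# Empirical best approximation error at the sample points.
err_n = np.sqrt(np.mean(w * (u(x) - B @ c) ** 2))
print(f"empirical error ||u - v||_n = {err_n:.2e}")
```

For a nonlinear model class (e.g. a tensor network or a sparse expansion), the closed-form least squares solve would be replaced by an iterative minimization of the same empirical norm.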
  • Item
    Efficient approximation of high-dimensional exponentials by tensor networks
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Eigel, Martin; Farchmin, Nando; Heidenreich, Sebastian; Trunschke, Philipp
    In this work a general approach to compute a compressed representation of the exponential exp(h) of a high-dimensional function h is presented. Such exponential functions play an important role in several problems in Uncertainty Quantification, e.g. the approximation of log-normal random fields or the evaluation of Bayesian posterior measures. Usually, these high-dimensional objects are intractable numerically and can only be accessed pointwise in sampling methods. In contrast, the proposed method constructs a functional representation of the exponential by exploiting its nature as a solution of an ordinary differential equation. The application of a Petrov--Galerkin scheme to this equation provides a tensor train representation of the solution for which we derive an efficient and reliable a posteriori error estimator. Numerical experiments with a log-normal random field and a Bayesian likelihood illustrate the performance of the approach in comparison to other recent low-rank representations for the respective applications. Although the present work considers only a specific differential equation, the presented method can be applied in a more general setting. We show that the composition of a generic holonomic function and a high-dimensional function corresponds to a differential equation that can be used in our method. Moreover, the differential equation can be modified to adapt the norm in the a posteriori error estimates to the problem at hand.
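The key observation driving the method, that exp(h) is the time-1 solution of an ordinary differential equation, can be illustrated pointwise. The paper solves this ODE once for the whole function in a tensor train format; the sketch below only integrates it at sample points with a Crank–Nicolson scheme to show the principle. The function h and all parameters are illustrative assumptions.

```python
import numpy as np

# For a fixed point y, v(t) = exp(t * h(y)) solves v'(t) = h(y) * v(t),
# v(0) = 1, so exp(h) is the time-1 solution of this ODE. Here we integrate
# pointwise with Crank-Nicolson; h is an illustrative assumption.

def h(y):
    return np.sum(y, axis=-1)           # a simple "high-dimensional" function

rng = np.random.default_rng(1)
Y = rng.normal(size=(1000, 10))         # sample points in 10 dimensions
hy = h(Y)

steps = 100
dt = 1.0 / steps
v = np.ones_like(hy)                    # initial condition v(0) = 1
for _ in range(steps):
    # Crank-Nicolson step for v' = h * v:
    #   (1 - dt*h/2) * v_new = (1 + dt*h/2) * v
    v = v * (1.0 + 0.5 * dt * hy) / (1.0 - 0.5 * dt * hy)

rel_err = np.max(np.abs(v - np.exp(hy)) / np.exp(hy))
print(f"max relative error vs exp(h): {rel_err:.2e}")
```

In the paper, the unknown in this time stepping is a tensor train rather than a vector of point values, which is what makes the high-dimensional case tractable.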
  • Item
    Adaptive non-intrusive reconstruction of solutions to high-dimensional parametric PDEs
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Eigel, Martin; Farchmin, Nando; Heidenreich, Sebastian; Trunschke, Philipp
Numerical methods for random parametric PDEs can greatly benefit from adaptive refinement schemes, in particular when functional approximations are computed as in stochastic Galerkin and stochastic collocation methods. This work is concerned with a non-intrusive generalization of the adaptive Galerkin FEM with residual based error estimation. It combines the non-intrusive character of a randomized least-squares method with the a posteriori error analysis of stochastic Galerkin methods. The proposed approach uses the Variational Monte Carlo method to obtain a quasi-optimal low-rank approximation of the Galerkin projection in a highly efficient hierarchical tensor format. We derive an adaptive refinement algorithm which is steered by a reliable error estimator. In contrast to stochastic Galerkin methods, the approach is easily applicable to a wide range of problems, enabling a fully automated adjustment of all discretization parameters. Benchmark examples with affine and (unbounded) lognormal coefficient fields illustrate the performance of the non-intrusive adaptive algorithm, showing best-in-class performance.
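The "adaptive refinement algorithm steered by a reliable error estimator" follows the classical solve → estimate → mark → refine loop. A hedged sketch of that loop structure on a toy problem, adaptive piecewise-linear interpolation of a 1D function with the local interpolation error standing in for the residual-based estimator; the target function, tolerance, and marking fraction are illustrative assumptions:

```python
import numpy as np

# Generic estimate -> mark -> refine loop on a toy problem. In the paper the
# "solve" step is a Variational Monte Carlo low-rank reconstruction and the
# estimator is residual based; here the local interpolation error plays the
# role of the error indicator.

def f(x):
    return np.exp(-50.0 * (x - 0.3) ** 2)       # sharp local feature

nodes = np.linspace(0.0, 1.0, 5)                # coarse initial mesh
tol, theta = 1e-3, 0.5                          # tolerance, Doerfler fraction

for sweep in range(30):
    # "Estimate": local indicator = interpolation error at element midpoints.
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    eta = np.abs(f(mids) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
    if np.sqrt(np.sum(eta ** 2)) < tol:
        break
    # "Mark": smallest set of elements carrying a theta-fraction of the
    # squared error (Doerfler marking).
    order = np.argsort(eta)[::-1]
    cum = np.cumsum(eta[order] ** 2)
    marked = order[: np.searchsorted(cum, theta * cum[-1]) + 1]
    # "Refine": bisect the marked elements.
    nodes = np.sort(np.concatenate([nodes, mids[marked]]))

print(f"{len(nodes)} nodes after {sweep} sweeps")
```

The refinement concentrates nodes near the sharp feature; in the parametric PDE setting, the same loop instead enlarges tensor ranks, polynomial degrees, or the spatial mesh where the estimator indicates the largest error.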
  • Item
    Pricing high-dimensional Bermudan options with hierarchical tensor formats
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Bayer, Christian; Eigel, Martin; Sallandt, Leon; Trunschke, Philipp
An efficient compression technique based on hierarchical tensors for popular option pricing methods is presented. It is shown that the "curse of dimensionality" can be alleviated for the computation of Bermudan option prices with the Monte Carlo least-squares approach as well as the dual martingale method, both using high-dimensional tensorized polynomial expansions. This discretization allows for a simple and computationally cheap evaluation of conditional expectations. Complexity estimates are provided as well as a description of the optimization procedures in the tensor train format. Numerical experiments illustrate the favourable accuracy of the proposed methods. The dynamic programming method yields results comparable to recent neural network based methods.
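The Monte Carlo least-squares approach mentioned here is the Longstaff–Schwartz dynamic programming recursion. A hedged one-dimensional sketch with a plain polynomial basis follows; the paper's contribution is carrying out this regression with high-dimensional tensorized polynomials in the tensor train format, and the model parameters below are illustrative assumptions.

```python
import numpy as np

# Longstaff-Schwartz least-squares Monte Carlo for a 1D Bermudan put.
# Parameters (spot, strike, rate, volatility, basis degree) are
# illustrative assumptions.

rng = np.random.default_rng(2)
s0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths, degree = 50, 20000, 4
dt = T / n_steps

# Simulate geometric Brownian motion paths at the exercise dates.
z = rng.normal(size=(n_paths, n_steps))
log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                               + sigma * np.sqrt(dt) * z, axis=1)
S = np.exp(log_s)

def payoff(s):
    return np.maximum(K - s, 0.0)               # Bermudan put payoff

# Backward induction: regress the continuation value on a polynomial basis.
cash = payoff(S[:, -1])                          # value at maturity
for t in range(n_steps - 2, -1, -1):
    cash *= np.exp(-r * dt)                      # discount one step
    itm = payoff(S[:, t]) > 0                    # regress on in-the-money paths
    if itm.sum() > degree + 1:
        B = np.vander(S[itm, t] / K, degree + 1, increasing=True)
        c, *_ = np.linalg.lstsq(B, cash[itm], rcond=None)
        cont = B @ c                             # estimated continuation value
        exercise = payoff(S[itm, t]) > cont
        cash[np.flatnonzero(itm)[exercise]] = payoff(S[itm, t])[exercise]

price = np.exp(-r * dt) * np.mean(cash)
print(f"Bermudan put price estimate: {price:.3f}")
```

In d underlying assets, the Vandermonde regression above would require a basis of size growing exponentially in d, which is exactly where the tensorized polynomial expansion in the tensor train format comes in.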
  • Item
    Convergence bounds for empirical nonlinear least-squares
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2020) Eigel, Martin; Trunschke, Philipp; Schneider, Reinhold
    We consider best approximation problems in a nonlinear subset of a Banach space of functions. The norm is assumed to be a generalization of the L2 norm for which only a weighted Monte Carlo estimate can be computed. The objective is to obtain an approximation of an unknown target function by minimizing the empirical norm. In the case of linear subspaces it is well-known that such least squares approximations can become inaccurate and unstable when the number of samples is too close to the number of parameters. We review this statement for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and we show sufficient conditions for the RIP to be satisfied with high probability. Several model classes are examined where analytical statements can be made about the RIP. Numerical experiments illustrate some of the obtained stability bounds.
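The restricted isometry property in this abstract requires (1 − δ)∥v∥² ≤ ∥v∥n² ≤ (1 + δ)∥v∥² for all v in the model class. For a linear space with an L2-orthonormal basis this is equivalent to the empirical Gram matrix being δ-close to the identity in the spectral norm, which can be checked numerically. The Legendre basis and sample sizes below are illustrative assumptions.

```python
import numpy as np

# For a LINEAR model space with an orthonormal basis, the RIP
#   (1 - delta) ||v||^2 <= ||v||_n^2 <= (1 + delta) ||v||^2
# holds for all v in the space iff the empirical Gram matrix G_n satisfies
# ||G_n - I|| <= delta in the spectral norm. Basis: Legendre polynomials,
# orthonormalized w.r.t. the uniform density on [-1, 1] (an assumption).

rng = np.random.default_rng(3)
dim = 6                                      # dimension of the linear space

def delta_n(n):
    """Spectral distance of the empirical Gram matrix from the identity."""
    x = rng.uniform(-1.0, 1.0, size=n)
    B = np.polynomial.legendre.legvander(x, dim - 1)
    B *= np.sqrt(2 * np.arange(dim) + 1)     # orthonormalize w.r.t. uniform measure
    G = B.T @ B / n
    return np.linalg.norm(G - np.eye(dim), ord=2)

for n in (100, 1000, 10000):
    print(f"n = {n:6d}:  delta_n = {delta_n(n):.3f}")
```

As n grows, δ concentrates near zero with high probability, which is the stability regime; when n is too close to dim, δ approaches 1 and the empirical least squares problem becomes ill-conditioned, matching the instability the abstract describes.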