Search Results

Now showing 1 - 8 of 8
  • Item
    Bayesian inversion with a hierarchical tensor representation
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2016) Eigel, Martin; Marschall, Manuel; Schneider, Reinhold
The statistical Bayesian approach is a natural setting to resolve the ill-posedness of inverse problems by assigning probability densities to the considered calibration parameters. Based on a parametric deterministic representation of the forward model, a sampling-free approach to Bayesian inversion with an explicit representation of the parameter densities is developed. The approximation of the involved randomness inevitably leads to several high-dimensional expressions, which are often tackled with classical sampling methods such as MCMC. To speed up these methods, the use of a surrogate model is beneficial since it allows for faster evaluation with respect to calibration parameters. However, the inherently slow convergence cannot be remedied by this. As an alternative, a complete functional treatment of the inverse problem is feasible as demonstrated in this work, with functional representations of the parametric forward solution as well as the probability densities of the calibration parameters, determined by Bayesian inversion. The proposed sampling-free approach is discussed in the context of hierarchical tensor representations, which are employed for the adaptive evaluation of a random PDE (the forward problem) in generalized chaos polynomials and the subsequent high-dimensional quadrature of the log-likelihood. This modern compression technique alleviates the curse of dimensionality by hierarchical subspace approximations of the involved low-rank (solution) manifolds. All required computations can be carried out efficiently in the low-rank format. A priori convergence is examined, considering all approximations that occur in the method. Numerical experiments demonstrate the performance and verify the theoretical results.
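The sampling-free idea can be illustrated in one dimension (a minimal sketch; the forward model, prior, and noise level below are illustrative assumptions, not taken from the paper): represent the posterior density explicitly on a grid and obtain the normalizing constant by quadrature instead of MCMC.

```python
import math

# Hypothetical 1D forward model G(theta) = theta**2 with a standard normal
# prior and Gaussian noise -- none of these specifics come from the abstract.
def prior(theta):
    return math.exp(-0.5 * theta**2) / math.sqrt(2 * math.pi)

def log_likelihood(theta, y, sigma=0.1):
    return -0.5 * ((y - theta**2) / sigma) ** 2

y_obs = 0.25  # "measured" data, consistent with theta = +/- 0.5

# Explicit (unnormalized) posterior density on a grid -- no sampling involved.
thetas = [-3 + 6 * i / 2000 for i in range(2001)]
unnorm = [prior(t) * math.exp(log_likelihood(t, y_obs)) for t in thetas]

# Normalizing constant by trapezoidal quadrature.
h = thetas[1] - thetas[0]
Z = h * (sum(unnorm) - 0.5 * (unnorm[0] + unnorm[-1]))
posterior = [u / Z for u in unnorm]

# The posterior modes sit near theta = +/- 0.5, as the data suggests.
mode = thetas[max(range(len(posterior)), key=posterior.__getitem__)]
```

In the paper this density lives in a high-dimensional parameter space and is compressed in a hierarchical tensor format; the grid-plus-quadrature structure above is the one-dimensional caricature of that.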
  • Item
    Dynamical low-rank approximations of solutions to the Hamilton--Jacobi--Bellman equation
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Eigel, Martin; Schneider, Reinhold; Sommer, David
We present a novel method to approximate optimal feedback laws for nonlinear optimal control based on low-rank tensor train (TT) decompositions. The approach is based on the Dirac-Frenkel variational principle with the modification that the optimisation uses an empirical risk. Compared to current state-of-the-art TT methods, our approach exhibits a greatly reduced computational burden while achieving comparable results. A rigorous description of the numerical scheme and demonstrations of its performance are provided.
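The tensor train format the method builds on can be sketched with the generic TT-SVD algorithm (a textbook construction, not the authors' scheme): a d-way tensor is factored into a chain of 3-way cores by successive truncated SVDs.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Factor a d-way array into tensor-train cores via successive truncated SVDs."""
    dims = tensor.shape
    d = len(dims)
    cores, r = [], 1
    M = tensor.reshape(r * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int((s > eps * s[0]).sum()))   # truncate tiny singular values
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        M = s[:rk, None] * Vt[:rk]
        r = rk
        if k < d - 2:
            M = M.reshape(r * dims[k + 1], -1)
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0].reshape(cores[0].shape[1], -1)
    for c in cores[1:]:
        r, n, r2 = c.shape
        out = (out @ c.reshape(r, n * r2)).reshape(-1, r2)
    return out.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
dims = (4, 5, 6)
u = [rng.standard_normal(n) for n in dims]
v = [rng.standard_normal(n) for n in dims]
# Sum of two separable terms: TT-ranks are at most 2.
T = np.einsum('i,j,k->ijk', *u) + np.einsum('i,j,k->ijk', *v)

cores = tt_svd(T)
ranks = [c.shape[2] for c in cores[:-1]]
rel_err = np.linalg.norm(tt_full(cores) - T) / np.linalg.norm(T)
```

The low-rank payoff is storage and work: here the three cores hold 8 + 20 + 12 = 40 entries instead of the 120 entries of the full tensor, a gap that widens dramatically with dimension.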
  • Item
    Non-intrusive tensor reconstruction for high dimensional random PDEs
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2017) Eigel, Martin; Neumann, Johannes; Schneider, Reinhold; Wolf, Sebastian
    This paper examines a completely non-intrusive, sample-based method for the computation of functional low-rank solutions of high dimensional parametric random PDEs which have become an area of intensive research in Uncertainty Quantification (UQ). In order to obtain a generalized polynomial chaos representation of the approximate stochastic solution, a novel black-box rank-adapted tensor reconstruction procedure is proposed. The performance of the described approach is illustrated with several numerical examples and compared to Monte Carlo sampling.
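The non-intrusive, sample-based idea can be sketched in one parameter dimension (without the tensor machinery; the "solution map" and degrees below are illustrative assumptions): query the solver only pointwise at random samples and fit a polynomial chaos expansion by least squares.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(1)

# Hypothetical scalar "solution map" standing in for a PDE solve at
# parameter y -- the function itself is not from the paper.
def solve(y):
    return np.exp(0.3 * y) / (2.0 + y)

# Non-intrusive: the solver is only queried pointwise at random samples.
ys = rng.uniform(-1.0, 1.0, 200)
us = solve(ys)

# Least-squares fit of a degree-10 Legendre (polynomial chaos) expansion.
coeffs = L.legfit(ys, us, 10)

# The surrogate is cheap to evaluate and accurate at unseen parameters.
y_new = rng.uniform(-1.0, 1.0, 1000)
err = float(np.max(np.abs(L.legval(y_new, coeffs) - solve(y_new))))
```

The paper's contribution is doing this in many parameter dimensions at once, where the coefficient array is far too large to store directly and is instead reconstructed in a rank-adapted tensor format.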
  • Item
    Convergence bounds for empirical nonlinear least-squares
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2020) Eigel, Martin; Trunschke, Philipp; Schneider, Reinhold
    We consider best approximation problems in a nonlinear subset of a Banach space of functions. The norm is assumed to be a generalization of the L2 norm for which only a weighted Monte Carlo estimate can be computed. The objective is to obtain an approximation of an unknown target function by minimizing the empirical norm. In the case of linear subspaces it is well-known that such least squares approximations can become inaccurate and unstable when the number of samples is too close to the number of parameters. We review this statement for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and we show sufficient conditions for the RIP to be satisfied with high probability. Several model classes are examined where analytical statements can be made about the RIP. Numerical experiments illustrate some of the obtained stability bounds.
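The instability described for the linear case is easy to reproduce numerically (an illustrative experiment, not one from the paper): fitting a degree-9 polynomial from barely more random samples than unknowns is typically far less accurate than the same fit with heavy oversampling.

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(7)
target = lambda x: np.cos(3 * x)
deg = 9                                  # model class: 10 unknown coefficients
x_test = np.linspace(-1, 1, 500)

def median_error(n_samples, trials=50):
    """Median sup-norm error of the empirical least-squares fit over random draws."""
    errs = []
    for _ in range(trials):
        x = rng.uniform(-1, 1, n_samples)
        c = P.polyfit(x, target(x), deg)     # empirical least squares
        errs.append(np.max(np.abs(P.polyval(x_test, c) - target(x_test))))
    return float(np.median(errs))

err_tight = median_error(11)    # barely more samples than parameters: unstable
err_ample = median_error(200)   # heavy oversampling: stable and accurate
```

The RIP-based analysis in the paper quantifies how much oversampling a given (possibly nonlinear) model class needs before the empirical norm is, with high probability, equivalent to the true norm on that class.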
  • Item
    Variational Monte Carlo - Bridging concepts of machine learning and high dimensional partial differential equations
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2018) Eigel, Martin; Trunschke, Philipp; Schneider, Reinhold; Wolf, Sebastian
    A statistical learning approach for parametric PDEs related to Uncertainty Quantification is derived. The method is based on the minimization of an empirical risk on a selected model class and it is shown to be applicable to a broad range of problems. A general unified convergence analysis is derived, which takes into account the approximation and the statistical errors. By this, a combination of theoretical results from numerical analysis and statistics is obtained. Numerical experiments illustrate the performance of the method with the model class of hierarchical tensors.
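The two error sources in the analysis can be seen in a toy empirical risk minimization (an illustrative example; the target function and model classes are assumptions, not the paper's hierarchical tensors): a richer model class shrinks the approximation error, while enough samples keeps the statistical gap between empirical and true risk small.

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(3)
f = lambda x: np.abs(x)     # target lies outside every polynomial model class

def risks(deg, n_train):
    """Empirical risk minimisation over polynomials of degree <= deg."""
    x_tr = rng.uniform(-1, 1, n_train)
    c = P.polyfit(x_tr, f(x_tr), deg)        # minimise the empirical L2 risk
    x_te = rng.uniform(-1, 1, 50_000)
    empirical = float(np.mean((P.polyval(x_tr, c) - f(x_tr)) ** 2))
    true = float(np.mean((P.polyval(x_te, c) - f(x_te)) ** 2))
    return empirical, true

emp2, true2 = risks(deg=2, n_train=2000)   # small class: approximation error dominates
emp8, true8 = risks(deg=8, n_train=2000)   # richer class: smaller true risk
```

The unified convergence analysis in the paper bounds exactly this decomposition, with hierarchical tensors playing the role of the model class.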
  • Item
    Adaptive stochastic Galerkin FEM for lognormal coefficients in hierarchical tensor representations
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2018) Eigel, Martin; Marschall, Manuel; Pfeffer, Max; Schneider, Reinhold
    Stochastic Galerkin methods for non-affine coefficient representations are known to cause major difficulties from theoretical and numerical points of view. In this work, an adaptive Galerkin FE method for linear parametric PDEs with lognormal coefficients discretized in Hermite chaos polynomials is derived. It employs problem-adapted function spaces to ensure solvability of the variational formulation. The inherently high computational complexity of the parametric operator is made tractable by using hierarchical tensor representations. For this, a new tensor train format of the lognormal coefficient is derived and verified numerically. The central novelty is the derivation of a reliable residual-based a posteriori error estimator. This can be regarded as a unique feature of stochastic Galerkin methods. It allows for an adaptive algorithm to steer the refinements of the physical mesh and the anisotropic Wiener chaos polynomial degrees. For the evaluation of the error estimator to become feasible, a numerically efficient tensor format discretization is developed. Benchmark examples with unbounded lognormal coefficient fields illustrate the performance of the proposed Galerkin discretization and the fully adaptive algorithm.
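The Hermite chaos discretization of a lognormal quantity rests on a classical identity: for g ~ N(0,1) and probabilists' Hermite polynomials He_n, one has exp(σg) = exp(σ²/2) Σ_n σⁿ/n! He_n(g). A small numerical check of this known formula (not code from the paper):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

sigma = 0.7
# Gauss-Hermite_e quadrature integrates against the weight exp(-x^2 / 2);
# dividing the weights by sqrt(2*pi) turns the sums into E[.] for g ~ N(0,1).
x, w = He.hermegauss(60)
w = w / np.sqrt(2 * np.pi)

def chaos_coeff(n):
    """n-th Hermite chaos coefficient of exp(sigma * g): E[exp(sigma g) He_n(g)] / n!."""
    Hn = He.hermeval(x, [0] * n + [1])       # probabilists' Hermite He_n
    return float(np.sum(w * np.exp(sigma * x) * Hn)) / math.factorial(n)

# Closed form from the identity above.
exact = [math.exp(sigma**2 / 2) * sigma**n / math.factorial(n) for n in range(6)]
approx = [chaos_coeff(n) for n in range(6)]
```

The rapid factorial decay of these coefficients is what makes a compact tensor train representation of the lognormal coefficient plausible in the first place.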
  • Item
    Stochastic topology optimisation with hierarchical tensor reconstruction
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2016) Eigel, Martin; Neumann, Johannes; Schneider, Reinhold; Wolf, Sebastian
A novel approach for risk-averse structural topology optimization under uncertainties is presented which takes into account random material properties and random forces. For the distribution of material, a phase field approach is employed which allows for arbitrary topological changes during optimization. The state equation is assumed to be a high-dimensional PDE parametrized in a (finite) set of random variables. For the examined case, linearized elasticity with a parametric elasticity tensor is used. Instead of an optimization with respect to the expectation of the involved random fields, for practical purposes it is important to design structures which are also robust in case of events that are not the most frequent. As a common risk-aware measure, the Conditional Value at Risk (CVaR) is used in the cost functional during the minimization procedure. Since the treatment of such high-dimensional problems is a numerically challenging task, a representation in the modern hierarchical tensor train format is proposed. In order to obtain this highly efficient representation of the solution of the random state equation, a tensor completion algorithm is employed which requires only the pointwise evaluation of solution realizations. The new method is illustrated with numerical examples and compared with a classical Monte Carlo sampling approach.
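The CVaR used in the cost functional has a simple sample-based estimator (a minimal sketch; the Gaussian "cost" below is an illustrative stand-in for the compliance of a structure, which in the paper comes from a parametric elasticity PDE):

```python
import random
import statistics

random.seed(0)
# Illustrative stand-in for the random structural cost -- not from the paper.
costs = sorted(random.gauss(10.0, 2.0) for _ in range(100_000))

alpha = 0.9
tail = costs[int(alpha * len(costs)):]      # worst (1 - alpha) fraction of outcomes
var_alpha = tail[0]                         # Value at Risk: the alpha-quantile
cvar_alpha = statistics.fmean(tail)         # CVaR: expected cost within that tail

mean_cost = statistics.fmean(costs)
```

Since the mean ignores the tail entirely, minimizing CVaR instead of the expectation is what makes the optimized structure robust against rare but severe load realizations.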
  • Item
    Adaptive stochastic Galerkin FEM with hierarchical tensor representations
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2015) Eigel, Martin; Pfeffer, Max; Schneider, Reinhold
The solution of PDEs with stochastic data commonly leads to very high-dimensional algebraic problems, e.g. when multiplicative noise is present. The Stochastic Galerkin FEM considered in this paper then suffers from the curse of dimensionality. This is directly related to the number of random variables required for an adequate representation of the random fields included in the PDE. With the presented new approach, we circumvent this major complexity obstacle by combining two highly efficient model reduction strategies, namely a modern low-rank tensor representation in the tensor train format of the problem and a refinement algorithm on the basis of a posteriori error estimates to adaptively adjust the different employed discretizations. The adaptive adjustment includes the refinement of the FE mesh based on a residual estimator, the problem-adapted stochastic discretization in anisotropic Legendre Wiener chaos and the successive increase of the tensor rank. Computable a posteriori error estimators are derived for all error terms emanating from the discretizations and the iterative solution of the problem with a preconditioned ALS scheme. Strikingly, it is possible to exploit the tensor structure of the problem to evaluate all error terms very efficiently. A set of benchmark problems illustrates the performance of the adaptive algorithm with higher-order FE. Moreover, the influence of the tensor rank on the approximation quality is investigated.
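The rank-versus-accuracy trade-off investigated at the end of the abstract can be previewed in the matrix case (a toy example, not the paper's benchmark): a smooth bivariate "solution" has rapidly decaying singular values, so the best rank-r approximation converges quickly as the rank grows.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 60)
# Smooth bivariate function: its singular values decay rapidly, so low ranks
# already give high accuracy (illustrative stand-in for a solution tensor).
A = np.exp(-np.subtract.outer(x, x) ** 2)

U, s, Vt = np.linalg.svd(A)

def rel_error(r):
    """Relative error of the best rank-r approximation (Eckart-Young)."""
    return float(np.linalg.norm(A - (U[:, :r] * s[:r]) @ Vt[:r]) / np.linalg.norm(A))

errors = [rel_error(r) for r in (1, 2, 4, 8)]
```

In the tensor train setting the same effect holds per core, which is why the adaptive algorithm can afford to increase the rank only where the a posteriori estimator indicates it pays off.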