Search Results

Now showing 1 - 4 of 4
  • Item
    Numerical upscaling of parametric microstructures in a possibilistic uncertainty framework with tensor trains
    (Heidelberg : Springer, 2022) Eigel, Martin; Gruhlke, Robert; Moser, Dieter; Grasedyck, Lars
    A fuzzy arithmetic framework for the efficient possibilistic propagation of shape uncertainties based on a novel fuzzy edge detection method is introduced. The shape uncertainties stem from a blurred image that encodes the distribution of two phases in a composite material. The proposed framework employs computational homogenisation to upscale the shape uncertainty to an effective material with fuzzy material properties. For this, many samples of a linear elasticity problem have to be computed, which is significantly sped up by a highly accurate low-rank tensor surrogate. To ensure the continuity of the underlying mapping from shape parametrisation to the upscaled material behaviour, a diffeomorphism is constructed by generating an appropriate family of meshes via transformation of a reference mesh. The shape uncertainty is then propagated to measure the distance of the upscaled material from the isotropic and orthotropic material classes. Finally, the fuzzy effective material is used to compute bounds for the average displacement of a non-homogenised material with uncertain star-shaped inclusion shapes. (An illustrative α-cut propagation sketch is given after this results list.)
  • Item
    Efficient approximation of high-dimensional exponentials by tensor networks
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Eigel, Martin; Farchmin, Nando; Heidenreich, Sebastian; Trunschke, Philipp
    In this work, a general approach to compute a compressed representation of the exponential exp(h) of a high-dimensional function h is presented. Such exponential functions play an important role in several problems in Uncertainty Quantification, e.g. the approximation of log-normal random fields or the evaluation of Bayesian posterior measures. Usually, these high-dimensional objects are numerically intractable and can only be accessed pointwise by sampling methods. In contrast, the proposed method constructs a functional representation of the exponential by exploiting its nature as the solution of an ordinary differential equation. The application of a Petrov–Galerkin scheme to this equation provides a tensor train representation of the solution, for which we derive an efficient and reliable a posteriori error estimator. Numerical experiments with a log-normal random field and a Bayesian likelihood illustrate the performance of the approach in comparison to other recent low-rank representations for the respective applications. Although the present work considers only a specific differential equation, the presented method can be applied in a more general setting: we show that the composition of a generic holonomic function and a high-dimensional function corresponds to a differential equation that can be used in our method. Moreover, the differential equation can be modified to adapt the norm in the a posteriori error estimates to the problem at hand. (The underlying ODE characterisation is written out after this results list.)
  • Item
    Adaptive non-intrusive reconstruction of solutions to high-dimensional parametric PDEs
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) Eigel, Martin; Farchmin, Nando; Heidenreich, Sebastian; Trunschke, Philipp
    Numerical methods for random parametric PDEs can greatly benefit from adaptive refinement schemes, in particular when functional approximations are computed, as in stochastic Galerkin and stochastic collocation methods. This work is concerned with a non-intrusive generalization of the adaptive Galerkin FEM with residual-based error estimation. It combines the non-intrusive character of a randomized least-squares method with the a posteriori error analysis of stochastic Galerkin methods. The proposed approach uses the Variational Monte Carlo method to obtain a quasi-optimal low-rank approximation of the Galerkin projection in a highly efficient hierarchical tensor format. We derive an adaptive refinement algorithm which is steered by a reliable error estimator. In contrast to stochastic Galerkin methods, the approach is easily applicable to a wide range of problems, enabling a fully automated adjustment of all discretization parameters. Benchmark examples with affine and (unbounded) lognormal coefficient fields illustrate the performance of the non-intrusive adaptive algorithm, showing best-in-class performance. (A bare-bones sketch of the non-intrusive least-squares idea follows after this results list.)
  • Item
    Low-rank tensor reconstruction of concentrated densities with application to Bayesian inversion
    (Dordrecht [et al.] : Springer Science + Business Media B.V., 2022) Eigel, Martin; Gruhlke, Robert; Marschall, Manuel
    This paper presents a novel method for the accurate functional approximation of possibly highly concentrated probability densities. It is based on the combination of several modern techniques such as transport maps and low-rank approximations via a nonintrusive tensor train reconstruction. The central idea is to carry out computations for statistical quantities of interest, such as moments, based on a convenient representation of a reference density for which accurate numerical methods can be employed. Since the transport from target to reference can usually not be determined exactly, one has to cope with a perturbed reference density due to a numerically approximated transport map. By the introduction of a layered approximation and appropriate coordinate transformations, the problem is split into a set of independent approximations in separately chosen orthonormal basis functions, combining the notions of h- and p-refinement (i.e. “mesh size” and polynomial degree). An efficient low-rank representation of the perturbed reference density is achieved via the Variational Monte Carlo method. This nonintrusive regression technique reconstructs the map in the tensor train format. An a priori convergence analysis with respect to the error terms introduced by the different (deterministic and statistical) approximations is derived in the Hellinger distance and the Kullback–Leibler divergence. Important applications are presented, and in particular the context of Bayesian inverse problems is illuminated, which is a main motivation for the developed approach. Several numerical examples illustrate the efficacy with densities of different complexity and degrees of perturbation of the transport to the reference density. The (superior) convergence is demonstrated in comparison to Monte Carlo and Markov chain Monte Carlo methods. (The transport-map change of variables and the distance measures used in the analysis are spelled out after this results list.)
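
The possibilistic propagation in the first item (Eigel, Gruhlke, Moser, Grasedyck) rests on fuzzy arithmetic, which is commonly realised numerically via Zadeh's extension principle on α-cuts: each α-level of the fuzzy inputs defines a box over which the response is minimised and maximised. The sketch below illustrates only this generic α-cut mechanism with a hypothetical scalar surrogate for an effective material property; it is not the authors' tensor-train homogenisation code, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def alpha_cut(fuzzy_number, alpha):
    """Interval (alpha-cut) of a triangular fuzzy number (left, peak, right)."""
    left, peak, right = fuzzy_number
    return (left + alpha * (peak - left), right - alpha * (right - peak))

def propagate_fuzzy(surrogate, fuzzy_inputs, alphas):
    """Propagate triangular fuzzy inputs through a deterministic surrogate.

    Zadeh's extension principle on alpha-cuts: for each alpha-level, the output
    cut is the [min, max] of the surrogate over the box of input alpha-cuts
    (a local optimiser is used here for brevity).
    """
    cuts = []
    for alpha in alphas:
        bounds = [alpha_cut(p, alpha) for p in fuzzy_inputs]
        x0 = np.array([0.5 * (lo + hi) for lo, hi in bounds])
        low = minimize(surrogate, x0, bounds=bounds).fun
        high = -minimize(lambda x: -surrogate(x), x0, bounds=bounds).fun
        cuts.append((alpha, low, high))
    return cuts

# Hypothetical scalar surrogate for an effective (upscaled) material property,
# depending on two fuzzy shape parameters -- purely illustrative.
effective_modulus = lambda x: 1.0 + x[0] ** 2 + 0.5 * np.sin(x[1])
fuzzy_shape_params = [(-0.2, 0.0, 0.2), (0.8, 1.0, 1.3)]   # triangular (l, m, r)
for alpha, low, high in propagate_fuzzy(effective_modulus, fuzzy_shape_params,
                                        alphas=np.linspace(0.0, 1.0, 5)):
    print(f"alpha = {alpha:.2f}:  [{low:.4f}, {high:.4f}]")
```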
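
For the second item (Eigel, Farchmin, Heidenreich, Trunschke), the abstract states that exp(h) is constructed as the solution of an ordinary differential equation. A natural reading (stated here as a standard observation, not a quotation from the paper) is the linear initial value problem obtained from the holonomic equation exp' = exp:

\[
  u(t) := \exp\bigl(t\,h(y)\bigr), \qquad
  \partial_t u(t) = h(y)\,u(t), \quad u(0) = 1, \quad t \in [0,1],
  \qquad \exp\bigl(h(y)\bigr) = u(1).
\]

A Petrov–Galerkin discretisation of this problem in a low-rank (tensor-train) ansatz space then yields the sought functional representation of exp(h); this is the setting in which the a posteriori error estimator mentioned in the abstract is formulated.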
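
The third item combines non-intrusive (sampling-based) least squares with adaptive refinement. The sketch below shows the bare idea with a plain Legendre polynomial basis in one parameter and a crude cross-validation error indicator; the solver, the indicator, and all function names are illustrative assumptions, whereas the paper works with a hierarchical tensor (low-rank) format, the Variational Monte Carlo method, and a reliable residual-based error estimator.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def fit_surrogate(solver, degree, n_samples, rng):
    """Non-intrusive least-squares fit of a scalar quantity of interest.

    solver(y) is treated as a black box: only pointwise samples of the
    (discretised) parametric solution functional are required.
    """
    y = rng.uniform(-1.0, 1.0, n_samples)
    design = legvander(y, degree)                 # Legendre design matrix
    samples = np.array([solver(yi) for yi in y])  # snapshot evaluations
    coeffs, *_ = np.linalg.lstsq(design, samples, rcond=None)
    return coeffs

def adaptive_fit(solver, tol=1e-6, max_degree=20, seed=0):
    """Crude adaptive loop: enlarge the polynomial basis until a Monte Carlo
    validation error (an illustrative stand-in for a reliable residual-based
    estimator) drops below tol."""
    rng = np.random.default_rng(seed)
    for degree in range(2, max_degree + 1):
        coeffs = fit_surrogate(solver, degree, n_samples=10 * (degree + 1), rng=rng)
        y_val = rng.uniform(-1.0, 1.0, 200)
        exact = np.array([solver(yi) for yi in y_val])
        error = np.max(np.abs(legvander(y_val, degree) @ coeffs - exact))
        if error < tol:
            break
    return coeffs, degree, error

# Cheap stand-in for a parametric PDE quantity of interest on y in [-1, 1]
coeffs, degree, error = adaptive_fit(lambda y: 1.0 / (2.0 + y))
print(f"adapted degree {degree}, estimated error {error:.2e}")
```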
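
In the fourth item (Eigel, Gruhlke, Marschall), statistics of a concentrated target density π are computed in the coordinates of a reference density ρ via a transport map. With an approximate, invertible transport T̃, the pullback of the target plays the role of the perturbed reference density mentioned in the abstract, and moments reduce to integrals in reference coordinates (a standard change of variables, written here as an assumption about the setting):

\[
  \tilde\rho(y) := \pi\bigl(\tilde T(y)\bigr)\,\bigl|\det \nabla \tilde T(y)\bigr|,
  \qquad
  \mathbb{E}_{\pi}[f] = \int f(x)\,\pi(x)\,\mathrm{d}x
                      = \int f\bigl(\tilde T(y)\bigr)\,\tilde\rho(y)\,\mathrm{d}y .
\]

The convergence analysis is stated in the Hellinger distance and the Kullback–Leibler divergence, which for densities π, ρ read (with the common 1/2 normalisation for the former)

\[
  d_{\mathrm H}(\pi,\rho)^{2} = \tfrac{1}{2}\int \Bigl(\sqrt{\pi(x)} - \sqrt{\rho(x)}\Bigr)^{2}\,\mathrm{d}x,
  \qquad
  D_{\mathrm{KL}}(\pi \,\|\, \rho) = \int \pi(x)\,\log\frac{\pi(x)}{\rho(x)}\,\mathrm{d}x .
\]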