Search Results

Now showing 1 - 10 of 16

A hybrid FETI-DP method for non-smooth random partial differential equations

2018, Eigel, Martin, Gruhlke, Robert

A domain decomposition approach exploiting the localization of random parameters in high-dimensional random PDEs is presented. For high efficiency, surrogate models in multi-element representations are computed locally when possible. This makes use of a stochastic Galerkin FETI-DP formulation of the underlying problem with localized representations of the involved input random fields. The local parameter space associated with a subdomain is explored by a subdivision into regions where the parametric surrogate accuracy can be trusted and regions where Monte Carlo sampling has to be employed instead. A heuristic adaptive algorithm carries out a problem-dependent hp-refinement in a stochastic multi-element sense, enlarging the trusted surrogate region in the local parametric space as far as possible. This results in an efficient global parameter-to-solution sampling scheme that exploits local parametric smoothness in the surrogate construction. Adequately structured problems for this scheme occur naturally when uncertainties are defined on subdomains, e.g. in a multi-physics setting, or when the Karhunen-Loève expansion of a random field can be localized. The efficiency of this hybrid technique is demonstrated with numerical benchmark problems illustrating the identification of trusted (possibly higher-order) surrogate regions and non-trusted sampling regions.
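A minimal sketch of the trusted-surrogate/sampling split in a single local parameter dimension may help fix ideas. The local response, the element decomposition and the tolerance below are invented toy stand-ins for the local PDE responses and the hp-adaptive machinery of the paper.

```python
import numpy as np

# Toy illustration: per-element polynomial surrogates are kept only where a
# sampled error check passes ("trusted" region); elsewhere the expensive model
# is evaluated directly (the "sampling" region). All names are hypothetical.

def local_response(y):
    # stand-in for an expensive local PDE solve, non-smooth at y = 0
    return np.abs(y) + 0.05 * np.sin(5.0 * y)

rng = np.random.default_rng(0)
elements = [(-1.0, -0.3), (-0.3, 0.4), (0.4, 1.0)]
tol = 1e-2
surrogates = {}

for a, b in elements:
    nodes = np.linspace(a, b, 4)
    coeffs = np.polyfit(nodes, local_response(nodes), deg=3)
    test = rng.uniform(a, b, 100)
    err = np.max(np.abs(np.polyval(coeffs, test) - local_response(test)))
    surrogates[(a, b)] = coeffs if err < tol else None   # None marks a sampling region

def evaluate(y):
    for (a, b), coeffs in surrogates.items():
        if a <= y <= b:
            return local_response(y) if coeffs is None else np.polyval(coeffs, y)

print([(ab, c is not None) for ab, c in surrogates.items()])   # which elements are trusted
print(evaluate(-0.7), evaluate(0.1))
```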


An adaptive stochastic Galerkin tensor train discretization for randomly perturbed domains

2018, Eigel, Martin, Marschall, Manuel, Multerer, Michael

A linear PDE problem on randomly perturbed domains is considered in an adaptive Galerkin framework. The perturbation of the domain's boundary is described by a vector-valued random field depending on a countable number of random variables in an affine way. The corresponding Karhunen-Loève expansion is approximated by the pivoted Cholesky decomposition based on a prescribed covariance function. The examined high-dimensional Galerkin system follows from the domain mapping approach, which transfers the randomness from the domain to the diffusion coefficient and the forcing. In order to make this computationally feasible, the representation makes use of the modern tensor train format for the implicit compression of the problem. Moreover, an a posteriori error estimator is presented, which allows for the problem-dependent iterative refinement of all discretization parameters and the assessment of the achieved error reduction. The proposed approach is demonstrated in numerical benchmark problems.
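As an illustration of the covariance factorization step, the following sketch runs a basic pivoted Cholesky decomposition on a one-dimensional Gaussian covariance. The grid, kernel and tolerance are toy choices and not the paper's domain-mapping setup.

```python
import numpy as np

# Basic pivoted Cholesky of a covariance matrix C, stopped once the trace of
# the remainder falls below a relative tolerance; the columns of L give a
# truncated Karhunen-Loeve-type factorization C ~ L @ L.T.

def pivoted_cholesky(C, rtol=1e-8):
    d = np.diag(C).copy()            # remaining diagonal, also the error indicator
    trace0 = d.sum()
    cols = []
    while d.sum() > rtol * trace0:
        p = int(np.argmax(d))
        col = C[:, p].astype(float).copy()
        for l in cols:
            col -= l * l[p]
        col /= np.sqrt(d[p])
        cols.append(col)
        d = np.maximum(d - col**2, 0.0)   # guard against round-off
    return np.column_stack(cols)

x = np.linspace(0.0, 1.0, 200)
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.2**2))   # Gaussian covariance
L = pivoted_cholesky(C)
print(L.shape[1], np.linalg.norm(C - L @ L.T) / np.linalg.norm(C))
```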


Low-rank Wasserstein polynomial chaos expansions in the framework of optimal transport

2022, Gruhlke, Robert, Eigel, Martin

An unsupervised learning approach for the computation of an explicit functional representation of a random vector Y is presented, which relies only on a finite set of samples with unknown distribution. Motivated by recent advances in computational optimal transport for estimating Wasserstein distances, we develop a new Wasserstein multi-element polynomial chaos expansion (WPCE). It relies on the minimization of a regularized empirical Wasserstein metric known as the debiased Sinkhorn divergence. As a requirement for an efficient polynomial basis expansion, a suitable (minimal) stochastic coordinate system X has to be determined with the aim of identifying ideally independent random variables. This approach generalizes representations through diffeomorphic transport maps to the case of non-continuous and non-injective model classes M with different input and output dimensions, yielding the relation Y=M(X) in distribution. Moreover, since the employed PCE grows exponentially in the number of random coordinates of X, we introduce an appropriate low-rank format given as stacks of tensor trains, which alleviates the curse of dimensionality and leads to only linear dependence on the input dimension. The choice of the model class M and the smooth loss function makes higher-order optimization schemes possible. It is shown that the relaxation to a discontinuous model class is necessary to explain multimodal distributions. Moreover, the proposed framework is applied to a numerical upscaling task, considering a computationally challenging microscopic random non-periodic composite material. This leads to a tractable effective macroscopic random field in the adopted stochastic coordinates.
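To make the optimal-transport ingredient concrete, the sketch below runs a plain Sinkhorn iteration to compute the entropic OT cost between samples of a candidate model M(X) and target samples Y. The paper minimizes the debiased Sinkhorn divergence over a tensor-train parametrized model; the map, data and regularization here are toy placeholders.

```python
import numpy as np

# Plain Sinkhorn iteration for the entropic OT cost between two empirical
# measures with uniform weights (1D toy data, hypothetical candidate map M).

def sinkhorn_cost(xs, ys, eps=0.1, n_iter=200):
    C = np.sum((xs[:, None, :] - ys[None, :, :]) ** 2, axis=-1)  # squared distances
    K = np.exp(-C / eps)
    n, m = C.shape
    v = np.ones(m) / m
    for _ in range(n_iter):
        u = (1.0 / n) / (K @ v)
        v = (1.0 / m) / (K.T @ u)
    P = u[:, None] * K * v[None, :]        # approximate optimal transport plan
    return float(np.sum(P * C))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 1))              # stochastic coordinates
Y = np.abs(rng.normal(size=(300, 1)))      # target samples with unknown law
M = lambda x: np.abs(x)                    # candidate (non-injective) model
print(sinkhorn_cost(M(X), Y), sinkhorn_cost(X, Y))
```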


Variational Monte Carlo - Bridging concepts of machine learning and high dimensional partial differential equations

2018, Eigel, Martin, Trunschke, Philipp, Schneider, Reinhold, Wolf, Sebastian

A statistical learning approach for parametric PDEs related to Uncertainty Quantification is derived. The method is based on the minimization of an empirical risk on a selected model class and is shown to be applicable to a broad range of problems. A general unified convergence analysis is derived which takes into account both the approximation and the statistical errors. In this way, theoretical results from numerical analysis and statistics are combined. Numerical experiments illustrate the performance of the method with the model class of hierarchical tensors.
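The core recipe, minimizing an empirical risk over a model class using samples of the parameter-to-solution map, can be sketched in a few lines. The toy solution map and the plain polynomial model class below merely stand in for the parametric PDE and the hierarchical tensor class used in the paper.

```python
import numpy as np

# Empirical risk minimization for a parametric problem: sample the parameter,
# evaluate the (here: analytic toy) solution map, and fit a model class in the
# least-squares sense. The Legendre basis replaces the hierarchical tensors.

rng = np.random.default_rng(0)

def solution(y):                      # hypothetical parameter-to-solution map
    return 1.0 / (2.0 + y)

y_train = rng.uniform(-1.0, 1.0, 200)
u_train = solution(y_train)

degree = 6
V = np.polynomial.legendre.legvander(y_train, degree)     # design matrix
coeffs, *_ = np.linalg.lstsq(V, u_train, rcond=None)      # minimize empirical risk

y_test = rng.uniform(-1.0, 1.0, 2000)
error = np.max(np.abs(np.polynomial.legendre.legval(y_test, coeffs) - solution(y_test)))
print("max error on test samples:", error)
```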


Dynamical low-rank approximations of solutions to the Hamilton--Jacobi--Bellman equation

2021, Eigel, Martin, Schneider, Reinhold, Sommer, David

We present a novel method to approximate optimal feedback laws for nonlinear optimal control based on low-rank tensor train (TT) decompositions. The approach is based on the Dirac-Frenkel variational principle with the modification that the optimisation uses an empirical risk. Compared to current state-of-the-art TT methods, our approach exhibits a greatly reduced computational burden while achieving comparable results. A rigorous description of the numerical scheme and demonstrations of its performance are provided.


Efficient approximation of high-dimensional exponentials by tensor networks

2021, Eigel, Martin, Farchmin, Nando, Heidenreich, Sebastian, Trunschke, Philipp

In this work, a general approach to computing a compressed representation of the exponential exp(h) of a high-dimensional function h is presented. Such exponential functions play an important role in several problems in Uncertainty Quantification, e.g. the approximation of log-normal random fields or the evaluation of Bayesian posterior measures. Usually, these high-dimensional objects are numerically intractable and can only be accessed pointwise with sampling methods. In contrast, the proposed method constructs a functional representation of the exponential by exploiting its nature as a solution of an ordinary differential equation. The application of a Petrov-Galerkin scheme to this equation provides a tensor train representation of the solution, for which we derive an efficient and reliable a posteriori error estimator. Numerical experiments with a log-normal random field and a Bayesian likelihood illustrate the performance of the approach in comparison to other recent low-rank representations for the respective applications. Although the present work considers only a specific differential equation, the presented method can be applied in a more general setting. We show that the composition of a generic holonomic function and a high-dimensional function corresponds to a differential equation that can be used in our method. Moreover, the differential equation can be modified to adapt the norm in the a posteriori error estimates to the problem at hand.
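A one-dimensional caricature of the ODE idea is easy to write down: expand u(t, y) in an orthonormal Legendre basis, project the equation u' = h·u onto that basis, and propagate the coefficients from t = 0 to t = 1. The basis, the exponent h and the plain Galerkin projection below are simplifications of the paper's tensor-train Petrov-Galerkin scheme.

```python
import numpy as np

# Functional representation of exp(h) via the ODE u'(t) = h * u(t), u(0) = 1,
# discretized by a Galerkin projection onto an orthonormal Legendre basis
# (toy 1D version; h and the basis size are invented).

deg = 12
nodes, weights = np.polynomial.legendre.leggauss(64)        # quadrature on [-1, 1]

def h(y):                                                    # the exponent
    return 0.7 * y + 0.3 * y**2

# orthonormal Legendre basis evaluated at the quadrature nodes
P = np.polynomial.legendre.legvander(nodes, deg)
P = P * np.sqrt((2 * np.arange(deg + 1) + 1) / 2)            # L2([-1, 1]) normalization

# Galerkin matrix A[k, j] = integral of h * phi_j * phi_k  (symmetric)
A = P.T @ (weights[:, None] * h(nodes)[:, None] * P)

# u(1) = exp(A) u(0), with u(0) the coefficients of the constant function 1
u0 = P.T @ (weights * np.ones_like(nodes))
lam, Q = np.linalg.eigh(A)
u1 = Q @ (np.exp(lam) * (Q.T @ u0))

# compare against the exact exp(h) at the quadrature nodes
err = np.max(np.abs(P @ u1 - np.exp(h(nodes))))
print("max pointwise error:", err)
```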


Comparison of monomorphic and polymorphic approaches for uncertainty quantification with experimental investigations

2019, Drieschner, Martin, Eigel, Martin, Gruhlke, Robert, Hömberg, Dietmar, Petryna, Yuri

Unavoidable uncertainties due to natural variability, inaccuracies, imperfections or lack of knowledge are always present in real-world problems. To take them into account within a numerical simulation, probability, possibility or fuzzy set theory, as well as combinations of these, can be used for the description and quantification of uncertainties. In this work, different monomorphic and polymorphic uncertainty models are applied to linear elastic structures with non-periodic perforations in order to analyze their individual usefulness and expressiveness. The first principal stress is used as an indicator for structural failure, which is evaluated and classified. In addition to classical sampling methods, a surrogate model based on artificial neural networks is presented. All methods are compared and assessed with regard to accuracy, efficiency and the resulting numerical predictions, and with respect to their added value. Real experiments on perforated plates under uniaxial tension are used to validate the predictions obtained with the different uncertainty models.
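For the probabilistic branch of such a comparison, a Monte Carlo estimate of a failure probability with and without a cheap surrogate looks roughly as follows. The stress model, load threshold and polynomial surrogate (standing in for the ANN surrogate) are invented for illustration.

```python
import numpy as np

# Monte Carlo estimate of a failure probability P(stress > threshold), once
# with the "true" model and once with a cheap surrogate; all quantities here
# are hypothetical toy choices.

rng = np.random.default_rng(0)

def stress(r):                                   # toy first-principal-stress model
    return 1.0 / (1.0 - 0.8 * r) ** 2

r_train = np.linspace(0.0, 0.6, 15)              # a few expensive model evaluations
surrogate = np.poly1d(np.polyfit(r_train, stress(r_train), deg=4))

r_samples = rng.uniform(0.0, 0.6, 100_000)       # random imperfection parameter
threshold = 3.0
print("model:    ", np.mean(stress(r_samples) > threshold))
print("surrogate:", np.mean(surrogate(r_samples) > threshold))
```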


Local surrogate responses in the Schwarz alternating method for elastic problems on random voided domains

2022, Drieschner, Martin, Gruhlke, Robert, Petryna, Yuri, Eigel, Martin, Hömberg, Dietmar

Imperfections and inaccuracies in real technical products often influence the mechanical behavior and the overall structural reliability. The prediction of real stress states and possibly resulting failure mechanisms is essential and a real challenge, e.g. in the design process. In this contribution, imperfections in elastic materials such as air voids in adhesive bonds between fiber-reinforced composites are investigated. They are modeled as arbitrarily shaped and positioned. The focus is on local displacement values as well as on associated stress concentrations caused by the imperfections. For this purpose, the resulting complex random one-scale finite element model is solved numerically by a newly developed surrogate model based on an overlapping domain decomposition scheme derived from the Schwarz alternating method. Here, the actual response of local subproblems associated with isolated material imperfections is determined by a single appropriate surrogate model that allows for an accelerated propagation of randomness. The efficiency of the method is demonstrated for imperfections with elliptical and ellipsoidal shape in 2D and 3D and extended to arbitrarily shaped voids. For the latter, a local surrogate model based on artificial neural networks (ANN) is constructed. Finally, a comparison with experimental results validates the numerical predictions for a real engineering problem.
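The sketch below illustrates the multiplicative Schwarz alternating iteration on two overlapping subdomains of a 1D Poisson problem. In the paper, the local subproblem around each void is handled by a surrogate (eventually an ANN); here both subdomain solves are plain finite-difference solves on an invented toy problem.

```python
import numpy as np

# Multiplicative Schwarz alternating method for -u'' = 1 on (0, 1) with
# homogeneous Dirichlet data, using two overlapping subdomains and direct
# finite-difference solves (toy stand-ins for the local surrogate responses).

n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)
u = np.zeros(n)                       # global iterate, u(0) = u(1) = 0

sub1 = slice(0, 130)                  # roughly [0, 0.645]
sub2 = slice(110, n)                  # roughly [0.55, 1], overlapping with sub1

def solve_dirichlet(f_loc, left, right):
    """Direct solve of -u'' = f on a sub-grid with Dirichlet boundary values."""
    m = len(f_loc)
    A = (np.diag(2.0 * np.ones(m - 2))
         - np.diag(np.ones(m - 3), 1)
         - np.diag(np.ones(m - 3), -1)) / h**2
    b = f_loc[1:-1].copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    u_loc = np.empty(m)
    u_loc[0], u_loc[-1] = left, right
    u_loc[1:-1] = np.linalg.solve(A, b)
    return u_loc

for _ in range(50):                   # alternate between the two subdomain solves
    u[sub1] = solve_dirichlet(f[sub1], u[0], u[sub1][-1])
    u[sub2] = solve_dirichlet(f[sub2], u[sub2][0], u[-1])

exact = 0.5 * x * (1.0 - x)           # exact solution for f = 1
print("max error:", np.max(np.abs(u - exact)))
```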


Adaptive non-intrusive reconstruction of solutions to high-dimensional parametric PDEs

2021, Eigel, Martin, Farchmin, Nando, Heidenreich, Sebastian, Trunschke, Philipp

Numerical methods for random parametric PDEs can greatly benefit from adaptive refinement schemes, in particular when functional approximations are computed as in stochastic Galerkin and stochastic collocation methods. This work is concerned with a non-intrusive generalization of the adaptive Galerkin FEM with residual-based error estimation. It combines the non-intrusive character of a randomized least-squares method with the a posteriori error analysis of stochastic Galerkin methods. The proposed approach uses the Variational Monte Carlo method to obtain a quasi-optimal low-rank approximation of the Galerkin projection in a highly efficient hierarchical tensor format. We derive an adaptive refinement algorithm which is steered by a reliable error estimator. In contrast to stochastic Galerkin methods, the approach is easily applicable to a wide range of problems, enabling a fully automated adjustment of all discretization parameters. Benchmark examples with affine and (unbounded) lognormal coefficient fields illustrate the performance of the non-intrusive adaptive algorithm, showing best-in-class performance.
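A drastically simplified version of the adaptive loop: fit a least-squares reconstruction, evaluate a sample-based error indicator, and enrich the basis until the indicator falls below a tolerance. The toy solution map, the plain Legendre basis and the validation-sample indicator replace the residual-based estimator and the hierarchical tensor format of the paper.

```python
import numpy as np

# Adaptive non-intrusive reconstruction sketch: enlarge the polynomial basis
# until a sample-based error indicator meets the tolerance (all choices toy).

rng = np.random.default_rng(0)

def solution(y):                               # hypothetical parameter-to-solution map
    return np.exp(-y) / (2.0 + y)

y_train, y_val = rng.uniform(-1, 1, 400), rng.uniform(-1, 1, 400)
u_train, u_val = solution(y_train), solution(y_val)

degree, tol = 1, 1e-10
while True:
    V = np.polynomial.legendre.legvander(y_train, degree)
    coeffs, *_ = np.linalg.lstsq(V, u_train, rcond=None)        # least-squares fit
    indicator = np.mean((np.polynomial.legendre.legval(y_val, coeffs) - u_val) ** 2)
    print(f"degree {degree:2d}, error indicator {indicator:.2e}")
    if indicator < tol or degree >= 20:
        break
    degree += 1                                 # refinement step: enrich the basis
```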


Low-rank tensor reconstruction of concentrated densities with application to Bayesian inversion

2019, Eigel, Martin, Gruhlke, Robert, Marschall, Manuel

A novel method for the accurate functional approximation of possibly highly concentrated probability densities is developed. It is based on the combination of several modern techniques such as transport maps and non-intrusive reconstructions of low-rank tensor representations. The central idea is to carry out computations for statistical quantities of interest, such as moments, with respect to a convenient reference measure that is approximated by a numerical transport, leading to a perturbed prior. Subsequently, a coordinate transformation yields a beneficial setting for the further function approximation. An efficient layer-based transport construction is realized by means of the Variational Monte Carlo (VMC) method. The convergence analysis covers all terms introduced by the different (deterministic and statistical) approximations in the Hellinger distance and the Kullback-Leibler divergence. Important applications are presented, and in particular the context of Bayesian inverse problems is illuminated, which is a central motivation for the developed approach. Several numerical examples illustrate the efficacy for densities of different complexity.
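The benefit of the coordinate transformation can be seen already with an affine transport map in one dimension: quadrature for the moments of a sharply concentrated, unnormalized density becomes well-conditioned after pulling it back to a standard-normal reference. The density, the Laplace-type map and the Gauss-Hermite rule below are toy choices; the paper constructs the transport in layers with VMC-trained low-rank tensor representations.

```python
import numpy as np

# Moments of a concentrated, unnormalized density via a change of coordinates
# x = mu + sigma * z to a standard-normal reference (toy 1D example).

def log_density(x):                            # hypothetical unnormalized posterior
    return -0.5 * ((x - 0.73) / 0.003) ** 2 - 0.1 * x**4

mu, sigma = 0.73, 0.003                        # e.g. from a cheap Laplace approximation

# probabilists' Gauss-Hermite rule: integral of f(z) exp(-z^2/2) dz ~ sum w_i f(z_i)
z, w = np.polynomial.hermite_e.hermegauss(40)
x = mu + sigma * z
vals = np.exp(log_density(x) + 0.5 * z**2)     # integrand in reference coordinates

Z = sigma * np.sum(w * vals)                   # normalization constant
mean = sigma * np.sum(w * vals * x) / Z        # posterior mean
print("normalization:", Z, "mean:", mean)
```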