Search Results

Now showing 1 - 6 of 6
  • Item
    Patch-Wise Adaptive Weights Smoothing in R
    (Los Angeles, Calif. : UCLA, Dept. of Statistics, 2020) Polzehl, Jörg; Papafitsoros, Kostas; Tabelow, Karsten
    Image reconstruction from noisy data has a long history of methodological development and is based on a variety of ideas. In this paper we introduce a new method, called patch-wise adaptive smoothing, which extends the propagation-separation approach by comparing local patches of image intensities to define locally adaptive weighting schemes for an improved balance between reduced variability and bias in the reconstruction result. We present the implementation of the new method in the R package aws and demonstrate its properties on a number of examples, in comparison with other state-of-the-art image reconstruction methods. © 2020, American Statistical Association. All rights reserved.
  • Item
    Variable Step Mollifiers and Applications
    (Berlin ; Heidelberg : Springer, 2020) Hintermüller, Michael; Papafitsoros, Kostas; Rautenberg, Carlos N.
    We consider a mollifying operator with variable step that, in contrast to standard mollification, is able to preserve the boundary values of functions. We prove boundedness of the operator in all of the basic Lebesgue, Sobolev, and BV spaces, together with corresponding approximation results. The results are then applied to extend recently developed theory concerning the density of convex intersections. © 2020, The Author(s).
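A toy NumPy sketch (not the paper's operator) of the defect that motivates the item above: standard mollification with a fixed-width kernel does not preserve boundary values, even for the constant function 1. The kernel choice and zero extension here are illustrative assumptions.

```python
import numpy as np

def mollify_fixed_step(f, width):
    """Smooth samples f by convolution with a normalized triangular kernel
    of half-width `width`; outside the domain f is extended by zero."""
    k = np.concatenate([np.arange(1, width + 1), np.arange(width - 1, 0, -1)])
    k = k / k.sum()                        # kernel weights sum to 1
    return np.convolve(f, k, mode="same")  # zero extension beyond the boundary

f = np.ones(101)                  # the constant function 1, sampled on [0, 1]
g = mollify_fixed_step(f, 10)

print(g[50])   # interior value: the kernel averages only ones, so ~1
print(g[0])    # boundary value: strictly below 1, since zero padding leaks in
```

A variable-step mollifier, as in the paper, shrinks the smoothing radius toward the boundary precisely to remove this leakage.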
  • Item
    Optimization with learning-informed differential equation constraints and its applications
    (Les Ulis : EDP Sciences, 2022) Dong, Guozhi; Hintermüller, Michael; Papafitsoros, Kostas
    Inspired by applications in the optimal control of semilinear elliptic partial differential equations and in physics-integrated imaging, differential-equation-constrained optimization problems with constituents that are only accessible through data-driven techniques are studied. A particular focus is on the analysis of, and on numerical methods for, problems with machine-learned components. For a rather general context, an error analysis is provided, and particular properties resulting from artificial-neural-network-based approximations are addressed. Moreover, for each of the two inspiring applications, analytical details are presented and numerical results are provided.
  • Item
    Dualization and automatic distributed parameter selection of total generalized variation via bilevel optimization
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2020) Hintermüller, Michael; Papafitsoros, Kostas; Rautenberg, Carlos N.; Sun, Hongpeng
    Total Generalized Variation (TGV) regularization in image reconstruction relies on an infimal-convolution-type combination of generalized first- and second-order derivatives. This helps to avoid the staircasing effect of Total Variation (TV) regularization while still preserving sharp contrasts in images. The associated regularization effect crucially hinges on two parameters whose proper adjustment is a challenging task. In this work, a bilevel optimization framework with a suitable statistics-based upper-level objective is proposed in order to select these parameters automatically. The framework allows for spatially varying parameters, thus enabling better recovery in high-detail image areas. A rigorous dualization framework is established, and for the numerical solution, two Newton-type methods for the lower-level problem, i.e. the image reconstruction problem, and two bilevel TGV algorithms are introduced. Denoising tests confirm that automatically selected distributed regularization parameters generally lead to improved reconstructions when compared with results for scalar parameters.
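For orientation only, a NumPy sketch of the simplest relative of the lower-level problem above: scalar-parameter TV denoising of the ROF type, solved by gradient descent on a smoothed energy. This is not the paper's method (no TGV, no bilevel parameter selection, no Newton solver); it only shows the role of the single regularization weight `lam` that the paper replaces by automatically selected, spatially varying parameters.

```python
import numpy as np

def tv_denoise(f, lam, tau=0.1, eps=1e-2, iters=200):
    """Minimize 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)
    by explicit gradient descent with step tau (eps smooths the TV term)."""
    u = f.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
        uy = np.diff(u, axis=0, append=u[-1:, :])   # Neumann boundary
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag                 # normalized gradient field
        dx = px - np.roll(px, 1, axis=1); dx[:, 0] = px[:, 0]
        dy = py - np.roll(py, 1, axis=0); dy[0, :] = py[0, :]
        u = u - tau * ((u - f) - lam * (dx + dy))   # data term minus lam*div p
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0   # piecewise-constant image
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy, lam=0.15)
print(np.mean((noisy - clean)**2), np.mean((denoised - clean)**2))
```

On such piecewise-constant data TV already performs well; TGV's second-order term matters on smoothly varying images, where TV staircases.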
  • Item
    First-order conditions for the optimal control of learning-informed nonsmooth PDEs
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2022) Dong, Guozhi; Hintermüller, Michael; Papafitsoros, Kostas; Völkner, Kathrin
    In this paper we study the optimal control of a class of semilinear elliptic partial differential equations whose nonlinear constituents are only accessible through data and are approximated by nonsmooth ReLU neural networks. The optimal control problem is studied in detail. In particular, existence and uniqueness of solutions to the state equation are shown, and continuity as well as directional-differentiability properties of the corresponding control-to-state map are established. Based on the approximation capabilities of the pertinent networks, we address fundamental questions regarding the approximation properties of the learning-informed control-to-state map and of the solution of the corresponding optimal control problem. Finally, several stationarity conditions are derived, based on different notions of generalized differentiability.
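A two-line toy illustration (mine, not the paper's) of the nonsmoothness at stake: the ReLU function is directionally differentiable everywhere but not differentiable at 0, because its one-sided derivatives there do not combine into a linear map. This is exactly the kind of structure that forces the generalized stationarity notions the abstract mentions.

```python
def relu(x):
    """ReLU activation: max(x, 0)."""
    return x if x > 0 else 0.0

def directional_derivative(f, x, h):
    """One-sided limit (f(x + t*h) - f(x)) / t as t -> 0+, approximated with
    a small fixed t; exact here since ReLU is piecewise linear near 0."""
    t = 1e-8
    return (f(x + t * h) - f(x)) / t

print(directional_derivative(relu, 0.0, 1.0))    # derivative in direction +1
print(directional_derivative(relu, 0.0, -1.0))   # derivative in direction -1
```

The forward directional derivative at 0 is 1 while the backward one is 0, not -1, so no single linear map (i.e. no Gateaux derivative) reproduces both.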
  • Item
    Optimization with learning-informed differential equation constraints and its applications
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2020) Dong, Guozhi; Hintermüller, Michael; Papafitsoros, Kostas
    Inspired by applications in the optimal control of semilinear elliptic partial differential equations and in physics-integrated imaging, differential-equation-constrained optimization problems with constituents that are only accessible through data-driven techniques are studied. A particular focus is on the analysis of, and on numerical methods for, problems with machine-learned components. For a rather general context, an error analysis is provided, and particular properties resulting from artificial-neural-network-based approximations are addressed. Moreover, for each of the two inspiring applications, analytical details are presented and numerical results are provided.