Search Results

Now showing 1–10 of 17
  • Item
    Assessment of lidar depolarization uncertainty by means of a polarimetric lidar simulator
(München : European Geophysical Union, 2016) Bravo-Aranda, Juan Antonio; Belegante, Livio; Freudenthaler, Volker; Alados-Arboledas, Lucas; Nicolae, Doina; Granados-Muñoz, María José; Guerrero-Rascado, Juan Luis; Amodeo, Aldo; D'Amico, Giuseppe; Engelmann, Ronny; Pappalardo, Gelsomina; Kokkalis, Panos; Mamouri, Rodanthy; Papayannis, Alex; Navas-Guzmán, Francisco; Olmo, Francisco José; Wandinger, Ulla; Amato, Francesco; Haeffelin, Martial
Lidar depolarization measurements distinguish between spherical and non-spherical aerosol particles based on the change of the polarization state between the emitted and received signal. The particle shape information in combination with other aerosol optical properties allows the characterization of different aerosol types and the retrieval of aerosol particle microphysical properties. Regarding the microphysical inversions, the lidar depolarization technique is becoming a key method since particle shape information can be used by algorithms based on spheres and spheroids, optimizing the retrieval procedure. Thus, the identification of the depolarization error sources and the quantification of their effects are crucial. This work presents a new tool to assess the systematic error of the volume linear depolarization ratio (δ), combining the Stokes–Müller formalism and the complete sampling of the error space using the lidar model presented in Freudenthaler (2016a). This tool is applied to a synthetic lidar system and to several EARLINET lidars with depolarization capabilities at 355 or 532 nm. The lidar systems show relative errors of δ larger than 100 % for δ values around molecular linear depolarization ratios (∼0.004) and up to ∼10 % for δ = 0.45. However, one system shows relative errors of only 25 and 0.22 % for δ = 0.004 and δ = 0.45, respectively, and gives an example of how proper identification and reduction of the main error sources can drastically reduce the systematic errors of δ. In this regard, we provide some indications of how to reduce the systematic errors.
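The strong sensitivity of δ at molecular values can be illustrated with a minimal sketch (our simplified misalignment model, not the paper's Stokes–Müller simulator): a small rotation α of the polarizing beam splitter relative to the laser polarization plane mixes the two detection channels, and the resulting systematic error is far larger, in relative terms, near the molecular δ than at δ = 0.45.

```python
import numpy as np

def measured_delta(delta_true, alpha_deg):
    """Measured volume linear depolarization ratio delta* when the
    polarizing beam splitter is rotated by alpha_deg relative to the
    laser polarization plane (simplified channel cross-talk model,
    not the full Stokes-Mueller treatment of Freudenthaler, 2016a)."""
    a = np.radians(alpha_deg)
    c2, s2 = np.cos(a) ** 2, np.sin(a) ** 2
    return (delta_true * c2 + s2) / (c2 + delta_true * s2)

def relative_error(delta_true, alpha_deg):
    """Systematic relative error of delta* in percent."""
    return 100.0 * (measured_delta(delta_true, alpha_deg) - delta_true) / delta_true

# The same 0.5 deg misalignment is far more damaging, relatively,
# at the molecular value delta ~ 0.004 than at delta = 0.45:
for d in (0.004, 0.45):
    print(f"delta = {d}: {relative_error(d, 0.5):+.3f} % relative error")
```

The asymmetry arises because the cross-talk adds a roughly constant offset to the perpendicular channel, which dominates when the true cross-polarized signal is small.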
  • Item
    Surface matters: Limitations of CALIPSO V3 aerosol typing in coastal regions
(München : European Geophysical Union, 2014) Kanitz, T.; Ansmann, A.; Foth, A.; Seifert, P.; Wandinger, U.; Engelmann, R.; Baars, H.; Althausen, D.; Casiccia, C.; Zamorano, F.
In the CALIPSO data analysis, surface type (land/ocean) is used to augment the aerosol characterization. However, this surface-dependent aerosol typing prohibits a correct classification of marine aerosol that is advected from ocean to land. This might result in a systematic overestimation of the particle extinction coefficient and of the aerosol optical thickness (AOT) by up to a factor of 3.5 over land in coastal areas. We present a long-term comparison of CALIPSO and ground-based lidar observations of the aerosol conditions in the coastal environment of southern South America (Punta Arenas, Chile, 53° S), performed in December 2009–April 2010. Punta Arenas is almost entirely influenced by marine particles throughout the year, indicated by a rather low AOT of 0.02–0.04. However, we found an unexpectedly high fraction of continental aerosol in the aerosol types inferred by means of CALIOP observations and, correspondingly, too-high values of particle extinction. Similar features of the CALIOP data analysis are presented for four other coastal areas around the world. Since CALIOP data serve as important input for global climate models, the influence of this systematic error was estimated by means of simplified radiative-transfer calculations.
  • Item
    A complete representation of uncertainties in layer-counted paleoclimatic archives
    (München : European Geopyhsical Union, 2017) Boers, Niklas; Goswami, Bedartha; Ghil, Michael
    Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records – such as ice cores, sediments, corals, or tree rings – as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. Our method is next applied to deriving the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5–52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.
  • Item
    Comparison of storm damage functions and their performance
    (Göttingen : Copernicus GmbH, 2015) Prahl, B.F.; Rybski, D.; Burghoff, O.; Kropp, J.P.
  • Item
    Comparison of correlation analysis techniques for irregularly sampled time series
    (Göttingen : Copernicus GmbH, 2011) Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques. All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular sampling, interpolation bias and RMSE increase strongly. In the analysis of highly irregular time series, we find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme. For the cross correlation function (CCF) the RMSE is then lower by 60 %. The application of the Lomb-Scargle technique gave results comparable to the kernel methods in the univariate case, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods. We illustrate the performance of the interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques, and it is suitable for large-scale application to paleo-data.
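The kernel idea can be sketched in a few lines (a minimal illustration with hypothetical bandwidth and lag choices, not the benchmarked implementation): instead of interpolating onto a regular grid, every pair of observations contributes to the correlation at lag h with a Gaussian weight on how far its time difference is from h.

```python
import numpy as np

def gaussian_kernel_acf(t, x, lags, bandwidth):
    """Gaussian-kernel autocorrelation estimator for an irregularly
    sampled series: pairwise products x_i * x_j are weighted by how
    close their time difference t_j - t_i lies to the requested lag."""
    x = (x - x.mean()) / x.std()
    dt = t[None, :] - t[:, None]      # all pairwise time differences
    prod = x[:, None] * x[None, :]    # all pairwise products
    acf = []
    for h in lags:
        w = np.exp(-0.5 * ((dt - h) / bandwidth) ** 2)
        acf.append((w * prod).sum() / w.sum())
    return np.array(acf)

# A noisy oscillation (period 50) observed at irregular times:
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 400, 400))
x = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(t.size)
acf = gaussian_kernel_acf(t, x, lags=[0.0, 25.0, 50.0], bandwidth=2.0)
print(acf)  # high at lag 0, negative at the half period, high again at 50
```

No resampling step is involved, which is why the estimator avoids the interpolation bias the abstract reports for very irregular sampling.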
  • Item
    Optimal and robust a posteriori error estimates in L∞(L2) for the approximation of Allen-Cahn equations past singularities
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2009) Bartels, Sören; Müller, Rüdiger
    Optimal a posteriori error estimates in L∞(L2) are derived for the finite element approximation of Allen-Cahn equations. The estimates depend on the inverse of a small parameter only in a low order polynomial and are valid past topological changes of the evolving interface. The error analysis employs an elliptic reconstruction of the approximate solution and applies to a large class of conforming, nonconforming, mixed, and discontinuous Galerkin methods. Numerical experiments illustrate the theoretical results.
  • Item
    Error control for the approximation of Allen-Cahn and Cahn-Hilliard equations with a logarithmic potential
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2010) Bartels, Sören; Müller, Rüdiger
    A fully computable upper bound for the finite element approximation error of Allen-Cahn and Cahn-Hilliard equations with logarithmic potentials is derived. Numerical experiments show that for the sharp interface limit this bound is robust past topological changes. Modifications of the abstract results to derive quasi-optimal error estimates in different norms for lowest order finite element methods are discussed and lead to weaker conditions on the residuals under which the conditional error estimates hold.
  • Item
    A unified analysis of Algebraic Flux Correction schemes for convection-diffusion equations
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2018) Barrenechea, Gabriel R.; John, Volker; Knobloch, Petr; Rankin, Richard
Recent results on the numerical analysis of Algebraic Flux Correction (AFC) finite element schemes for scalar convection-diffusion equations are reviewed and presented in a unified way. A general form of the method is presented using a link between AFC schemes and nonlinear edge-based diffusion schemes. Then specific versions of the method, that is, different definitions of the flux limiters, are reviewed and their main results are stated. Numerical studies compare the different versions of the scheme.
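The link between AFC schemes and edge-based artificial diffusion can be sketched on a 1D convection-diffusion model problem (a minimal illustration of the low-order operator only; the limited antidiffusive fluxes, i.e. the actual flux correction, are omitted):

```python
import numpy as np

def solve_1d(eps=1e-3, b=1.0, n=50, low_order=True):
    """Linear FEM for -eps*u'' + b*u' = 0 on (0,1), u(0)=0, u(1)=1,
    on a uniform mesh.  With low_order=True the artificial diffusion
    matrix D (off-diagonal entries -max(0, a_ij, a_ji), diagonal fixed
    by zero row sums) is added, giving the monotone low-order operator
    that AFC schemes start from before re-adding limited fluxes."""
    h = 1.0 / n
    m = n + 1
    A = np.zeros((m, m))
    for e in range(n):                  # element-wise assembly
        i, j = e, e + 1
        A[i, i] += eps / h - b / 2
        A[i, j] += -eps / h + b / 2
        A[j, i] += -eps / h - b / 2
        A[j, j] += eps / h + b / 2
    if low_order:
        D = np.zeros_like(A)
        for i in range(m):
            for j in range(m):
                if i != j:
                    D[i, j] = -max(0.0, A[i, j], A[j, i])
        np.fill_diagonal(D, -D.sum(axis=1))
        A += D
    A[0, :] = 0.0; A[0, 0] = 1.0        # Dirichlet u(0) = 0
    A[n, :] = 0.0; A[n, n] = 1.0        # Dirichlet u(1) = 1
    rhs = np.zeros(m)
    rhs[n] = 1.0
    return np.linalg.solve(A, rhs)

u_gal = solve_1d(low_order=False)  # mesh Peclet number b*h/(2*eps) = 10
u_low = solve_1d(low_order=True)
print("Galerkin  min:", u_gal.min())   # spurious undershoot below 0
print("low-order min:", u_low.min())   # solution stays within [0, 1]
```

The plain Galerkin solution oscillates once the mesh Péclet number exceeds 1, while the low-order operator satisfies a discrete maximum principle; the flux limiters reviewed in the paper then re-add as much antidiffusion as possible without destroying that property.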
  • Item
    Functional a posteriori error estimation for stationary reaction-convection-diffusion problems
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2014) Eigel, Martin; Samrowski, Tatiana
A functional-type a posteriori error estimator for the finite element discretisation of the stationary reaction-convection-diffusion equation is derived. In the case of dominant convection, the solution for this class of problems typically exhibits boundary layers and shock-front-like areas with steep gradients. This renders an accurate numerical solution very demanding, and appropriate techniques for the adaptive resolution of regions with large approximation errors are crucial. Functional error estimators as derived here contain no mesh-dependent constants and provide guaranteed error bounds for any conforming approximation. To evaluate the error estimator, a minimisation problem is solved which does not require any Galerkin orthogonality or any specific properties of the employed approximation space. Based on a set of numerical examples, we assess the performance of the new estimator. It is observed to exhibit good efficiency also for convection-dominated problem settings.
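The flavour of such guaranteed, constant-free bounds can be shown on the pure reaction-diffusion special case (our illustration of the functional-majorant idea, not the estimator derived in the paper): for $-\Delta u + \kappa^2 u = f$ with $u \in H^1_0(\Omega)$ and constant $\kappa > 0$, any conforming approximation $v \in H^1_0(\Omega)$ and any auxiliary flux $y \in H(\operatorname{div},\Omega)$ satisfy

```latex
\|\nabla(u - v)\|^2 + \kappa^2 \|u - v\|^2
\;\le\;
\|\nabla v - y\|^2
+ \kappa^{-2}\,\bigl\| f + \operatorname{div} y - \kappa^2 v \bigr\|^2 ,
```

which follows by testing the weak form with $e = u - v$, inserting the identity $\int_\Omega (e \operatorname{div} y + y \cdot \nabla e)\,dx = 0$, and applying the Cauchy–Schwarz inequality. The right-hand side is fully computable for any given $v$ and $y$, and minimising it over $y$ tightens the bound; this is the simplest instance of the kind of minimisation problem mentioned in the abstract.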
  • Item
    Reliable averaging for the primal variable in the Courant FEM and hierarchical error estimators on red-refined meshes
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2016) Carstensen, Carsten; Eigel, Martin
A hierarchical a posteriori error estimator for the first-order finite element method (FEM) on a red-refined triangular mesh is presented for the 2D Poisson model problem. Reliability and efficiency with explicit constants are proved for triangulations with inner angles smaller than or equal to π/2. The error estimator does not rely on any saturation assumption and is valid even in the pre-asymptotic regime on arbitrarily coarse meshes. The evaluation of the estimator is a simple post-processing of the piecewise linear FEM solution, without any extra solve, plus a higher-order approximation term. The results also allow the striking observation that arbitrary local averaging of the primal variable leads to reliable and efficient error estimation. Several numerical experiments illustrate the performance of the proposed a posteriori error estimator for computational benchmarks.
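Red refinement itself, the mesh operation the estimator is built on, is simple to state: each triangle is split into four congruent children by connecting its edge midpoints. A minimal sketch of the operation (not tied to any particular FEM code):

```python
import numpy as np

def red_refine(verts, tris):
    """One uniform red refinement of a triangular mesh: every triangle
    (a, b, c) is replaced by four congruent children obtained by
    connecting the midpoints of its edges.  Shared edges reuse the
    same midpoint vertex, so the refined mesh remains conforming."""
    verts = list(map(tuple, verts))
    midpoint = {}
    def mid(a, b):
        key = (min(a, b), max(a, b))     # edge identified by sorted pair
        if key not in midpoint:
            pa, pb = np.array(verts[a]), np.array(verts[b])
            verts.append(tuple((pa + pb) / 2))
            midpoint[key] = len(verts) - 1
        return midpoint[key]
    new_tris = []
    for a, b, c in tris:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), new_tris

verts, tris = red_refine([(0, 0), (1, 0), (0, 1)], [(0, 1, 2)])
print(len(tris), "triangles,", len(verts), "vertices")  # 4 triangles, 6 vertices
```

Because all four children are similar to the parent, red refinement preserves the mesh's interior angles, which is why the angle condition (inner angles at most π/2) carries over from the coarse triangulation to every refined level.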