Search Results

Now showing 1–10 of 17
  • Item
    Assessment of lidar depolarization uncertainty by means of a polarimetric lidar simulator
(München : European Geophysical Union, 2016) Bravo-Aranda, Juan Antonio; Belegante, Livio; Freudenthaler, Volker; Alados-Arboledas, Lucas; Nicolae, Doina; Granados-Muñoz, María José; Guerrero-Rascado, Juan Luis; Amodeo, Aldo; D'Amico, Giuseppe; Engelmann, Ronny; Pappalardo, Gelsomina; Kokkalis, Panos; Mamouri, Rodanthy; Papayannis, Alex; Navas-Guzmán, Francisco; Olmo, Francisco José; Wandinger, Ulla; Amato, Francesco; Haeffelin, Martial
Lidar depolarization measurements distinguish between spherical and non-spherical aerosol particles based on the change of the polarization state between the emitted and received signal. The particle shape information in combination with other aerosol optical properties allows the characterization of different aerosol types and the retrieval of aerosol particle microphysical properties. Regarding the microphysical inversions, the lidar depolarization technique is becoming a key method since particle shape information can be used by algorithms based on spheres and spheroids, optimizing the retrieval procedure. Thus, the identification of the depolarization error sources and the quantification of their effects are crucial. This work presents a new tool to assess the systematic error of the volume linear depolarization ratio (δ), combining the Stokes–Müller formalism and the complete sampling of the error space using the lidar model presented in Freudenthaler (2016a). This tool is applied to a synthetic lidar system and to several EARLINET lidars with depolarization capabilities at 355 or 532 nm. The lidar systems show relative errors of δ larger than 100 % for δ values around molecular linear depolarization ratios (∼ 0.004) and up to ∼ 10 % for δ = 0.45. However, one system shows relative errors of only 25 and 0.22 % for δ = 0.004 and δ = 0.45, respectively, and gives an example of how a proper identification and reduction of the main error sources can drastically reduce the systematic errors of δ. In this regard, we provide some indications of how to reduce the systematic errors.
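The effect described in this abstract, large relative errors of δ near the molecular value despite small instrumental imperfections, can be illustrated with a toy model. This is a hypothetical sketch, not the Stokes–Müller simulator of the paper: the single cross-talk parameter `eps` and the leakage model are assumptions for illustration only.

```python
# Toy illustration (hypothetical model, not the paper's simulator):
# delta is estimated from the cross- to parallel-polarized signal ratio,
# and a fraction `eps` of the parallel return leaks into the cross channel.

def measured_delta(true_delta, eps):
    """Biased delta under a simple cross-talk (leakage) model."""
    p_par = 1.0 / (1.0 + true_delta)        # parallel fraction of the return
    p_cross = true_delta / (1.0 + true_delta)
    meas_cross = p_cross + eps * p_par      # leakage adds to the cross channel
    meas_par = p_par * (1.0 - eps)          # ... and is lost from the parallel one
    return meas_cross / meas_par

def relative_error(true_delta, eps):
    return abs(measured_delta(true_delta, eps) - true_delta) / true_delta * 100.0

# Near the molecular value (delta ~ 0.004) even 0.5 % cross-talk gives a
# relative error above 100 %, while for delta = 0.45 it stays at a few percent:
for d in (0.004, 0.45):
    print(f"delta = {d}: {relative_error(d, 0.005):.1f} % relative error")
```

The asymmetry arises because the absolute bias from cross-talk is roughly constant, so its relative impact is largest where δ itself is smallest.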
  • Item
    Surface matters: Limitations of CALIPSO V3 aerosol typing in coastal regions
(München : European Geophysical Union, 2014) Kanitz, T.; Ansmann, A.; Foth, A.; Seifert, P.; Wandinger, U.; Engelmann, R.; Baars, H.; Althausen, D.; Casiccia, C.; Zamorano, F.
In the CALIPSO data analysis, surface type (land/ocean) is used to augment the aerosol characterization. However, this surface-dependent aerosol typing prohibits a correct classification of marine aerosol that is advected from ocean to land. This might result in a systematic overestimation of the particle extinction coefficient and of the aerosol optical thickness (AOT) by up to a factor of 3.5 over land in coastal areas. We present a long-term comparison of CALIPSO and ground-based lidar observations of the aerosol conditions in the coastal environment of southern South America (Punta Arenas, Chile, 53° S), performed in December 2009–April 2010. Punta Arenas is almost entirely influenced by marine particles throughout the year, indicated by a rather low AOT of 0.02–0.04. However, we found an unexpectedly high fraction of continental aerosol in the aerosol types inferred by means of CALIOP observations and, correspondingly, too high values of particle extinction. Similar features of the CALIOP data analysis are presented for four other coastal areas around the world. Since CALIOP data serve as important input for global climate models, the influence of this systematic error was estimated by means of simplified radiative-transfer calculations.
  • Item
    A complete representation of uncertainties in layer-counted paleoclimatic archives
    (München : European Geopyhsical Union, 2017) Boers, Niklas; Goswami, Bedartha; Ghil, Michael
Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records – such as ice cores, sediments, corals, or tree rings – as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. We next apply our method to derive the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5–52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.
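The central point of the abstract, that seemingly small per-layer counting errors accumulate into substantial age uncertainty, can be sketched with a simple Monte Carlo model. This is not the authors' Bayesian method: the miss/double-count probabilities below are hypothetical round numbers chosen only to show the accumulation effect.

```python
import numpy as np

# Hypothetical sketch of layer-counting error propagation: each counted layer
# is nominally one year, but with probability p_miss a layer was missed
# (true age is larger) and with p_double one was counted twice (true age
# is smaller). The distribution of the true age widens with depth.

rng = np.random.default_rng(0)

def age_distribution(n_layers, p_miss=0.01, p_double=0.01, n_sim=20000):
    """Sample the true age (yr) corresponding to n_layers counted layers."""
    missed = rng.binomial(n_layers, p_miss, size=n_sim)    # undercounted layers
    doubled = rng.binomial(n_layers, p_double, size=n_sim)  # overcounted layers
    return n_layers + missed - doubled

ages = age_distribution(10000)
print(f"10000 counted layers -> true age {ages.mean():.0f} +/- {ages.std():.0f} yr")
```

With symmetric 1 % error rates the mean age stays near the count, but the spread grows with the number of layers, which is why the oldest parts of a record carry the largest dating uncertainty.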
  • Item
    Comparison of storm damage functions and their performance
    (Göttingen : Copernicus GmbH, 2015) Prahl, B.F.; Rybski, D.; Burghoff, O.; Kropp, J.P.
  • Item
    Comparison of correlation analysis techniques for irregularly sampled time series
    (Göttingen : Copernicus GmbH, 2011) Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as the Lomb-Scargle Fourier transform and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques. All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme, in the analysis of highly irregular time series. For the cross correlation function (CCF) the RMSE is then lower by 60 %. The application of the Lomb-Scargle technique gave results comparable to the kernel methods in the univariate case, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods. We illustrate the performance of interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques and suitable for large-scale application to paleo-data.
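A Gaussian-kernel correlation estimator of the kind compared in this abstract can be sketched as follows. Rather than interpolating onto a regular grid, each observation pair is weighted by how close its time difference is to the requested lag. This is a minimal sketch, not the authors' implementation; the bandwidth `h` and the toy signal are assumptions.

```python
import numpy as np

# Minimal sketch of a Gaussian-kernel cross-correlation estimator for
# irregularly sampled series (tx, x) and (ty, y): no interpolation, just
# kernel weights on all pairwise time differences around the target lag.

def kernel_ccf(tx, x, ty, y, lag, h):
    x = (x - x.mean()) / x.std()            # standardize both series
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]          # all pairwise time differences
    w = np.exp(-0.5 * ((dt - lag) / h) ** 2)  # Gaussian weight around the lag
    return np.sum(w * np.outer(x, y)) / np.sum(w)

# Usage on a toy signal sampled at irregular times:
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 300))
x = np.sin(2 * np.pi * t / 20) + 0.1 * rng.standard_normal(t.size)
# the lag-0 autocorrelation of a standardized series should be close to 1
print(kernel_ccf(t, x, t, x, lag=0.0, h=0.5))
```

Setting `ty = tx` and `y = x` turns the same estimator into the ACF; the bandwidth plays the role that the sampling interval plays for regularly sampled data.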
  • Item
    Finite element pressure stabilizations for incompressible flow problems
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2019) John, Volker; Knobloch, Petr; Wilbrandt, Ulrich
    Discretizations of incompressible flow problems with pairs of finite element spaces that do not satisfy a discrete inf-sup condition require a so-called pressure stabilization. This paper gives an overview and systematic assessment of stabilized methods, including the respective error analysis.
  • Item
    Error analysis of a SUPG-stabilized POD-ROM method for convection-diffusion-reaction equations
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2021) John, Volker; Moreau, Baptiste; Novo, Julia
    A reduced order model (ROM) method based on proper orthogonal decomposition (POD) is analyzed for convection-diffusion-reaction equations. The streamline-upwind Petrov--Galerkin (SUPG) stabilization is used in the practically interesting case of dominant convection, both for the full order method (FOM) and the ROM simulations. The asymptotic choice of the stabilization parameter for the SUPG-ROM is done as proposed in the literature. This paper presents a finite element convergence analysis of the SUPG-ROM method for errors in different norms. The constants in the error bounds are uniform with respect to small diffusion coefficients. Numerical studies illustrate the performance of the SUPG-ROM method.
  • Item
Error analysis of the SUPG finite element discretization of evolutionary convection-diffusion-reaction equations
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2010) John, Volker; Novo, Julia
Conditions on the stabilization parameters are explored for different approaches in deriving error estimates for the SUPG finite element stabilization of time-dependent convection-diffusion-reaction equations that is combined with the backward Euler method. Standard energy arguments lead to estimates for stabilization parameters that depend on the length of the time step. The stabilization vanishes in the time-continuous limit. However, based on numerical experience, this does not seem to be the correct behavior. For this reason, the time-continuous case is analyzed under certain conditions on the coefficients of the equation and the finite element method. An error estimate with the standard order of convergence is derived for stabilization parameters of the same form that is optimal for the steady-state problem. Numerical studies support the analytical results.
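The "form that is optimal for the steady-state problem" referred to in this abstract is, in the classical 1D setting, the SUPG parameter built from the local Péclet number. The snippet below is a sketch of that textbook formula, not the parameter choice of the paper itself, whose precise constants may differ.

```python
import math

# Classical steady-state SUPG stabilization parameter for a 1D
# convection-diffusion problem with convection b, diffusion eps,
# and mesh width h (textbook form; the paper's constants may differ).

def supg_tau(b, eps, h):
    pe = abs(b) * h / (2.0 * eps)                       # local Peclet number
    return h / (2.0 * abs(b)) * (1.0 / math.tanh(pe) - 1.0 / pe)

# In the convection-dominated regime (pe >> 1) tau approaches h / (2|b|);
# in the diffusion-dominated regime (pe -> 0) the stabilization vanishes.
print(supg_tau(1.0, 1e-6, 0.01))   # close to h / (2|b|) = 0.005
print(supg_tau(1.0, 1.0, 0.01))    # nearly zero, diffusion-dominated
```

Crucially, this parameter depends only on the data of the steady problem, not on the time step, which matches the abstract's point that time-step-dependent parameters vanishing in the time-continuous limit are not the right behavior.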
  • Item
    Functional a posteriori error estimation for stationary reaction-convection-diffusion problems
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2014) Eigel, Martin; Samrowski, Tatiana
A functional type a posteriori error estimator for the finite element discretisation of the stationary reaction-convection-diffusion equation is derived. In the case of dominant convection, the solution for this class of problems typically exhibits boundary layers and shock-front-like areas with steep gradients. This renders the accurate numerical solution very demanding, and appropriate techniques for the adaptive resolution of regions with large approximation errors are crucial. Functional error estimators as derived here contain no mesh-dependent constants and provide guaranteed error bounds for any conforming approximation. To evaluate the error estimator, a minimisation problem is solved which does not require any Galerkin orthogonality or any specific properties of the employed approximation space. Based on a set of numerical examples, we assess the performance of the new estimator. It is observed that it exhibits good efficiency also for convection-dominated problem settings.
  • Item
    Optimal and robust a posteriori error estimates in L∞(L2) for the approximation of Allen-Cahn equations past singularities
    (Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik, 2009) Bartels, Sören; Müller, Rüdiger
    Optimal a posteriori error estimates in L∞(L2) are derived for the finite element approximation of Allen-Cahn equations. The estimates depend on the inverse of a small parameter only in a low order polynomial and are valid past topological changes of the evolving interface. The error analysis employs an elliptic reconstruction of the approximate solution and applies to a large class of conforming, nonconforming, mixed, and discontinuous Galerkin methods. Numerical experiments illustrate the theoretical results.