Search Results

Now showing 1 - 8 of 8
  • Item
    3 + 2 + X: what is the most useful depolarization input for retrieving microphysical properties of non-spherical particles from lidar measurements using the spheroid model of Dubovik et al. (2006)?
    (Katlenburg-Lindau : Copernicus, 2019) Tesche, Matthias; Kolgotin, Alexei; Haarig, Moritz; Burton, Sharon P.; Ferrare, Richard A.; Hostetler, Chris A.; Müller, Detlef
    The typical multiwavelength aerosol lidar data set for inversion of optical to microphysical parameters is composed of three backscatter coefficients (β) at 355, 532, and 1064 nm and two extinction coefficients (α) at 355 and 532 nm. This data combination is referred to as a 3β + 2α or 3 + 2 data set. This set of data is sufficient for retrieving some important microphysical particle parameters if the particles have spherical shape. Here, we investigate the effect of including the particle linear depolarization ratio (δ) as a third input parameter for the inversion of lidar data. The inversion algorithm is generally not used if measurements show values of δ that exceed 0.10 at 532 nm, i.e. in the presence of non-spherical particles such as desert dust, volcanic ash, and, under special circumstances, biomass-burning smoke. We use experimental data collected with instruments that are capable of measuring δ at all three lidar wavelengths with an inversion routine that applies the spheroidal light-scattering model of Dubovik et al. (2006) with a fixed axis-ratio distribution to replicate scattering properties of non-spherical particles. The inversion gives the fraction of spheroids required to replicate the optical data as an additional output parameter. This is the first systematic test of the effect of using all theoretically possible combinations of δ taken at 355, 532, and 1064 nm as input in the lidar data inversion. We find that depolarization information of at least one wavelength already provides useful information for the inversion of optical data that have been collected in the presence of non-spherical mineral dust particles. However, any choice of δ will give lower values of the single-scattering albedo than the traditional 3 + 2 data set. We find that input data sets that include δ355 give a spheroid fraction that closely resembles the dust ratio we obtain from using β532 and δ532 in a methodology applied in aerosol-type separation. 
The use of δ355 in data sets of two or three δ reduces the spheroid fraction that is retrieved when using δ532 and δ1064. Use of the latter two parameters without accounting for δ355 generally leads to high spheroid fractions that we consider not trustworthy. The use of three δ instead of two δ, including the constraint that one of these is measured at 355 nm, does not provide any advantage over using 3 + 2 + δ355 for the observations with varying contributions of mineral dust considered here. However, additional measurements at wavelengths different from 355 nm would be desirable for application to a wider range of aerosol scenarios that may include non-spherical smoke particles, which can have values of δ355 that are indistinguishable from those found for mineral dust. We therefore conclude that - depending on measurement capability - the future standard input for inversion of lidar data taken in the presence of mineral dust particles and using the spheroid model of Dubovik et al. (2006) might be 3 + 2 + δ355 or 3 + 2 + δ355 + δ532. © 2019 The Author(s).
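The aerosol-type separation mentioned in the abstract, which derives a dust ratio from β532 and δ532, is commonly done as a one-step separation of a dust and a non-dust backscatter component in depolarization space. A minimal sketch in Python, assuming illustrative end-member depolarization ratios of 0.31 (pure dust) and 0.05 (non-dust); both end-member values are assumptions here, not taken from this paper:

```python
def dust_backscatter_fraction(delta, delta_dust=0.31, delta_nondust=0.05):
    """Estimate the dust fraction of the particle backscatter coefficient
    from the measured particle linear depolarization ratio delta.

    The measured delta is treated as a backscatter-weighted mix of a dust
    and a non-dust end member; the end-member values are illustrative.
    """
    if delta <= delta_nondust:
        return 0.0
    if delta >= delta_dust:
        return 1.0
    # Weighting of the two end members in depolarization space
    return ((delta - delta_nondust) * (1 + delta_dust)) / (
        (delta_dust - delta_nondust) * (1 + delta))

# Example: a measured 532 nm particle depolarization ratio of 0.20
frac = dust_backscatter_fraction(0.20)
```

For δ = 0.20 this attributes roughly two thirds of the backscatter to dust; multiplying the fraction by β532 would give the dust backscatter coefficient itself.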
  • Item
    Target categorization of aerosol and clouds by continuous multiwavelength-polarization lidar measurements
    (Katlenburg-Lindau : Copernicus, 2017) Baars, Holger; Seifert, Patric; Engelmann, Ronny; Wandinger, Ulla
    Absolute calibrated signals at 532 and 1064 nm and the depolarization ratio from a multiwavelength lidar are used to categorize primary aerosol but also clouds in high temporal and spatial resolution. Automatically derived particle backscatter coefficient profiles in low temporal resolution (30 min) are applied to calibrate the lidar signals. From these calibrated lidar signals, new atmospheric parameters in temporally high resolution (quasi-particle-backscatter coefficients) are derived. By using thresholds obtained from multiyear, multisite EARLINET (European Aerosol Research Lidar Network) measurements, four aerosol classes (small; large, spherical; large, non-spherical; mixed, partly non-spherical) and several cloud classes (liquid, ice) are defined. Thus, particles are classified by their physical features (shape and size) instead of by source. The methodology is applied to 2 months of continuous observations (24 h a day, 7 days a week) with the multiwavelength-Raman-polarization lidar PollyXT during the High-Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2) Observational Prototype Experiment (HOPE) in spring 2013. Cloudnet equipment was operated continuously directly next to the lidar and is used for comparison. By discussing three 24 h case studies, it is shown that the aerosol discrimination is very feasible and informative and gives a good complement to the Cloudnet target categorization. Performing the categorization for the 2-month data set of the entire HOPE campaign, almost 1 million pixels (5 min×30 m) could be analysed with the newly developed tool. We find that the majority of the aerosol trapped in the planetary boundary layer (PBL) was composed of small particles as expected for a heavily populated and industrialized area. 
Large, spherical aerosol was observed mostly at the top of the PBL and close to the identified cloud bases, indicating the importance of hygroscopic growth of the particles at high relative humidity. Interestingly, it is found that on several days non-spherical particles were dispersed from the ground into the atmosphere.
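The threshold-based categorization described above reduces, in essence, to a decision tree on the derived optical parameters: a size proxy and the depolarization ratio. A toy sketch; the proxy quantity, thresholds, and boundaries below are illustrative placeholders, not the EARLINET-derived values used in the paper:

```python
def classify_aerosol(colour_ratio, depol_ratio,
                     size_threshold=2.0, depol_threshold=0.10):
    """Toy classifier in the spirit of the paper's scheme: particles are
    categorized by size (via a colour-ratio proxy) and shape (via the
    depolarization ratio). All thresholds here are illustrative only.
    """
    non_spherical = depol_ratio > depol_threshold
    # A low 532/1064 colour ratio indicates a flat spectral dependence,
    # i.e. large particles (illustrative proxy, not the paper's exact one).
    large = colour_ratio < size_threshold
    if large and non_spherical:
        return "large, non-spherical"
    if large:
        return "large, spherical"
    if non_spherical:
        return "mixed, partly non-spherical"
    return "small"

# Example: low colour ratio, low depolarization
label = classify_aerosol(colour_ratio=1.5, depol_ratio=0.02)
```

Here a low colour-ratio proxy combined with low depolarization maps to the "large, spherical" class, the combination the abstract associates with hygroscopic growth near cloud base.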
  • Item
    The Fifth International Workshop on Ice Nucleation phase 2 (FIN-02): Laboratory intercomparison of ice nucleation measurements
    (Katlenburg-Lindau : Copernicus, 2018) DeMott, Paul J.; Möhler, Ottmar; Cziczo, Daniel J.; Hiranuma, Naruki; Petters, Markus D.; Petters, Sarah S.; Belosi, Franco; Bingemer, Heinz G.; Brooks, Sarah D.; Budke, Carsten; Burkert-Kohn, Monika; Collier, Kristen N.; Danielczok, Anja; Eppers, Oliver; Felgitsch, Laura; Garimella, Sarvesh; Grothe, Hinrich; Herenz, Paul; Hill, Thomas C. J.; Höhler, Kristina; Kanji, Zamin A.; Kiselev, Alexei; Koop, Thomas; Kristensen, Thomas B.; Krüger, Konstantin; Kulkarni, Gourihar; Levin, Ezra J. T.; Murray, Benjamin J.; Nicosia, Alessia; O'Sullivan, Daniel; Peckhaus, Andreas; Polen, Michael J.; Price, Hannah C.; Reicher, Naama; Rothenberg, Daniel A.; Rudich, Yinon; Santachiara, Gianni; Schiebel, Thea; Schrod, Jann; Seifried, Teresa M.; Stratmann, Frank; Sullivan, Ryan C.; Suski, Kaitlyn J.; Szakáll, Miklós; Taylor, Hans P.; Ullrich, Romy; Vergara-Temprado, Jesus; Wagner, Robert; Whale, Thomas F.; Weber, Daniel; Welti, André; Wilson, Theodore W.; Wolf, Martin J.; Zenker, Jake
    The second phase of the Fifth International Ice Nucleation Workshop (FIN-02) involved the gathering of a large number of researchers at the Karlsruhe Institute of Technology's Aerosol Interactions and Dynamics of the Atmosphere (AIDA) facility to promote characterization and understanding of ice nucleation measurements made by a variety of methods used worldwide. Compared to the previous workshop in 2007, participation was doubled, reflecting a vibrant research area. Experimental methods involved sampling of aerosol particles by direct processing ice nucleation measuring systems from the same volume of air in separate experiments using different ice nucleating particle (INP) types, and collections of aerosol particle samples onto filters or into liquid for sharing amongst measurement techniques that post-process these samples. In this manner, any errors introduced by differences in generation methods when samples are shared across laboratories were mitigated. Furthermore, as much as possible, aerosol particle size distribution was controlled so that the size limitations of different methods were minimized. The results presented here use data from the workshop to assess the comparability of immersion freezing measurement methods activating INPs in bulk suspensions, methods that activate INPs in condensation and/or immersion freezing modes as single particles on a substrate, continuous flow diffusion chambers (CFDCs) directly sampling and processing particles well above water saturation to maximize immersion and subsequent freezing of aerosol particles, and expansion cloud chamber simulations in which liquid cloud droplets were first activated on aerosol particles prior to freezing. The AIDA expansion chamber measurements are expected to be the closest representation to INP activation in atmospheric cloud parcels in these comparisons, due to exposing particles freely to adiabatic cooling. 
The different particle types used as INPs included the minerals illite NX and potassium feldspar (K-feldspar), two natural soil dusts representative of arable sandy loam (Argentina) and highly erodible sandy dryland (Tunisia) soils, respectively, and a bacterial INP (Snomax®). Considered together, the agreement among post-processed immersion freezing measurements of the numbers and fractions of particles active at different temperatures following bulk collection of particles into liquid was excellent, with possible temperature uncertainties inferred to be a key factor in determining INP uncertainties. Collection onto filters for rinsing versus directly into liquid in impingers made little difference. For methods that activated collected single particles on a substrate at a controlled humidity at or above water saturation, agreement with immersion freezing methods was good in most cases, but was biased low in a few others for reasons that have not been resolved, but could relate to water vapor competition effects. Amongst CFDC-style instruments, various factors requiring (variable) higher supersaturations to achieve equivalent immersion freezing activation dominate the uncertainty between these measurements, and for comparison with bulk immersion freezing methods. 
When operated above water saturation to include assessment of immersion freezing, CFDC measurements often measured at or above the upper bound of immersion freezing device measurements, but often underestimated INP concentration in comparison to an immersion freezing method that first activates all particles into liquid droplets prior to cooling (the PIMCA-PINC device, or Portable Immersion Mode Cooling chAmber-Portable Ice Nucleation Chamber), and typically slightly underestimated INP number concentrations in comparison to cloud parcel expansions in the AIDA chamber; this can be largely mitigated when it is possible to raise the relative humidity to sufficiently high values in the CFDCs, although this is not always possible operationally. Correspondence of measurements of INPs among direct sampling and post-processing systems varied depending on the INP type. Agreement was best for Snomax® particles in the temperature regime colder than -10°C, where their ice nucleation activity is nearly maximized and changes very little with temperature. At temperatures warmer than -10°C, Snomax® INP measurements (all via freezing of suspensions) demonstrated discrepancies consistent with previous reports of the instability of its protein aggregates that appear to make it less suitable as a calibration INP at these temperatures. For Argentinian soil dust particles, there was excellent agreement across all measurement methods; measurements ranged within 1 order of magnitude for INP number concentrations, active fractions and calculated active site densities over a 25 to 30°C range and a corresponding 5 to 8 orders of magnitude change in number concentrations. This was also the case for all temperatures warmer than -25°C in Tunisian dust experiments. 
In contrast, discrepancies in measurements of INP concentrations or active site densities that exceeded 2 orders of magnitude across a broad range of temperature measurements found at temperatures warmer than -25°C in a previous study were replicated for illite NX. Discrepancies also exceeded 2 orders of magnitude at temperatures of -20 to -25°C for potassium feldspar (K-feldspar), but these coincided with the range of temperatures at which INP concentrations increase rapidly at approximately an order of magnitude per 2°C cooling for K-feldspar. These few discrepancies did not outweigh the overall positive outcomes of the workshop activity, nor the future utility of this data set or future similar efforts for resolving remaining measurement issues. Measurements of the same materials were repeatable over the time of the workshop and demonstrated strong consistency with prior studies, as reflected by agreement of data broadly with parameterizations of different specific or general (e.g., soil dust) aerosol types. The divergent measurements of the INP activity of illite NX by direct versus post-processing methods were not repeated for other particle types, and the Snomax® data demonstrated that, at least for a biological INP type, there is no expected measurement bias between bulk collection and direct immediately processed freezing methods to as warm as -10°C. Since particle size ranges were limited for this workshop, it can be expected that for atmospheric populations of INPs, measurement discrepancies will appear due to the different capabilities of methods for sampling the full aerosol size distribution, or due to limitations on achieving sufficient water supersaturations to fully capture immersion freezing in direct processing instruments. Overall, this workshop presents an improved picture of present capabilities for measuring INPs compared to past workshops, and provides direction toward addressing remaining measurement issues.
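For the bulk suspension (immersion freezing) methods compared above, the link from an observed frozen fraction of droplets to a cumulative INP concentration is commonly made with Vali's formula, K(T) = -ln(1 - f(T)) / V_drop. A minimal sketch; the droplet volume and frozen counts are made-up example numbers:

```python
import math

def inp_per_litre_water(n_frozen, n_total, drop_volume_litre):
    """Cumulative INP concentration per litre of suspension at a given
    temperature, from the frozen fraction of droplets (Vali's formula):
    K(T) = -ln(1 - f) / V_drop, which corrects for droplets that may
    contain more than one INP.
    """
    frozen_fraction = n_frozen / n_total
    return -math.log(1.0 - frozen_fraction) / drop_volume_litre

# Example: 30 of 100 one-microlitre droplets frozen at some temperature
k = inp_per_litre_water(30, 100, 1e-6)
```

Dividing such a concentration by the particle surface area per litre of suspension yields the active site density n_s discussed in the abstract.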
  • Item
    Improving the LPJmL4-SPITFIRE vegetation–fire model for South America using satellite data
    (Katlenburg-Lindau : Copernicus, 2019) Drüke, Markus; Forkel, Matthias; von Bloh, Werner; Sakschewski, Boris; Cardoso, Manoel; Bustamante, Mercedes; Kurths, Jürgen; Thonicke, Kirsten
    Vegetation fires influence global vegetation distribution, ecosystem functioning, and global carbon cycling. Specifically in South America, changes in fire occurrence together with land-use change accelerate ecosystem fragmentation and increase the vulnerability of tropical forests and savannas to climate change. Dynamic global vegetation models (DGVMs) are valuable tools to estimate the effects of fire on ecosystem functioning and carbon cycling under future climate changes. However, most fire-enabled DGVMs have problems in capturing the magnitude, spatial patterns, and temporal dynamics of burned area as observed by satellites. As fire is controlled by the interplay of weather conditions, vegetation properties, and human activities, fire modules in DGVMs can be improved in various aspects. In this study we focus on improving the controls of climate and hence fuel moisture content on fire danger in the LPJmL4-SPITFIRE DGVM in South America, especially for the Brazilian fire-prone biomes of Caatinga and Cerrado. We therefore test two alternative model formulations (standard Nesterov Index and a newly implemented water vapor pressure deficit) for climate effects on fire danger within a formal model–data integration setup where we estimate model parameters against satellite datasets of burned area (GFED4) and aboveground biomass of trees. Our results show that the optimized model improves the representation of spatial patterns and the seasonal to interannual dynamics of burned area especially in the Cerrado and Caatinga regions. In addition, the model improves the simulation of aboveground biomass and the spatial distribution of plant functional types (PFTs). We obtained the best results by using the water vapor pressure deficit (VPD) for the calculation of fire danger. The VPD includes, in comparison to the Nesterov Index, a representation of the air humidity and the vegetation density. 
This work shows the successful application of a systematic model–data integration setup, as well as the integration of a new fire danger formulation, in order to optimize a process-based fire-enabled DGVM. It further highlights the potential of this approach to achieve a new level of accuracy in comprehensive global fire modeling and prediction.
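The vapour pressure deficit used above as the improved fire-danger predictor is, in its general form, the gap between saturation and actual vapour pressure. A short sketch using the common Magnus approximation; this is the textbook formulation, not necessarily the exact implementation in LPJmL4-SPITFIRE:

```python
import math

def saturation_vapour_pressure_hpa(temp_c):
    """Saturation vapour pressure over water (hPa), Magnus approximation."""
    return 6.1094 * math.exp(17.625 * temp_c / (temp_c + 243.04))

def vapour_pressure_deficit_hpa(temp_c, rel_humidity):
    """VPD (hPa) from air temperature (deg C) and relative humidity (0-1):
    the drier and hotter the air, the larger the deficit and hence the
    stronger the drying of fuels."""
    return saturation_vapour_pressure_hpa(temp_c) * (1.0 - rel_humidity)

# Example: a hot, dry afternoon typical of fire-prone conditions
vpd = vapour_pressure_deficit_hpa(temp_c=35.0, rel_humidity=0.25)
```

Unlike the Nesterov Index, which accumulates temperature and dew-point information over rain-free days, this instantaneous quantity responds directly to air humidity.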
  • Item
    Global emissions pathways under different socioeconomic scenarios for use in CMIP6: a dataset of harmonized emissions trajectories through the end of the century
    (Katlenburg-Lindau : Copernicus, 2019) Gidden, Matthew J.; Riahi, Keywan; Smith, Steven J.; Fujimori, Shinichiro; Luderer, Gunnar; Kriegler, Elmar; van Vuuren, Detlef P.; van den Berg, Maarten; Feng, Leyang; Klein, David; Calvin, Katherine; Doelman, Jonathan C.; Frank, Stefan; Fricko, Oliver; Harmsen, Mathijs; Hasegawa, Tomoko; Havlik, Petr; Hilaire, Jérôme; Hoesly, Rachel; Horing, Jill; Popp, Alexander; Stehfest, Elke; Takahashi, Kiyoshi
    We present a suite of nine scenarios of future emissions trajectories of anthropogenic sources, a key deliverable of the ScenarioMIP experiment within CMIP6. Integrated assessment model results for 14 different emissions species and 13 emissions sectors are provided for each scenario with consistent transitions from the historical data used in CMIP6 to future trajectories using automated harmonization before being downscaled to provide higher emissions source spatial detail. We find that the scenarios span a wide range of end-of-century radiative forcing values, thus making this set of scenarios ideal for exploring a variety of warming pathways. The set of scenarios is bounded on the low end by a 1.9 W m−2 scenario, ideal for analyzing a world with end-of-century temperatures well below 2 °C, and on the high end by an 8.5 W m−2 scenario, resulting in an increase in warming of nearly 5 °C over pre-industrial levels. Between these two extremes, scenarios are provided such that differences between forcing outcomes provide statistically significant regional temperature outcomes to maximize their usefulness for downstream experiments within CMIP6. A wide range of scenario
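Harmonization, as used above, aligns model trajectories with a common historical data point while preserving each scenario's long-term signal; a frequent approach is a ratio correction that converges to 1 over time. A minimal sketch under that assumption; the convergence rule, years, and numbers are illustrative, not the exact algorithm of the CMIP6 harmonization workflow:

```python
def harmonize_ratio(model_values, years, hist_value,
                    harm_year=2015, conv_year=2050):
    """Scale a model emissions trajectory so it matches a historical value
    in the harmonization year, with the correction factor converging
    linearly to 1 by the convergence year (illustrative method)."""
    ratio = hist_value / model_values[years.index(harm_year)]
    out = []
    for y, v in zip(years, model_values):
        if y <= harm_year:
            factor = ratio                      # full correction
        elif y >= conv_year:
            factor = 1.0                        # model trajectory unchanged
        else:
            w = (y - harm_year) / (conv_year - harm_year)
            factor = ratio + w * (1.0 - ratio)  # linear fade-out
        out.append(v * factor)
    return out

# Example: model says 10 Gt in 2015, but the historical record says 12 Gt
years = [2015, 2030, 2050, 2100]
harmonized = harmonize_ratio([10.0, 8.0, 5.0, 2.0], years, hist_value=12.0)
```

The harmonized series starts exactly on the historical value and returns to the raw model trajectory by the convergence year.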
  • Item
    NDCmitiQ v1.0.0: a tool to quantify and analyse greenhouse gas mitigation targets
    (Katlenburg-Lindau : Copernicus, 2021-9-14) Günther, Annika; Gütschow, Johannes; Jeffery, Mairi Louise
    Parties to the Paris Agreement (PA, 2015) outline their planned contributions towards achieving the PA temperature goal to “hold […] the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C” (Article 2.1.a, PA) in their nationally determined contributions (NDCs). Most NDCs include targets to mitigate national greenhouse gas (GHG) emissions, which need to be quantified to assess, inter alia, whether the current NDCs collectively put us on track to reach the PA temperature goals or the gap in ambition to do so. We implemented the new open-source tool “NDCmitiQ” to quantify GHG mitigation targets defined in the NDCs for all countries with quantifiable targets on a disaggregated level and to create corresponding national and global emissions pathways. In light of the 5-year update cycle of NDCs and the global stocktake, the quantification of NDCs is an ongoing task for which NDCmitiQ can be used, as calculations can easily be updated upon submission of new NDCs. In this paper, we describe the methodologies behind NDCmitiQ and quantification challenges we encountered by addressing a wide range of aspects, including target types and the input data from within NDCs; external time series of national emissions, population, and GDP; uniform approach vs. country specifics; share of national emissions covered by NDCs; how to deal with the Land Use, Land-Use Change and Forestry (LULUCF) component and the conditionality of pledges; and establishing pathways from single-year targets. For use in NDCmitiQ, we furthermore construct an emissions data set from the baseline emissions provided in the NDCs. Example use cases show how the tool can help to analyse targets on a national, regional, or global scale and to quantify uncertainties caused by a lack of clarity in the NDCs. 
Results confirm that the conditionality of targets and assumptions about economic growth dominate uncertainty in mitigated emissions on a global scale, which are estimated as 48.9–56.1 Gt CO2 eq. AR4 for 2030 (10th/90th percentiles, median: 51.8 Gt CO2 eq. AR4; excluding LULUCF and bunker fuels; submissions until 17 April 2020 and excluding the USA). We estimate that 77 % of global 2017 emissions were emitted from sectors and gases covered by these NDCs. Addressing all updated NDCs submitted by 31 December 2020 results in an estimated 45.6–54.1 Gt CO2 eq. AR4 (median: 49.6 Gt CO2 eq. AR4, now including the USA again) and increased coverage.
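One of the quantification steps listed above, establishing a pathway from a single-year target, can in its simplest form be a linear interpolation from the last historical data point to the target year. A sketch under that simplifying assumption; the interpolation rule and the numbers are illustrative, not NDCmitiQ's actual method:

```python
def pathway_from_single_year_target(last_hist_year, last_hist_value,
                                    target_year, target_value):
    """Linear emissions pathway between the last historical data point
    and a single-year mitigation target (illustrative method)."""
    slope = (target_value - last_hist_value) / (target_year - last_hist_year)
    return {y: last_hist_value + slope * (y - last_hist_year)
            for y in range(last_hist_year, target_year + 1)}

# Example: emissions of 50 Gt CO2 eq. in 2017, a target of 40 Gt in 2030
path = pathway_from_single_year_target(2017, 50.0, 2030, 40.0)
```

Real NDC targets complicate this picture considerably, e.g. through baseline-relative or intensity targets, partial sectoral coverage, and conditional pledges, which is precisely the spread the percentile ranges above quantify.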
  • Item
    Tobac 1.2: Towards a flexible framework for tracking and analysis of clouds in diverse datasets
    (Katlenburg-Lindau : Copernicus, 2019) Heikenfeld, Max; Marinescu, Peter J.; Christensen, Matthew; Watson-Parris, Duncan; Senf, Fabian; van den Heever, Susan C.; Stier, Philip
    We introduce tobac (Tracking and Object-Based Analysis of Clouds), a newly developed framework for tracking and analysing individual clouds in different types of datasets, such as cloud-resolving model simulations and geostationary satellite retrievals. The software has been designed to be used flexibly with any two- or three-dimensional time-varying input. The application of high-level data formats, such as Iris cubes or xarray arrays, for input and output allows for convenient use of metadata in the tracking analysis and visualisation. Comprehensive analysis routines are provided to derive properties like cloud lifetimes or statistics of cloud properties along with tools to visualise the results in a convenient way. The application of tobac is presented in two examples. We first track and analyse scattered deep convective cells based on maximum vertical velocity and the three-dimensional condensate mixing ratio field in cloud-resolving model simulations. We also investigate the performance of the tracking algorithm for different choices of time resolution of the model output. In the second application, we show how the framework can be used to effectively combine information from two different types of datasets by simultaneously tracking convective clouds in model simulations and in geostationary satellite images based on outgoing longwave radiation. The tobac framework provides a flexible new way to include the evolution of the characteristics of individual clouds in a range of important analyses like model intercomparison studies or model assessment based on observational data. © 2019 Author(s).
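At its core, threshold-based feature detection of the kind such tracking frameworks perform labels contiguous regions above a threshold in each time step and reduces each region to a representative position. A self-contained toy version in plain Python (tobac itself offers far more, e.g. multi-threshold detection, segmentation, and linking of features through time):

```python
def detect_features(field, threshold):
    """Find contiguous regions exceeding a threshold in a 2-D grid and
    return the (row, col) centre of each region -- a toy version of the
    threshold-based feature detection used in cloud-tracking frameworks.
    """
    nrows, ncols = len(field), len(field[0])
    seen = [[False] * ncols for _ in range(nrows)]
    centres = []
    for i in range(nrows):
        for j in range(ncols):
            if field[i][j] > threshold and not seen[i][j]:
                # Flood-fill one connected region (4-connectivity)
                stack, cells = [(i, j)], []
                seen[i][j] = True
                while stack:
                    r, c = stack.pop()
                    cells.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < nrows and 0 <= cc < ncols
                                and field[rr][cc] > threshold
                                and not seen[rr][cc]):
                            seen[rr][cc] = True
                            stack.append((rr, cc))
                # Unweighted centre of the region
                centres.append((sum(r for r, _ in cells) / len(cells),
                                sum(c for _, c in cells) / len(cells)))
    return centres

# Example: two separated block "clouds" in an otherwise empty field
field = [[0.0] * 10 for _ in range(10)]
for r in range(2, 4):
    for c in range(2, 4):
        field[r][c] = 5.0
for r in range(7, 9):
    for c in range(6, 9):
        field[r][c] = 8.0
centres = detect_features(field, threshold=1.0)
```

Tracking then amounts to matching such centres between consecutive time steps, e.g. by nearest-neighbour search within a maximum displacement.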
  • Item
    LandInG 1.0: a toolbox to derive input datasets for terrestrial ecosystem modelling at variable resolutions from heterogeneous sources
    (Katlenburg-Lindau : Copernicus, 2023) Ostberg, Sebastian; Müller, Christoph; Heinke, Jens; Schaphoff, Sibyll
    We present the Land Input Generator (LandInG) version 1.0, a new toolbox for generating input datasets for terrestrial ecosystem models (TEMs) from diverse and partially conflicting data sources. While LandInG 1.0 is applicable to process data for any TEM, it is developed specifically for the open-source dynamic global vegetation, hydrology, and crop growth model LPJmL (Lund-Potsdam-Jena with managed Land). The toolbox documents the sources and processing of data to model inputs and allows for easy changes to the spatial resolution. It is designed to make inconsistencies between different sources of data transparent so that users can make their own decisions on how to resolve these should they not be content with the default assumptions made here. As an example, we use the toolbox to create input datasets at 5 and 30 arcmin spatial resolution covering land, country, and region masks, soil, river networks, freshwater reservoirs, irrigation water distribution networks, crop-specific annual land use, fertilizer, and manure application. We focus on the toolbox describing the data processing rather than only publishing the datasets as users may want to make different choices for reconciling inconsistencies, aggregation, spatial extent, or similar. Also, new data sources or new versions of existing data become available continuously, and the toolbox approach allows for incorporating new data to stay up to date.
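Changing the spatial resolution, e.g. from 5 to 30 arcmin as in the example datasets above, amounts to aggregating 6 × 6 blocks of fine grid cells into one coarse cell. A minimal sketch using a block mean; this is one common choice, and the actual per-variable aggregation rules in LandInG may differ:

```python
def aggregate_grid(fine, factor):
    """Aggregate a 2-D grid to a coarser resolution by averaging
    factor x factor blocks (e.g. factor=6 for 5 -> 30 arcmin)."""
    nrows, ncols = len(fine), len(fine[0])
    assert nrows % factor == 0 and ncols % factor == 0
    coarse = []
    for i in range(0, nrows, factor):
        row = []
        for j in range(0, ncols, factor):
            block = [fine[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        coarse.append(row)
    return coarse

# Example: a 12x12 fine grid (cell value = row index) aggregated to 2x2
fine = [[float(r) for c in range(12)] for r in range(12)]
coarse = aggregate_grid(fine, 6)
```

Quantities like land-use shares average this way, whereas categorical inputs (e.g. country masks) would instead need a majority or area-dominance rule, which is exactly the kind of choice the toolbox makes transparent.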