Search Results

Now showing 1 - 10 of 13

Modeling forest plantations for carbon uptake with the LPJmL dynamic global vegetation model

2019, Braakhekke, Maarten C., Doelman, Jonathan C., Baas, Peter, Müller, Christoph, Schaphoff, Sibyll, Stehfest, Elke, van Vuuren, Detlef P.

We present an extension of the dynamic global vegetation model, Lund-Potsdam-Jena Managed Land (LPJmL), to simulate planted forests intended for carbon (C) sequestration. We implemented three functional types to simulate plantation trees in temperate, tropical, and boreal climates. The parameters of these functional types were optimized to fit target growth curves (TGCs). These curves represent the evolution of stemwood C over time in typical productive plantations and were derived by combining field observations and LPJmL estimates for equivalent natural forests. While the calibrated model underestimates stemwood C growth rates compared to the TGCs, it represents a substantial improvement over using natural forests to represent afforestation. Based on a simulation experiment in which we compared global natural forest versus global forest plantation, we found that forest plantations allow for much larger C uptake rates on the timescale of 100 years, with a maximum difference of a factor of 1.9 after about 54 years. In subsequent simulations for an ambitious but realistic scenario in which 650 Mha (14% of global managed land, 4.5% of global land surface) are converted to forest over 85 years, we found that natural forests take up 37 PgC versus 48 PgC for forest plantations. Comparing these results to estimates of the C sequestration required to achieve the 2°C climate target, we conclude that afforestation can offer a substantial contribution to climate mitigation. Full evaluation of afforestation as a climate change mitigation strategy requires an integrated assessment which considers all relevant aspects, including costs, biodiversity, and trade-offs with other land-use types. Our extended version of LPJmL can contribute to such an assessment by providing improved estimates of C uptake rates by forest plantations. © 2019 American Institute of Physics Inc. All rights reserved.
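The calibration step described above — tuning functional-type parameters so that simulated stemwood C tracks a target growth curve — can be sketched as a simple least-squares fit. The Chapman-Richards growth form, the parameter values, and the brute-force search below are illustrative assumptions for this sketch, not the LPJmL implementation:

```python
import math

def chapman_richards(t, A, k, p):
    """Stemwood carbon at stand age t (Chapman-Richards growth form)."""
    return A * (1.0 - math.exp(-k * t)) ** p

# Hypothetical target growth curve: stemwood C (tC/ha) at ages 0..100
A_true, k_true, p_true = 120.0, 0.05, 2.0
ages = list(range(0, 101, 5))
target = [chapman_richards(t, A_true, k_true, p_true) for t in ages]

def fit_k(ages, target, A, p):
    """Recover the rate parameter k by brute-force least squares."""
    best_k, best_sse = None, float("inf")
    for i in range(1, 200):
        k = i * 0.001
        sse = sum((chapman_richards(t, A, k, p) - y) ** 2
                  for t, y in zip(ages, target))
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

k_fit = fit_k(ages, target, A_true, p_true)  # recovers k close to 0.05
```

In practice such a fit would be run against observation-derived TGC points rather than synthetic ones, and over all functional-type parameters rather than a single rate constant.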

The effect of univariate bias adjustment on multivariate hazard estimates

2019, Zscheischler, Jakob, Fischer, Erich M., Lange, Stefan

Bias adjustment is often a necessity in estimating climate impacts because impact models usually rely on unbiased climate information, a requirement that climate model outputs rarely fulfil. Most currently used statistical bias-adjustment methods adjust each climate variable separately, even though impacts usually depend on multiple, potentially dependent variables. Human heat stress, for instance, depends on temperature and relative humidity, two variables that are often strongly correlated. Whether univariate bias-adjustment methods effectively improve estimates of impacts that depend on multiple drivers is largely unknown, and the lack of long-term impact data prevents a direct comparison between model outputs and observations for many climate-related impacts. Here we use two hazard indicators, heat stress and a simple fire risk indicator, as proxies for more sophisticated impact models. We show that univariate bias-adjustment methods such as univariate quantile mapping often cannot effectively reduce biases in multivariate hazard estimates; in some cases, they even increase biases. These cases typically occur (i) when hazards depend equally strongly on more than one climatic driver, (ii) when models exhibit biases in the dependence structure of drivers, and (iii) when univariate biases are relatively small. Using a perfect model approach, we further quantify the uncertainty in bias-adjusted hazard indicators due to internal variability and show how imperfect bias adjustment can amplify this uncertainty. Both issues can be addressed successfully with a statistical bias adjustment that corrects the multivariate dependence structure in addition to the marginal distributions of the climate drivers. Our results suggest that many currently modeled climate impacts are associated with uncertainties related to the choice of bias adjustment. We conclude that, in cases where impacts depend on multiple dependent climate variables, these uncertainties can be reduced using statistical bias-adjustment approaches that correct the variables' multivariate dependence structure. © 2019 Copernicus GmbH. All rights reserved.
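As a concrete illustration of why univariate methods can leave multivariate biases in place, the sketch below implements a minimal empirical quantile mapping for a single variable; the function name and the rank-matching scheme are assumptions for illustration, not the adjustment methods evaluated in the paper. Because each variable is mapped through its own marginal distribution only, any bias in the temperature-humidity dependence structure passes through unchanged:

```python
def quantile_map(model_hist, obs_hist, value):
    """Map one model value onto the observed distribution by matching
    empirical quantiles (minimal univariate quantile-mapping sketch)."""
    m, o = sorted(model_hist), sorted(obs_hist)
    # empirical rank of `value` within the model's historical distribution
    rank = sum(1 for x in m if x <= value)
    idx = max(0, min(len(o) - 1, rank - 1))
    return o[idx]

# Toy example: the model runs 2 units too warm everywhere
obs = list(range(10))          # observed values 0..9
model = [x + 2 for x in obs]   # biased model values 2..11
corrected = quantile_map(model, obs, 5)   # maps the biased value 5 back to 3
```

Applying such a map separately to temperature and humidity corrects each marginal distribution, but a heat-stress indicator computed from both can remain biased whenever the model misrepresents their joint dependence — the failure mode the paper documents.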

Large-scale electrical resistivity tomography in the Cheb Basin (Eger Rift) at an International Continental Drilling Program (ICDP) monitoring site to image fluid-related structures

2019, Nickschick, Tobias, Flechsig, Christina, Mrlina, Jan, Oppermann, Frank, Löbig, Felix, Günther, Thomas

The Cheb Basin, a region of ongoing swarm earthquake activity in the western Czech Republic, is characterized by intense carbon dioxide degassing along two known fault zones – the N–S-striking Počatky–Plesná fault zone (PPZ) and the NW–SE-striking Mariánské Lázně fault zone (MLF). The fluid pathways for the ascending CO2 of mantle origin are one of the subjects of the International Continental Scientific Drilling Program (ICDP) project “Drilling the Eger Rift”, in which several geophysical surveys are currently being carried out in this area to image the topmost hundreds of meters and assess the structural situation, as existing boreholes are not sufficiently deep to characterize it. As electrical resistivity is sensitive to the presence of conductive rock fractions such as liquid fluids, clay minerals, and metallic components, a large-scale dipole–dipole experiment using a special type of electrical resistivity tomography (ERT) was carried out in June 2017 in order to image fluid-relevant structures. We used permanently placed data loggers for voltage measurements in conjunction with moving high-power current sources to generate sufficiently strong signals that could be detected all along the 6.5 km long profile with 100 and 150 m dipole spacings. After extensive processing of the voltage and current time series using a selective stacking approach, the pseudo-section was inverted, resulting in a resistivity model that allows for reliable interpretation at depths of up to 1000 m. The subsurface resistivity image reveals the deposition and transition of the overlying Neogene Vildštejn and Cypris formations, but it also shows a very conductive basement of phyllites and granites that can be attributed to high salinity or to rock alteration by ascending fluids in the tectonically stressed basement. Distinct, narrow pathways for CO2 ascent are not observed with this kind of setup, which hints instead at wide degassing structures extending over several kilometers within the crust. We also acquired gravity and GPS data along this profile in order to constrain the ERT results. A gravity anomaly of ca. −9 mGal marks the deepest part of the Cheb Basin, where the ERT profile indicates a large accumulation of conductive rocks, indicating very deep weathering or alteration of the phyllitic basement due to the ascent of magmatic fluids such as CO2. Based on stratigraphic records and our results from this experiment, we propose a conceptual model in which certain lithologic layers act as caps for the ascending fluids, providing a basis for future drillings in the area aimed at studying and monitoring fluids.

Distinct element geomechanical modelling of the formation of sinkhole clusters within large-scale karstic depressions

2019, Al-Halbouni, Djamil, Holohan, Eoghan P., Taheri, Abbas, Watson, Robert A., Polom, Ulrich, Schöpfer, Martin P. J., Emam, Sacha, Dahm, Torsten

The 2-D distinct element method (DEM) code (PFC2D_V5) is used here to simulate the evolution of subsidence-related karst landforms, such as single and clustered sinkholes, and associated larger-scale depressions. Subsurface material in the DEM model is removed progressively to produce an array of cavities; this simulates a network of subsurface groundwater conduits growing by chemical/mechanical erosion. The growth of the cavity array is coupled mechanically to the gravitationally loaded surroundings, such that cavities can grow also in part by material failure at their margins, which in the limit can produce individual collapse sinkholes. Two end-member growth scenarios of the cavity array and their impact on surface subsidence were examined in the models: (1) cavity growth at the same depth level and growth rate; (2) cavity growth at progressively deepening levels with varying growth rates. These growth scenarios are characterised by differing stress patterns across the cavity array and its overburden, which are in turn an important factor for the formation of sinkholes and uvala-like depressions. For growth scenario (1), a stable compression arch is established around the entire cavity array, hindering sinkhole collapse into individual cavities and favouring block-wise, relatively even subsidence across the whole cavity array. In contrast, for growth scenario (2), the stress system is more heterogeneous, such that local stress concentrations exist around individual cavities, leading to stress interactions and local wall/overburden fractures. Consequently, sinkhole collapses occur in individual cavities, which results in uneven, differential subsidence within a larger-scale depression. Depending on material properties of the cavity-hosting material and the overburden, the larger-scale depression forms either by sinkhole coalescence or by widespread subsidence linked geometrically to the entire cavity array. 
The results from models with growth scenario (2) are in close agreement with surface morphological and subsurface geophysical observations from an evaporite karst area on the eastern shore of the Dead Sea.

The economically optimal warming limit of the planet

2019, Ueckerdt, Falko, Frieler, Katja, Lange, Stefan, Wenz, Leonie, Luderer, Gunnar, Levermann, Anders

Both climate-change damages and climate-change mitigation will incur economic costs. While the risk of severe damages increases with the level of global warming (Dell et al., 2014; IPCC, 2014b, 2018; Lenton et al., 2008), mitigation costs increase steeply with more stringent warming limits (IPCC, 2014a; Luderer et al., 2013; Rogelj et al., 2015). Here, we show that the global warming limit that minimizes this century's total economic costs of climate change lies between 1.9 and 2°C, if temperature changes continue to impact national economic growth rates as observed in the past and if instantaneous growth effects are neither compensated nor amplified by additional growth effects in the following years. The result is robust across a wide range of normative assumptions on the valuation of future welfare and inequality aversion. We combine estimates of climate-change impacts on economic growth for 186 countries (applying an empirical damage function from Burke et al., 2015) with mitigation costs derived from a state-of-the-art energy-economy-climate model with a wide range of highly resolved mitigation options (Kriegler et al., 2017; Luderer et al., 2013, 2015). Our purely economic assessment, even though it omits non-market damages, provides support for the international Paris Agreement on climate change. The political goal of limiting global warming to "well below 2 degrees" is thus also an economically optimal goal, given the above assumptions on adaptation and damage persistence. © 2019 Copernicus GmbH. All rights reserved.
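The trade-off the study quantifies — damages that grow with warming versus mitigation costs that rise steeply as the limit tightens — can be caricatured with a toy cost minimization. The functional forms and coefficients below are invented purely for illustration and have no connection to the paper's empirical damage function or energy-economy model:

```python
def total_cost(T):
    """Toy total cost of holding warming to limit T (°C): quadratic
    damages plus mitigation costs that diverge as T approaches 1.0 °C."""
    damages = T ** 2
    mitigation = 1.0 / (T - 1.0)
    return damages + mitigation

# Scan candidate warming limits between 1.1 and 4.0 °C
grid = [1.1 + 0.01 * i for i in range(291)]
T_opt = min(grid, key=total_cost)   # interior minimum of the toy model
```

Even this caricature shows the qualitative result: with convex damages and steeply rising mitigation costs, the cost-minimizing limit is an interior optimum rather than either extreme.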

The effect of overshooting 1.5 °C global warming on the mass loss of the Greenland ice sheet

2018, Rückamp, Martin, Falk, Ulrike, Frieler, Katja, Lange, Stefan, Humbert, Angelika

Sea-level rise associated with changing climate is expected to pose a major challenge for societies. Based on the efforts of COP21 to limit global warming to 2.0 °C or even 1.5 °C by the end of the 21st century (Paris Agreement), we simulate the future contribution of the Greenland ice sheet (GrIS) to sea-level change under the low-emission Representative Concentration Pathway (RCP) 2.6 scenario. The Ice Sheet System Model (ISSM) with higher-order approximation is used and initialized with a hybrid approach of spin-up and data assimilation. For three general circulation models (GCMs: HadGEM2-ES, IPSL-CM5A-LR, MIROC5) the projections are conducted up to 2300, with forcing fields for surface mass balance (SMB) and ice surface temperature (Ts) computed by the surface energy balance model of intermediate complexity (SEMIC). The projected sea-level rise ranges between 21–38 mm by 2100 and 36–85 mm by 2300. According to the three GCMs used, global warming will exceed 1.5 °C early in the 21st century. The RCP2.6 peak-and-decline scenario is therefore manually adjusted in another set of experiments to suppress the 1.5 °C overshooting effect. These scenarios show a sea-level contribution that is on average about 38 % and 31 % less by 2100 and 2300, respectively. For some experiments, the rate of mass loss in the 23rd century does not exclude a stable ice sheet in the future. This is due to a spatially integrated SMB that remains positive and reaches values similar to the present day in the latter half of the simulation period. Although the mean SMB is reduced in the warmer climate, a future steady-state ice sheet with lower surface elevation and hence volume might be possible. Our results indicate that uncertainties in the projections stem from the underlying GCM climate data used to calculate the surface mass balance. However, the RCP2.6 scenario will lead to significant changes in the GrIS, including elevation changes of up to 100 m. The sea-level contribution estimated in this study may serve as a lower bound for the RCP2.6 scenario, as the currently observed sea-level rise is not reached in any of the experiments; this is attributed to processes (e.g. ocean forcing) not yet represented by the model but known to play a major role in GrIS mass loss.

Effects of finite source rupture on landslide triggering: the 2016 Mw 7.1 Kumamoto earthquake

2019, von Specht, Sebastian, Ozturk, Ugur, Veh, Georg, Cotton, Fabrice, Korup, Oliver

The propagation of a seismic rupture on a fault introduces spatial variations in the seismic wave field surrounding the fault. This directivity effect results in larger shaking amplitudes in the rupture propagation direction. Its seismic radiation pattern also causes amplitude variations between the strike-normal and strike-parallel components of horizontal ground motion. We investigated the landslide response to these effects during the 2016 Kumamoto earthquake (Mw 7.1) in central Kyushu (Japan). Although the distribution of some 1500 earthquake-triggered landslides as a function of rupture distance is consistent with the observed Arias intensity, the landslides were more concentrated to the northeast of the southwest–northeast striking rupture. We examined several landslide susceptibility factors: hillslope inclination, the median amplification factor (MAF) of ground shaking, lithology, land cover, and topographic wetness. None of these factors sufficiently explains the landslide distribution or orientation (aspect), although the landslide head scarps have an elevated hillslope inclination and MAF. We propose a new physics-based ground-motion model (GMM) that accounts for the seismic rupture effects, and we demonstrate that the low-frequency seismic radiation pattern is consistent with the overall landslide distribution. Its spatial pattern is influenced by the rupture directivity effect, whereas landslide aspect is influenced by amplitude variations between the fault-normal and fault-parallel motion at frequencies <2 Hz. This azimuth dependence implies that comparable landslide concentrations can occur at different distances from the rupture. This quantitative link between the prevalent landslide aspect and the low-frequency seismic radiation pattern can improve coseismic landslide hazard assessment.

Freshwater resources under success and failure of the Paris climate agreement

2019, Heinke, Jens, Müller, Christoph, Lannerstad, Mats, Gerten, Dieter, Lucht, Wolfgang

Population growth will in many regions increase the pressure on water resources and likely increase the number of people affected by water scarcity. In parallel, global warming causes hydrological changes which will affect freshwater supply for human use in many regions. This study estimates the exposure of future population to severe hydrological changes relevant from a freshwater resource perspective at different levels of global mean temperature rise above pre-industrial level (ΔTglob). The analysis is complemented by an assessment of water scarcity that would occur without additional climate change due to population change alone; this is done to identify the population groups that are faced with particularly high adaptation challenges. The results are analysed in the context of success and failure of implementing the Paris Agreement to evaluate how climate mitigation can reduce the future number of people exposed to severe hydrological change. The results show that without climate mitigation efforts, in the year 2100 about 4.9 billion people in the SSP2 population scenario would more likely than not be exposed to severe hydrological change, and about 2.1 billion of them would be faced with particularly high adaptation challenges due to already prevailing water scarcity. Limiting warming to 2 °C by a successful implementation of the Paris Agreement would strongly reduce these numbers to 615 million and 290 million, respectively. At the regional scale, substantial water-related risks remain at 2 °C, with more than 12% of the population exposed to severe hydrological change and high adaptation challenges in Latin America and the Middle East and North Africa region. Constraining ΔTglob to 1.5 °C would limit this share to about 5% in these regions. © 2019 Author(s).

A remote-control datalogger for large-scale resistivity surveys and robust processing of its signals using a software lock-in approach

2018, Oppermann, Frank, Günther, Thomas

We present a new versatile datalogger that can be used for a wide range of applications in geosciences. It is adjustable in signal strength and sampling frequency, is battery-saving, and can be remotely controlled over a Global System for Mobile Communications (GSM) connection, so that it saves running costs, particularly in monitoring experiments. The internet connection allows for checking functionality, controlling schedules, and optimizing pre-amplification. We mainly use it for large-scale electrical resistivity tomography (ERT), where it independently registers voltage time series on three channels while a square-wave current is injected. For the analysis of these time series we present a new approach based on the lock-in (LI) method, mainly known from electronic circuits. The method searches for the working point (phase) using three different functions based on a mask signal, and determines the amplitude using a direct current (DC) correlation function. We use synthetic data with different types of noise to compare the new method with existing approaches, i.e. selective stacking and a modified fast Fourier transformation (FFT)-based approach that assumes a 1/f noise characteristic. All methods give comparable results, but the LI method outperforms the well-established stacking method. The FFT approach can be even better, but only if the noise strictly follows the assumed characteristic. If overshoots are present in the data, which is typical in the field, FFT performs worse even with good data, which is why we conclude that the new LI approach is the most robust solution. This is also confirmed by a field data set from a long 2-D ERT profile.
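The lock-in principle the abstract describes — correlating the measured voltage with a reference mask at the working-point phase to recover the injected signal's amplitude — can be sketched for a synthetic square wave. The signal parameters, noise level, and function name below are illustrative assumptions, not the authors' processing chain:

```python
import random

def lockin_amplitude(signal, period, phase):
    """Correlate the signal with a ±1 square-wave mask at the given
    working-point phase; the mean correlation estimates the amplitude."""
    total = 0.0
    for i, v in enumerate(signal):
        mask = 1.0 if ((i - phase) % period) < period // 2 else -1.0
        total += v * mask
    return total / len(signal)

# Synthetic injected square wave (amplitude 2.0) buried in Gaussian noise
random.seed(0)
period, amp = 100, 2.0
signal = [amp * (1.0 if i % period < period // 2 else -1.0)
          + random.gauss(0.0, 0.3) for i in range(10_000)]
amp_est = lockin_amplitude(signal, period, phase=0)
```

Because uncorrelated noise averages toward zero in the correlation, the estimate converges on the true amplitude as the record length grows; the full method additionally has to find the unknown phase, which this sketch takes as given.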

A numerical sensitivity study of how permeability, porosity, geological structure, and hydraulic gradient control the lifetime of a geothermal reservoir

2019, Bauer, Johanna F., Krumbholz, Michael, Luijendijk, Elco, Tanner, David C.

Geothermal energy is an important and sustainable resource that has more potential than is currently utilized. Whether or not a deep geothermal resource can be exploited depends, besides temperature, mostly on the utilizable reservoir volume over time, which in turn largely depends on petrophysical parameters. We show, using over 1000 (n=1027) 4-D finite-element models of a simple geothermal doublet, that the lifetime of a reservoir is a complex function of its geological parameters, their heterogeneity, and the background hydraulic gradient (BHG). In our models, we test the effects of porosity, permeability, and BHG in an isotropic medium. Furthermore, we simulate the effect of permeability contrast and anisotropy induced by layering, fractures, and a fault. We quantify the lifetime of the reservoir by measuring the time to thermal breakthrough, i.e. how many years pass before the temperature of the produced fluid falls below the 100 °C threshold. The results of our sensitivity study attest to the positive effect of high porosity; however, high permeability and BHG can combine to outperform the former. Particular configurations of all the parameters can cause either early thermal breakthrough or extreme longevity of the reservoir. For example, the presence of high-permeability fractures, e.g. in a fault damage zone, can provide initially high yields, but it channels fluid flow and therefore dramatically restricts the exploitable reservoir volume. We demonstrate that the magnitude and orientation of the BHG, provided permeability is sufficiently high, are the prime parameters that affect the lifetime of a reservoir. Our numerical experiments also show that both low and high BHGs can be outperformed by comparatively small variations in permeability contrast (10³) and fracture-induced permeability anisotropy (10¹), which thus strongly affect the performance of geothermal reservoirs.
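The lifetime metric used in the study — time to thermal breakthrough — reduces to finding the first simulation year in which the produced-fluid temperature crosses the 100 °C threshold. The cooling history below is a made-up exponential decline for illustration, not model output:

```python
import math

def thermal_breakthrough(temps_by_year, threshold=100.0):
    """Return the first simulation year in which the produced-fluid
    temperature drops below the threshold (None if it never does)."""
    for year, temp in enumerate(temps_by_year):
        if temp < threshold:
            return year
    return None

# Hypothetical cooling history: exponential decline from 140 °C toward 80 °C
temps = [80.0 + 60.0 * math.exp(-year / 40.0) for year in range(200)]
lifetime = thermal_breakthrough(temps)   # first year below 100 °C
```

In the actual sensitivity study this scalar would be extracted from each of the 1027 finite-element runs and then compared across the porosity, permeability, and BHG configurations.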