Search Results

Now showing 1 - 2 of 2

Unintentional unfairness when applying new greenhouse gas emissions metrics at country level

2019, Rogelj, Joeri, Schleussner, Carl-Friedrich

The 2015 Paris Agreement sets out that rapid reductions in greenhouse gas (GHG) emissions are needed to keep global warming to safe levels. A new approach (known as GWP*) has been suggested to compare the contributions of long- and short-lived GHGs, providing a close link between cumulative CO2-equivalent emissions and total warming. However, comparison factors for non-CO2 GHGs under the GWP* metric depend on past emissions, and hence raise questions of equity and fairness when applied at any level other than the global one. The use of GWP* would put most developing countries at a disadvantage compared to developed countries, because under GWP* countries with high historical emissions of short-lived GHGs are exempted from accounting for the avoidable future warming caused by sustaining these emissions. We show that when various established equity or fairness criteria are applied to GWP* (defined here as eGWP*), perceived national non-CO2 emissions vary by more than an order of magnitude, particularly in countries with high methane emissions such as New Zealand. We also show that national emission estimates based on GWP* are very sensitive to arbitrary choices made by countries, and thereby facilitate the creation of loopholes when CO2-equivalent emissions based on the GWP* concept are traded between countries that use different approaches. In light of such equity-dependent accounting differences, GHG metrics like GWP* should only be used at the global level. A common, transparent and equity-neutral accounting metric is vital for the Paris Agreement's effectiveness and its environmental integrity.
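The dependence on past emissions is easiest to see in the GWP* conversion formula itself. Below is a minimal sketch assuming the widely used parameterisation from Cain et al. (2019), E_CO2-we(t) = GWP100 x (4.0 x E(t) - 3.75 x E(t-20)); the coefficients, GWP100 value, and emission numbers are illustrative assumptions, not taken from the abstract above.

```python
# Sketch of the GWP* "CO2-warming-equivalent" conversion for a short-lived
# GHG such as methane. Assumed parameterisation (Cain et al., 2019):
#   E_CO2-we(t) = GWP100 * (4.0 * E(t) - 3.75 * E(t-20))
# All numbers below are illustrative, not drawn from the paper itself.

GWP100_CH4 = 28.0  # 100-year global warming potential of methane (AR5 value)


def gwp_star_co2we(e_now: float, e_20yr_ago: float,
                   gwp100: float = GWP100_CH4) -> float:
    """CO2-warming-equivalent emissions (t CO2-we/yr) under GWP*.

    e_now      -- current annual CH4 emissions (t CH4/yr)
    e_20yr_ago -- annual CH4 emissions 20 years earlier (t CH4/yr)
    """
    return gwp100 * (4.0 * e_now - 3.75 * e_20yr_ago)


# Two countries with identical *current* methane emissions: one has sustained
# high emissions for decades, the other only started emitting recently.
legacy_emitter = gwp_star_co2we(e_now=100.0, e_20yr_ago=100.0)
new_emitter = gwp_star_co2we(e_now=100.0, e_20yr_ago=0.0)

print(f"legacy emitter: {legacy_emitter:.0f} t CO2-we")  # 700
print(f"new emitter:    {new_emitter:.0f} t CO2-we")     # 11200
```

The same current methane flow is reported as 700 t CO2-we by the historical emitter but 11,200 t CO2-we by the newcomer, which is the equity concern the abstract raises: sustained historical emissions largely drop out of the accounting.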


Bias in data-driven artificial intelligence systems - An introductory survey

2020, Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, Maria-Esther, Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., Broelemann, K., Kasneci, G., Tiropanis, T., Staab, S.

Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have far-reaching impacts on individuals and society. Their decisions might affect everyone, everywhere, and at any time, entailing concerns about potential human rights issues. It is therefore necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in their design, training, and deployment, to ensure social good while still benefiting from the huge potential of AI technology. The goal of this survey is to provide a broad multidisciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions, and to suggest new research directions towards approaches well-grounded in a legal framework. In this survey, we focus on data-driven AI, as a large part of AI is nowadays powered by (big) data and powerful machine learning algorithms. Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race, sex, and so forth.

This article is categorized under:
- Commercial, Legal, and Ethical Issues > Fairness in Data Mining
- Commercial, Legal, and Ethical Issues > Ethical Considerations
- Commercial, Legal, and Ethical Issues > Legal Issues
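To make "prejudiced decisions on the basis of demographic features" concrete, here is a minimal sketch of one common bias measure, the statistical (demographic) parity difference between two groups' positive-outcome rates. The metric is only one of many notions the survey covers, and the group names, decisions, and numbers below are invented for illustration.

```python
# Sketch: statistical parity difference between two demographic groups.
# Data and group labels are invented for illustration only.

from collections import defaultdict

# (demographic group, model decision) pairs; 1 = positive outcome granted
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

# Per-group rate of positive decisions, and the absolute gap between groups
rate = {g: positives[g] / totals[g] for g in totals}
parity_gap = abs(rate["group_a"] - rate["group_b"])

print(f"positive rates: {rate}")                          # a: 0.75, b: 0.25
print(f"statistical parity difference: {parity_gap:.2f}")  # 0.50
```

A gap of 0.50 means one group receives the positive outcome three times as often as the other; whether that gap reflects unfair bias or a legitimate difference is exactly the kind of technical and legal question the survey examines.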