Search Results

Data Protection Impact Assessments in Practice: Experiences from Case Studies

2022, Friedewald, Michael, Schiering, Ina, Martin, Nicholas, Hallinan, Dara, Katsikas, Sokratis, Lambrinoudakis, Costas, Cuppens, Nora, Mylopoulos, John, Kalloniatis, Christos, Meng, Weizhi, Furnell, Steven, Pallas, Frank, Pohle, Jörg, Sasse, M. Angela, Abie, Habtamu, Ranise, Silvio, Verderame, Luca, Cambiaso, Enrico, Vidal, Jorge Maestre, Monge, Marco Antonio Sotelo

In the context of the project "A Data Protection Impact Assessment (DPIA) Tool for Practical Use in Companies and Public Administration", an operationalization of Data Protection Impact Assessments was developed based on the approach of Forum Privatheit. This operationalization was tested and refined in twelve trials with startups, small and medium-sized enterprises, corporations, and public bodies. This paper presents the operationalization and summarizes the experiences from these trials.

The Concept of Identifiability in ML Models

2022, von Maltzan, Stephanie, Bastieri, Denis, Wills, Gary, Kacsuk, Péter, Chang, Victor

Recent research indicates that the machine learning process can be reversed by adversarial attacks, which can be used to derive personal information from the training data. The supposedly anonymising machine-learning process thus amounts to pseudonymisation and is therefore subject to technical and organisational measures. Consequently, the unexamined belief in anonymisation as a guarantor of privacy cannot easily be upheld. It is therefore crucial to measure privacy through the lens of adversarial attacks, to distinguish precisely between personal and non-personal data, and above all to determine whether ML models constitute pseudonyms of the training data.
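To illustrate the kind of adversarial attack the abstract alludes to, the sketch below implements a loss-threshold membership inference attack against a toy model: an attacker who can query per-example loss guesses that low-loss examples were part of the training set. The toy model, data, and threshold are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# A model that fits its training ("member") examples much better than
# unseen ("non-member") examples leaks membership information.
# The model, data, and threshold here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: members follow y = 2x exactly; non-members are noisy.
members = rng.normal(0.0, 1.0, size=(50, 2))
non_members = rng.normal(0.0, 1.0, size=(50, 2))
members[:, 1] = 2.0 * members[:, 0]
non_members[:, 1] = 2.0 * non_members[:, 0] + rng.normal(0.0, 1.0, 50)

# "Train": estimate the slope from members via ordinary least squares.
w = np.sum(members[:, 0] * members[:, 1]) / np.sum(members[:, 0] ** 2)

def per_example_loss(pairs):
    # Squared error of each (x, y) pair under the trained model.
    return (pairs[:, 1] - w * pairs[:, 0]) ** 2

# Attack: guess "member" whenever the loss falls below a threshold.
threshold = 0.1
guesses_members = per_example_loss(members) < threshold
guesses_non = per_example_loss(non_members) < threshold

# Fraction of the 100 examples whose membership status is guessed correctly.
accuracy = (guesses_members.sum() + (~guesses_non).sum()) / 100
print(f"attack accuracy: {accuracy:.2f}")
```

Because the attacker recovers membership, and with it personal information tied to individual training records, well above chance, such attacks support the abstract's argument that a trained model can act as a pseudonym of its training data rather than an anonymisation of it.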