The Concept of Identifiability in ML Models

Date
2022
Publisher
Setúbal : SciTePress - Science and Technology Publications, Lda.
Abstract

Recent research indicates that the machine learning process can be reversed by adversarial attacks. Such attacks can be used to derive personal information from the training data. The supposedly anonymising machine learning process therefore amounts to a process of pseudonymisation and is, as such, subject to technical and organisational measures. Consequently, the unexamined belief in anonymisation as a guarantor of privacy cannot easily be upheld. It is therefore crucial to measure privacy through the lens of adversarial attacks, to distinguish precisely between personal and non-personal data and, above all, to determine whether ML models represent pseudonyms of the training data.
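To make the abstract's claim concrete: one common instantiation of such an adversarial attack is membership inference, where an attacker tests whether a specific record was part of a model's training data. The sketch below is illustrative only and is not from the paper; it assumes a scikit-learn classifier and uses the confidence-thresholding heuristic of Yeom et al. (2018) on synthetic data. The model, data split, and threshold are all hypothetical choices.

```python
# Illustrative sketch: confidence-threshold membership inference
# (Yeom et al., 2018). The paper itself names no specific attack;
# model, data, and threshold here are hypothetical assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for personal data: half used for training
# (members), half held out (non-members).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5,
                                            random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_in, y_in)

def true_label_confidence(model, X, y):
    # Confidence the model assigns to each record's true label.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Members tend to receive higher confidence than non-members;
# thresholding that gap is the attack.
threshold = 0.9
guess_in = true_label_confidence(model, X_in, y_in) >= threshold
guess_out = true_label_confidence(model, X_out, y_out) >= threshold

# Accuracy above 0.5 means the trained model still points back to
# individual training records, i.e. it leaks membership information.
accuracy = 0.5 * (guess_in.mean() + (1.0 - guess_out.mean()))
print(f"membership inference accuracy: {accuracy:.2f}")
```

If such an attack succeeds noticeably above chance, the model cannot be treated as anonymous with respect to its training data, which is exactly the identifiability question the paper raises.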

Keywords
Anonymisation, Pseudonymisation, ML Model, Adversarial Attacks, Privacy, Utility, Conference Proceedings
Citation
von Maltzan, S. (2022). The Concept of Identifiability in ML Models (D. Bastieri, G. Wills, P. Kacsuk, & V. Chang, eds.). Setúbal: SciTePress - Science and Technology Publications, Lda. https://doi.org/10.5220/0011081600003194
License
CC BY-NC-ND 4.0 International