The Future of Data Security: A Focus on Anonymisation Technologies

Join us on 28 and 29 October in Berlin for AnoSiDat, the congress of the nationwide Forschungsnetzwerk Anonymisierung, dedicated to anonymisation research for secure data use.
Under the roof of the impressive AXICA Berlin, insights, innovations, and interdisciplinary collaboration await you.

Learn more about the network

Registration

28 and 29 October

AXICA Kongress- und Tagungszentrum
Pariser Platz 3, 10117 Berlin

Register here

10:00

Welcome address from the Federal Ministry of Research, Technology and Space

Matthias Hauer, MdB

Welcome address by the Parliamentary State Secretary at the Federal Ministry of Research, Technology and Space

10:30 – 12:00

Panel discussion

Data use vs. data protection?

How do we reconcile political demands for data use (Data Act, ePA & GDNG, EHDS, coalition agreement) with regulatory data protection requirements and citizens' expectations of privacy?

Daniel Behrendt

Federal Ministry of Research, Technology and Space

Dr. Constanze Kurz

Spokesperson, Chaos Computer Club (CCC)

Prof. Dr. Esfandiar Mohammadi

Forschungsnetzwerk Anonymisierung

Thomas Köllmer

Fraunhofer IDMT

Keynotes from the Network

9:10 – 10:00

Anonymisation: A Methodological Challenge or an Administrative Problem?

Keynote by Prof. Dr. Markus Zwick, Statistisches Bundesamt (Federal Statistical Office)

13:30 – 15:30

The research network introduces itself

Contributions from the clusters and projects

More about the network

16:00 – 17:30

Panel discussion

Does large-scale use of health data have to be insecure?

How can we make scientific studies on ePA/EHDS data publicly usable without violating regulatory data protection requirements or citizens' expectations of privacy?

Dr. Christoph Kollwitz

Chief Product Officer @DOCYET

Prof. Dr. Hans Hermann Dirksen

Liebenstein Law

Prof. Dr. Thorsten Strufe

KIT, Projekt SynthiClick

From 17:30

Networking, get-together, and entertainment

Finger food and drinks, poster exhibition

29 October

The second day will be held in English.

9:10 – 10:10

Deployable Differential Privacy

Keynote by Shlomi Hod, PhD

10:40 – 12:30

Contributions from the Network

Keynotes from Members of the Forschungsnetzwerk Anonymisierung

Assessing deanonymisation risk in unstructured text
Stephanie Evert and the AnGer project team
When publishing anonymised text documents – e.g. court decisions on openjur.de – it is crucial to have a reliable assessment of their deanonymisation risk. The anonymisation quality of publicly accessible court decisions varies greatly (depending on their source) and is often insufficient to guarantee the anonymity of all parties involved. Using the automatic anonymisation algorithms developed in our research project AnGer, we are able to describe and quantify weaknesses in current anonymisation practice. In the vast majority of cases, only direct identifiers have been masked. We investigate the additional risk posed by pseudo-identifiers empirically through systematic re-identification experiments.
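The distinction drawn above between masked direct identifiers and remaining quasi-identifiers can be illustrated with a minimal sketch. This is not the AnGer anonymisation algorithm (which works automatically on unstructured text); the name list and placeholder are hypothetical:

```python
import re

# Hypothetical list of known direct identifiers; real systems
# discover these automatically, e.g. via named-entity recognition.
DIRECT_IDENTIFIERS = ["Max Mustermann", "Erika Musterfrau"]

def mask_direct_identifiers(text: str) -> str:
    """Replace each known direct identifier with a placeholder.

    Quasi-identifiers (dates, locations, professions) remain in the
    text: exactly the residual re-identification risk such
    experiments quantify.
    """
    for name in DIRECT_IDENTIFIERS:
        text = re.sub(re.escape(name), "[REDACTED]", text)
    return text
```

Applied to "Max Mustermann, born 1980 in Berlin, appealed.", the name is masked but the birth year and city survive, and in combination such attributes can still single out an individual.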
Science for Impact: How can complex topics be made accessible to the public? Using the example of knowledge transfer in the ANYMOS project
Marlin Dürrschnabel, Marie Simon
How can topics such as data protection, anonymization, and AI be brought out of the research arena and into the public sphere in a way that encourages people to listen, participate, and engage in discussion?
In the ANYMOS project, we tested new approaches to knowledge transfer at KIT’s TRIANGEL Transfer | Kultur | Raum: With formats such as “KI:NO Sommer” (cinema & lecture), Mobility Cafés, Knowledge Weeks, and “STULLE – wissenschaftlich belegt” (scientifically proven), research was translated into socially relevant dialogues – accessible, interactive, and entertaining.
Our talk shows how infotainment, participation, and a deliberate “non-research” role helped us act as mediators between science, business, and society. We share five key learnings from three years of project work, reflect on challenges, and show why transfer isn’t a one-way street.
The presentation is aimed at anyone interested in how interdisciplinary knowledge transfer can succeed beyond traditional dissemination.
Privacy-Preserving Analysis of Mobility Data
Gabriele Gührung, Dominik Schoop
Mobility data in the form of individuals' trajectories is necessary for the optimal planning and operation of efficient and economical mobility infrastructure and services. However, detailed trajectories are inherently personal data and can reveal a great deal of information about re-identified individuals. Consequently, detailed trajectories are either not collected in the first place, or the collected data poses a privacy risk to individuals. Privacy-preserving analysis of trajectories must ensure that the privacy of the data cannot be breached even by a data processor (input privacy), and that any published analysis results contain no personal information (output privacy). We present an approach developed in the AnoMoB project that uses homomorphic encryption and statistical methods to ensure input privacy and (locally) differentially private outputs for trajectories. Encrypted trajectories are provided to a data processor, who generates synthesised trajectories that are statistically similar to the originals while providing ɛ-differential privacy.
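The ɛ-differential privacy guarantee mentioned above can be sketched with the classic Laplace mechanism on a simple count query. This is a minimal illustration only, not the AnoMoB implementation (which additionally synthesises trajectories under homomorphic encryption):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(values, predicate, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one person changes a count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    masks any individual's contribution.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(sensitivity / epsilon)
```

Smaller ɛ means more noise and stronger privacy; the released value is always perturbed, so no single trajectory's presence can be inferred from the output.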
Lessons Learned from PET Adoption in the Public Sector
Jonathan Heiß (SINE), Julia Schöpp (Polyteia)
Privacy-enhancing technologies (PETs) offer significant potential for data sovereignty and privacy-preserving collaboration in the public sector: breaking up data silos, reducing process costs, and enabling novel value-creating applications. Yet their adoption in administrative processes remains limited despite increasing technological maturity. The ATLAS project addresses this gap, taking both technological and organizational perspectives to bring PET-enabled data collaboration (such as secure multi-party computation, differential privacy, and oblivious pseudonymization) into practice. This talk presents key lessons learned from our ongoing PET adoption initiatives in the public sector, based on extensive interviews and workshops conducted throughout the ATLAS project and beyond. We identify key barriers that hinder successful PET adoption in public administration, including the difficulty of communicating complex technologies and legal uncertainties such as the extent to which state-of-the-art technologies achieve anonymization or pseudonymization under the GDPR. Based on these insights, we propose strategies and avenues for improving collaboration between PET developers and public sector stakeholders, aiming to support more effective and practical pathways toward implementation.
How Anonymizations Fail
Thorsten Strufe (KIT)
Currently proposed anonymization methods have a number of weaknesses when used in practice and in fact often fail to deliver promised data protection guarantees. Based on the legal definition of personal data and its relevance in different application contexts, we show the risks that can arise from seemingly harmless data sets. In addition to classic identifiers such as names or DNA, we show the role of so-called quasi-identifiers – such as movement data, browser fingerprints, gait and other biometrics, or technical pseudonyms – which are often underestimated, but in combination allow a high degree of re-identifiability. The central component of the lecture is the systematic analysis of typical attack scenarios on supposedly anonymized data: from trivial breaks such as hash reconstructions to semantically complex recombinations and modern methods of identification using AI. Using our own research results and empirical studies, we show how easily many supposedly anonymized data sets – especially those with biometric or sensitive content – can be traced back to individuals. It becomes clear that many published studies have methodological flaws, in particular due to incorrect attacker models or inadequate evaluation procedures.
Using anonymized traffic routes to optimize city traffic
IIP – Intelligente Nutzung verschiedener Verkehrsmittel (intelligent use of different modes of transport)
The increasing prevalence of motorized private transport in medium and large cities has led to significant road congestion. To address this issue, the IIP project utilizes mobility data collected from heterogeneous sensor sources (e.g., infrastructure sensors, mobile devices), which are processed in anonymized form within the Urban Data Platform (UDP) of the City of Osnabrück. As part of a crowdsensing approach, a mobile app records commuting trajectories and applies anonymization mechanisms prior to data transmission. A modular combination of techniques, such as k-anonymity and differential privacy, ensures the privacy of commuting data and protects individual users. The aggregated data feeds into a digital traffic twin that predicts traffic conditions up to three days in advance. Based on these forecasts, commuters can adapt their travel behavior by shifting departure times, working from home, or switching to public transportation. In addition, the approach supports improved public transport planning and enables more efficient use of road infrastructure, for example by optimized traffic light control. The system is currently being evaluated in a large-scale field study involving several hundred participants.
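The k-anonymity component of such a modular pipeline can be sketched as a simple suppression step. This is a generic illustration, not the IIP project's mechanism; `district` and `mode` are hypothetical quasi-identifier fields:

```python
from collections import Counter

def enforce_k_anonymity(records, quasi_ids, k):
    """Suppress records whose quasi-identifier combination occurs
    fewer than k times, so every released record is indistinguishable
    from at least k-1 others on those attributes."""
    key = lambda r: tuple(r[q] for q in quasi_ids)
    counts = Counter(key(r) for r in records)
    return [r for r in records if counts[key(r)] >= k]

commutes = [
    {"district": "A", "mode": "bike"},
    {"district": "A", "mode": "bike"},
    {"district": "B", "mode": "car"},  # unique, so suppressed for k=2
]
released = enforce_k_anonymity(commutes, ["district", "mode"], k=2)
```

In practice, suppression is usually combined with generalization (e.g. coarsening GPS points to districts) to avoid discarding too much data, and with differential privacy on aggregate outputs as the abstract notes.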
Demonstration of the AVATAR platform technology
Florian Rasche, Juliane Dettling-Papargyris (Navimatix)
The availability of health data for research studies is strictly limited by data privacy regulations. Restricting access to personal health information is key to preventing discrimination based on state of health and other abuse of this sensitive data. Nonetheless, anonymized health data is essential, e.g. to small and medium suppliers of medical devices, who have to fulfill post-market surveillance requirements but cannot afford regular repetition of clinical studies, and also to public health researchers interested e.g. in correlations between certain conditions. The AVATAR platform offers a way for researchers and study personnel to request anonymized data tailored to their research question from data holders. Data may be anonymized before leaving the data holders' infrastructure; a further anonymization step is performed after combining the data of several data holders. The combined, anonymized data is then provided to the requesting person. Data transfers in the AVATAR platform are based on the standards of the International Data Spaces Association (IDSA), which enables secure and sovereign federated data sharing using standard data space components. Requesting users are offered a model-based frontend to formulate their filters and restrictions on the data, depending on the underlying data model.
13:00 – 14:30

Poster Session + Final Voting

14:30 – 15:45

Poster Awards I

Top 3 presentations from two categories

16:00 – 17:15

Poster Awards II

Top 3 presentations from two categories

17:15 – 18:15

Meeting of the Steering Committee

Plenum

Register now!

Register here