Join us on 28 and 29 October in Berlin for AnoSiDat, the congress of the nationwide Forschungsnetzwerk Anonymisierung, dedicated to anonymisation research for secure data use.
Under the roof of the impressive AXICA Berlin, insights, innovations, and interdisciplinary collaboration await you.
AXICA Kongress- und Tagungszentrum
Pariser Platz 3, 10117 Berlin
10:00
Welcome address by the Parliamentary State Secretary at the Bundesministerium für Forschung, Technologie und Raumfahrt
10:30 – 12:00
How do we reconcile data use requirements from policy (Data Act, ePA & GDNG, EHDS, coalition agreement) with regulatory data protection requirements and citizens' privacy expectations?
Keynote by Prof. Dr. Markus Zwick, Statistisches Bundesamt
Contributions from the clusters and projects
16:00 – 17:30
How can we make scientific studies on ePA / EHDS data publicly usable without violating regulatory data protection requirements or citizens' privacy expectations?
Finger food and drinks, poster exhibition
The second day will be held in English.
Keynote by Shlomi Hod, PhD
10:40 – 12:30
Keynotes from Members of the Forschungsnetzwerk Anonymisierung
When publishing anonymised text documents – e.g. court decisions on openjur.de – it is crucial to have a reliable assessment of their deanonymisation risk. The anonymisation quality of publicly accessible court decisions varies greatly (depending on their source) and is often insufficient to guarantee the anonymity of all parties involved. Using the automatic anonymisation algorithms developed in our research project AnGer, we are able to describe and quantify weaknesses in current anonymisation practice. In the vast majority of cases, only direct identifiers have been masked. We investigate the additional risk posed by pseudo-identifiers empirically through systematic re-identification experiments.
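For illustration, a minimal sketch of the kind of linkage-based re-identification experiment described above: a masked metadata table is joined to an auxiliary public table on shared quasi-identifier attributes, and documents with a unique match are flagged. The column names, toy data, and pandas-based approach are assumptions for this example, not the AnGer tooling.

```python
# Minimal sketch (not the AnGer pipeline): estimating re-identification risk by
# linking "masked" document metadata to an auxiliary public table on
# quasi-identifiers. Column names (court, year, case_type) are hypothetical.
import pandas as pd

masked = pd.DataFrame({
    "doc_id":    [1, 2, 3, 4],
    "court":     ["AG Berlin", "AG Berlin", "LG Köln", "LG Köln"],
    "year":      [2021, 2021, 2022, 2022],
    "case_type": ["tenancy", "traffic", "tenancy", "tenancy"],
})
auxiliary = pd.DataFrame({
    "person":    ["A", "B", "C"],
    "court":     ["AG Berlin", "LG Köln", "LG Köln"],
    "year":      [2021, 2022, 2022],
    "case_type": ["traffic", "tenancy", "tenancy"],
})

quasi_ids = ["court", "year", "case_type"]
linked = masked.merge(auxiliary, on=quasi_ids)

# A document is at risk if its quasi-identifier combination matches exactly
# one person in the auxiliary data.
match_counts = linked.groupby("doc_id")["person"].nunique()
at_risk = match_counts[match_counts == 1].index.tolist()
print("Documents with a unique linkage:", at_risk)
```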
How can topics such as data protection, anonymization, and AI be brought out of the research arena and into the public sphere in a way that encourages people to listen, participate, and engage in discussion? In the ANYMOS project, we tested new approaches to knowledge transfer at KIT’s TRIANGEL Transfer | Kultur | Raum: With formats such as “KI:NO Sommer” (cinema & lecture), Mobility Cafés, Knowledge Weeks, and “STULLE – wissenschaftlich belegt” (scientifically proven), research was translated into socially relevant dialogues – accessible, interactive, and entertaining. Our talk shows how infotainment, participation, and a deliberate “non-research” role helped us act as mediators between science, business, and society. We share five key learnings from three years of project work, reflect on challenges, and show why transfer isn’t a one-way street. The presentation is aimed at anyone interested in how interdisciplinary knowledge transfer can succeed beyond traditional dissemination.
Mobility data in the form of individuals’ trajectories is necessary for the optimal planning and operation of efficient and economic mobility infrastructure and services. However, detailed trajectories are inherently person-related and can reveal a large amount of information about re-identified individuals. Consequently, detailed trajectories are not collected in the first place, or the collected data poses a privacy risk to individuals. Privacy-preserving analysis of trajectories must ensure that the privacy of the data cannot be breached, even by a data processor (input privacy), and that any publicised analysis results do not contain personal information (output privacy). We present an approach developed in the AnoMoB project that uses homomorphic encryption and statistical methods to ensure input privacy and (local-)differential output privacy for trajectories. Encrypted trajectories are provided to a data processor who generates synthesised trajectories that have statistically similar properties to the original trajectories, while providing ɛ-differential privacy.
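To make the output-privacy half of this idea concrete, here is a tiny sketch of the standard Laplace mechanism applied to hypothetical origin-destination counts derived from trajectories. It is not the AnoMoB system (which additionally uses homomorphic encryption and trajectory synthesis); zone names, epsilon, and sensitivity are illustrative assumptions.

```python
# Minimal illustration of epsilon-differentially private release of trajectory
# statistics: add Laplace(sensitivity / epsilon) noise to each count before
# publication. Zone names, epsilon, and sensitivity are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# True origin-destination counts derived from (hypothetical) trajectories.
od_counts = {
    ("zone_A", "zone_B"): 120,
    ("zone_A", "zone_C"): 45,
    ("zone_B", "zone_C"): 80,
}

epsilon = 1.0      # privacy budget (assumption)
sensitivity = 1.0  # one person contributes at most one trip per cell (assumption)

noisy_counts = {
    od: count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    for od, count in od_counts.items()
}
print(noisy_counts)
```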
Privacy-enhancing Technologies (PETs) offer significant potential for data sovereignty and privacy-preserving collaboration in the public sector, breaking data silos, reducing process costs, and enabling novel applications for value creation. Yet their adoption in administrative processes remains limited despite increasing technological maturity. The ATLAS project addresses this gap, taking both technological and organizational perspectives to bring PET-enabled data collaboration, based on technologies such as secure Multi-Party Computation, differential privacy, and oblivious pseudonymization, into practice. This talk presents key lessons learned from our ongoing PET adoption initiatives in the public sector, based on extensive interviews and workshops conducted throughout the ATLAS project and beyond. We identify key barriers that hinder successful PET adoption in public administration, including the difficulty of communicating complex technologies and legal uncertainties, such as the extent to which state-of-the-art technologies achieve anonymization or pseudonymization under the GDPR. Based on these insights, we propose potential strategies and avenues for improving collaboration between PET developers and public sector stakeholders, aiming to support more effective and practical pathways toward implementation.
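For readers unfamiliar with the PETs named above, the following minimal sketch shows additive secret sharing, a common building block of secure Multi-Party Computation: parties learn the sum of their inputs without revealing them. The values, party count, and modulus are illustrative assumptions; this is not the ATLAS tooling.

```python
# Additive secret sharing over a fixed modulus: each input is split into
# random shares that sum to the input, so only the aggregate is revealed.
import secrets

MOD = 2**61 - 1  # arithmetic modulus (assumption)

def share(value: int, n_parties: int) -> list[int]:
    """Split value into n additive shares modulo MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

inputs = [42, 17, 99]                       # each party's private input
all_shares = [share(v, 3) for v in inputs]  # party i distributes its shares

# Each party sums the shares it received (one per input) ...
partial_sums = [sum(col) % MOD for col in zip(*all_shares)]
# ... and the partial sums are combined to reveal only the total.
total = sum(partial_sums) % MOD
print(total)  # 158
```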
Currently proposed anonymization methods have a number of weaknesses when used in practice and often fail to deliver the promised data protection guarantees. Based on the legal definition of personal data and its relevance in different application contexts, we show the risks that can arise from seemingly harmless data sets. In addition to classic identifiers such as names or DNA, we show the role of so-called quasi-identifiers – such as movement data, browser fingerprints, gait and other biometrics, or technical pseudonyms – which are often underestimated but, in combination, allow a high degree of re-identifiability. The central component of the lecture is the systematic analysis of typical attack scenarios on supposedly anonymized data: from trivial breaks such as hash reconstructions to semantically complex recombinations and modern methods of identification using AI. Using our own research results and empirical studies, we show how easily many supposedly anonymized data sets – especially those with biometric or sensitive content – can be traced back to individuals. It becomes clear that many published studies have methodological flaws, in particular due to incorrect attacker models or inadequate evaluation procedures.
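To make the "trivial breaks such as hash reconstructions" concrete, here is a minimal, hypothetical sketch of a dictionary-style attack on unsalted hashes of phone numbers: because the candidate space is small, the attacker simply hashes every candidate and compares. The number format, prefix, and range are invented for illustration.

```python
# Reconstructing "anonymized" identifiers by exhaustively hashing a small
# candidate space. Here we pretend mobile numbers were masked with plain,
# unsalted SHA-256 (illustrative assumption).
import hashlib

def sha256(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# "Anonymized" dataset as it might be published.
published_hashes = {sha256("+4915112345678"), sha256("+4915187654321")}

# Attacker enumerates a plausible number range and compares hashes.
recovered = []
for suffix in range(12340000, 12350000):  # tiny demo range
    candidate = f"+49151{suffix:08d}"
    if sha256(candidate) in published_hashes:
        recovered.append(candidate)

print("Recovered identifiers:", recovered)
```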
The increasing prevalence of motorized private transport in medium and large cities has led to significant road congestion. To address this issue, the IIP project utilizes mobility data collected from heterogeneous sensor sources (e.g., infrastructure sensors, mobile devices), which are processed in anonymized form within the Urban Data Platform (UDP) of the City of Osnabrück. As part of a crowdsensing approach, a mobile app records commuting trajectories and applies anonymization mechanisms prior to data transmission. A modular combination of techniques, such as k-anonymity and differential privacy, ensures the privacy of commuting data and protects individual users. The aggregated data feeds into a digital traffic twin that predicts traffic conditions up to three days in advance. Based on these forecasts, commuters can adapt their travel behavior by shifting departure times, working from home, or switching to public transportation. In addition, the approach supports improved public transport planning and enables more efficient use of road infrastructure, for example through optimized traffic light control. The system is currently being evaluated in a large-scale field study involving several hundred participants.
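As a rough illustration of the k-anonymity component of such a modular pipeline (not the IIP app itself), the sketch below suppresses generalized commuting records whose quasi-identifier combination occurs fewer than k times. The field names, grid cells, and value of k are hypothetical.

```python
# Enforce k-anonymity on generalized commuting records before release:
# keep only records whose quasi-identifier combination appears at least k times.
from collections import Counter

k = 5  # hypothetical anonymity threshold
records = [
    {"origin_cell": "grid_12", "dest_cell": "grid_40", "time_slot": "07:00-08:00"},
    {"origin_cell": "grid_12", "dest_cell": "grid_40", "time_slot": "07:00-08:00"},
    {"origin_cell": "grid_07", "dest_cell": "grid_40", "time_slot": "08:00-09:00"},
    # ... more generalized records ...
]

def quasi_id(r: dict) -> tuple:
    return (r["origin_cell"], r["dest_cell"], r["time_slot"])

counts = Counter(quasi_id(r) for r in records)
released = [r for r in records if counts[quasi_id(r)] >= k]
print(f"Releasing {len(released)} of {len(records)} records")
```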
The availability of health data for research studies is strictly limited by data privacy regulations. Restricting access to personal health information is key to preventing discrimination based on state of health and other abuse of this sensitive data. Nonetheless, anonymized health data is essential, e.g. for small and medium suppliers of medical devices, who have to fulfill post-market surveillance requirements but cannot afford regular repetition of clinical studies, and also for public health researchers interested, for example, in correlations of certain conditions. The AVATAR platform offers a possibility for researchers and study personnel to request anonymized data tailored to their research question from data holders. Data may be anonymized before leaving the data holders’ infrastructure. A further anonymization step is performed after combining the data of several data holders. The combined and anonymized data is then provided to the requesting person. Data transfers in the AVATAR platform are based on the standards of the International Data Spaces Association (IDSA), which enables secure and sovereign federated data sharing using standard data space components. For requesting users, we offer a model-based frontend to formulate their filters and restrictions on the data, depending on the underlying data model.
Top-3 Presentations from 2 Categories
Plenary
AXICA Kongress- und Tagungszentrum
Pariser Platz 3, 10117 Berlin