

A story of talent: former Law student of Sant’Anna School and the University of Pisa is the only European scholar recognized with the “Future of Privacy Forum” (FPF) Award

Publication date: 23.01.2020

A former Law graduate of Sant’Anna School and the University of Pisa, affiliated with academic departments at both institutions, is the only European scholar to receive the “Future of Privacy Forum” Award at the 10th Annual Privacy Papers for Policymakers event, taking place in the Hart Senate Office Building in Washington, DC, on February 6, 2020.

The Future of Privacy Forum (FPF) is an American nonprofit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. FPF brings together industry, academics, consumer advocates, and other thought leaders to explore the challenges posed by technological innovation and to develop privacy protections, ethical norms, and workable business practices.

Gianclaudio Malgieri, a former student of Sant’Anna School and the University of Pisa and an affiliate of the Dirpolis Institute, is one of the winning authors at the 10th Annual Privacy Papers for Policymakers, which features a keynote speech by Federal Trade Commission Commissioner Wilson and discussions of the leading 2019 privacy research and analytical work relevant to policymakers. His winning paper, “Algorithmic Impact Assessments under the GDPR: Producing Multi-layered Explanations”, was co-authored with Margot Kaminski, an associate professor at the University of Colorado Law School.

The award-winning paper by Gianclaudio Malgieri and Margot Kaminski was selected from among five finalists. It examines how a Data Protection Impact Assessment (DPIA) links the two faces of the GDPR’s approach to algorithmic accountability: individual rights and systemic collaborative governance. The authors address the relationship between DPIAs and individual transparency rights, and propose that impact assessments connect the GDPR’s two methods of governing algorithmic decision-making, both providing systemic governance and serving as an important “suitable safeguard” of individual rights.
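To make the idea of a “multi-layered explanation” more concrete, here is a minimal Python sketch. It is not taken from the paper: every class and field name is a hypothetical assumption, chosen only to show how a group-level (systemic) layer, produced once during a DPIA, might be paired with an individual, decision-level layer.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SystemicLayer:
    """Group-level explanation, produced once during the DPIA (hypothetical fields)."""
    purpose: str                 # why the algorithmic system processes personal data
    data_categories: List[str]   # categories of personal data it uses
    logic_summary: str           # plain-language description of the decision logic
    risk_mitigations: List[str]  # safeguards identified in the impact assessment

@dataclass
class IndividualLayer:
    """Decision-level explanation, generated for one data subject."""
    decision: str                # outcome communicated to the individual
    main_factors: List[str]      # factors that most influenced this decision
    contest_procedure: str       # how the individual can contest the decision

@dataclass
class MultiLayeredExplanation:
    """Pairs both layers, mirroring the paper's link between systemic
    governance (the DPIA) and individual transparency rights."""
    systemic: SystemicLayer
    individual: IndividualLayer

    def render(self) -> str:
        # Simple plain-text rendering, e.g. for a notice to the data subject.
        return (
            f"Decision: {self.individual.decision}\n"
            f"Key factors: {', '.join(self.individual.main_factors)}\n"
            f"System purpose: {self.systemic.purpose}\n"
            f"How the system works: {self.systemic.logic_summary}\n"
            f"Safeguards: {'; '.join(self.systemic.risk_mitigations)}\n"
            f"To contest: {self.individual.contest_procedure}"
        )

# Hypothetical usage with invented values:
example = MultiLayeredExplanation(
    systemic=SystemicLayer(
        purpose="credit scoring",
        data_categories=["income", "payment history"],
        logic_summary="a scoring model weighing income and payment history",
        risk_mitigations=["human review of borderline scores"],
    ),
    individual=IndividualLayer(
        decision="loan application refused",
        main_factors=["short payment history"],
        contest_procedure="write to the controller's DPO within 30 days",
    ),
)
print(example.render())
```

The point of the two dataclasses is simply that the systemic layer is reusable across all affected individuals, while the individual layer changes per decision, which is one way to read the paper’s claim that the DPIA can feed both systemic governance and individual rights.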

Gianclaudio Malgieri, 27, graduated in 2016 with a thesis on profiling algorithms and automated decision-making, working with Giovanni Comandé, Sant’Anna School Professor of Comparative Private Law. Malgieri has taught a wide range of courses on data protection at Sant’Anna School and has cooperated with the Diritto dell’Informatica (IT Law) group of the University of Pisa. He is currently pursuing a doctoral degree in Law, Science, Technology and Society at the Vrije Universiteit Brussel.

“My co-authored paper, one of the winning articles at the 10th Annual Privacy Papers for Policymakers, proposes a model Algorithmic Impact Assessment in the context of the GDPR,” said Gianclaudio Malgieri. “Our analysis suggests that the current focus on the right to explanation is too narrow. We therefore call for data controllers to use the required DPIA process to produce a ‘multi-layered explanation’ of algorithmic systems. This concept of multi-layered explanations not only more accurately describes what the GDPR is attempting to do, but also normatively better fills potential gaps between the GDPR’s two approaches to algorithmic accountability. We have seen, for instance, that AI and algorithmic decision-making can lead to unfair and illegal discrimination. Data processing can have a real impact on human lives, for example in insurance matters affecting the ability of individuals to participate in society. There is an obvious risk that personal decisions are based on grounds which anti-discrimination law prohibits, such as the exploitation of gender and sensitive personal data. People have the right to be protected under both data protection and anti-discrimination law. Sensitive data are considered worthy of protection because of their close connection with various fundamental rights and their high risk of discriminatory outcomes. We therefore need to consider that using such data in algorithms, such as those used for behavioural advertising, poses a serious risk to individuals’ right to personal data protection.

“An integrated approach, using transparency rights to uncover discrimination, offers several opportunities. Anti-discrimination law is equipped to address wrongful practices: in the US, for instance, women who were victims of violence received advertising for self-defense tools, a case in which unethically collected data fed a discriminatory commercial practice. EU legal protection against discriminatory algorithms, together with EU anti-discrimination law, is relevant here because it offers tools to protect against the imbalance between merchants and individuals, ranging from enhanced transparency (e.g. pre-contractual information obligations) to contractual remedies addressing the non-conformity of an acquired good or service with the contract. This means that EU anti-discrimination law applies to algorithms insofar as they affect individuals’ access to goods and services. At the EU level, discrimination in access to goods and services is prohibited only with regard to racial or ethnic origin and gender, although national law may provide more comprehensive protection and include additional grounds.”
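The discrimination risk Malgieri describes can also be illustrated with a small audit sketch. The following Python example is hypothetical and not from the award-winning paper: it compares favourable-outcome rates across a protected attribute such as gender, one simple check a DPIA-style review might include. The 0.8 threshold mentioned in the comments is the US “four-fifths rule”, a common benchmark rather than an EU legal test.

```python
from collections import defaultdict
from typing import Iterable, Tuple, Dict

def selection_rates(decisions: Iterable[Tuple[str, bool]]) -> Dict[str, float]:
    """Compute the favourable-outcome rate per group.

    decisions: (group_label, favourable_outcome) pairs, e.g. drawn from an
    advertising or insurance-pricing system's audit log (hypothetical input).
    """
    totals: Dict[str, int] = defaultdict(int)
    favourable: Dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome  # bool counts as 0 or 1
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: Dict[str, float]) -> float:
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 (e.g. under the 0.8 'four-fifths rule'
    benchmark) flag a pattern worth examining in a DPIA."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (gender, received_favourable_offer)
log = [("F", False), ("F", False), ("F", True),
       ("M", True), ("M", True), ("M", False)]
rates = selection_rates(log)
print(rates)                          # {'F': 0.333..., 'M': 0.666...}
print(disparate_impact_ratio(rates))  # 0.5 -> flag for review
```

A real audit would of course need legally meaningful group definitions and outcome measures; the sketch only shows why logged, transparent decisions are a precondition for uncovering discrimination at all.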

Cover Photo: the United States Senate.

Photogallery: Gianclaudio Malgieri