Sandra Wachter

Visiting Associate Professor of Law

Spring 2020

Areeda 130

617-998-1023

Assistant: Miriam Silva / 617-496-1755

Biography

Sandra Wachter is an Associate Professor and Senior Research Fellow in Law and Ethics of AI, Big Data, Robotics and Internet Regulation at the Oxford Internet Institute (OII) at the University of Oxford. She is a Visiting Associate Professor of Law at Harvard Law School, where she is working on her current British Academy project “AI and the Right to Reasonable Algorithmic Inferences”, which aims to find mechanisms that provide greater protection for the right to privacy and identity, and against algorithmic discrimination.

Sandra is also a Fellow at the Alan Turing Institute in London, a Fellow of the World Economic Forum’s Global Futures Council on Values, Ethics and Innovation, a Member of the European Commission’s Expert Group on Autonomous Cars, an Academic Affiliate at the Bonavero Institute of Human Rights at Oxford’s Law Faculty and a member of the Law Committee of the IEEE. Previously, Sandra worked at the Royal Academy of Engineering and at the Austrian Ministry of Health.

Sandra specialises in technology, IP, and data protection law, as well as European, international, human rights (online), and medical law. Her current research focuses on the legal and ethical implications of AI, Big Data, and robotics, as well as profiling, inferential analytics, explainable AI, algorithmic bias, governmental surveillance, predictive policing, and human rights online.

Sandra works on the governance and ethical design of algorithms, including the development of standards to open up the ‘AI Blackbox’ and to enhance algorithmic accountability, transparency, and explainability. She also works on ethical auditing methods for AI to combat bias and discrimination and to ensure fairness and diversity, with a focus on non-discrimination law. Group privacy, autonomy, and identity protection in profiling and inferential analytics are also on her research agenda.

Sandra is also interested in legal and ethical aspects of robotics (e.g. surgical, domestic and social robots) and autonomous systems (e.g. autonomous and connected cars), including liability, accountability, and privacy issues as well as international policies and regulatory responses to the social and ethical consequences of automation (e.g. future of the workforce, workers’ rights).

Sandra serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies. Her work has been featured in (among others) Forbes, Harvard Business Review, The Guardian, BBC, The Telegraph, Financial Times, Wired, CBC, Huffington Post, Science, Nature, New Scientist, FAZ, Die Zeit, Le Monde, HBO, Engadget, El Mundo, The Sunday Times, The Verge, Vice Magazine, Sueddeutsche Zeitung, and SRF.

In 2019, she won the Privacy Law Scholars Conference (PLSC) Junior Scholars Award (pre-tenure) for her paper “A Right to Reasonable Inferences.” In 2018 she won the ‘O2RB Excellence in Impact Award’, and in 2017 the CognitionX ‘AI Superhero Award’ for her contributions to AI governance.

Sandra studied at the University of Oxford (MSc) and at the Law Faculty of the University of Vienna (Mag. iur. and PhD iur.).

Areas of Interest

Sandra Wachter & Brent Mittelstadt, A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, 2019 Colum. Bus. L. Rev. 494.
Categories: Technology & Law; International, Foreign & Comparative Law
Sub-Categories: European Law; Digital Property; Information Privacy & Security; Networked Society; Intellectual Property Law; Cyberlaw
Type: Article
Abstract
Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice (ECJ). This Article shows that individuals are granted little control or oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively “economy class” personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Articles 13–15), rectify (Article 16), delete (Article 17), object to (Article 21), or port (Article 20) personal data are significantly curtailed for inferences. The GDPR also provides insufficient protection against sensitive inferences (Article 9) or remedies to challenge inferences or important decisions based on them (Article 22(3)). This situation is not accidental. In standing jurisprudence the ECJ has consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectify, block, or erase it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent. Current policy proposals addressing privacy protection (the ePrivacy Regulation and the EU Digital Content Directive) and Europe’s new Copyright Directive and Trade Secrets Directive also fail to close the GDPR’s accountability gaps concerning inferences. This Article argues that a new data protection right, the “right to reasonable inferences,” is needed to help close the accountability gap currently posed by “high risk inferences,” meaning inferences drawn from Big Data analytics that damage privacy or reputation, or have low verifiability in the sense of being predictive or opinion-based while being used in important decisions. This right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data form a normatively acceptable basis from which to draw inferences; (2) why these inferences are relevant and normatively acceptable for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged.
Sandra Wachter, Brent Mittelstadt & Chris Russell, Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR, 31 Harv. J.L. & Tech. 841 (2018).
Categories: International, Foreign & Comparative Law; Technology & Law
Sub-Categories: European Law; Networked Society; Information Privacy & Security; Digital Property; Cyberlaw; Intellectual Property Law
Type: Article
Abstract
There has been much discussion of the “right to explanation” in the EU General Data Protection Regulation, and its existence, merits, and disadvantages. Implementing a right to explanation that opens the ‘black box’ of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions around their value. Data controllers have an interest in not disclosing information about their algorithms that contains trade secrets, violates the rights and freedoms of others (e.g. privacy), or allows data subjects to game or manipulate decision-making. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR, and the extent to which they hinge on opening the ‘black box’. We suggest data controllers should offer a particular type of explanation, ‘unconditional counterfactual explanations’, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the “closest possible world.” As multiple variables or sets of variables can lead to one or more desirable outcomes, multiple counterfactual explanations can be provided, corresponding to different choices of nearby possible worlds for which the counterfactual holds. Counterfactuals describe a dependency on the external facts that lead to that decision without the need to convey the internal state or logic of an algorithm. As a result, counterfactuals serve as a minimal solution that bypasses the current technical limitations of interpretability, while striking a balance between transparency and the rights and freedoms of others (e.g. privacy, trade secrets).
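The intuition behind a counterfactual explanation can be sketched as a simple optimisation. This is a rough illustration only, not the article’s precise formulation, and the symbols f, x, x′, y′, and d are notation introduced here for the sketch: given a model f, an individual’s data x, and a desired outcome y′, a counterfactual is a nearby point x′ that would have produced that outcome,

% Illustrative sketch only; notation introduced here, not taken from the article.
x' \;=\; \arg\min_{x'} \; d(x, x') \quad \text{subject to} \quad f(x') = y',

where d is some distance measure over the features. Each nearby x′ that satisfies the constraint yields one counterfactual explanation, which is why several alternative counterfactuals can be offered for the same decision without revealing the model’s internal logic.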
Sandra Wachter, Brent Mittelstadt & Luciano Floridi, Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, 7 Int'l Data Privacy L. 76 (2017).
Categories: International, Foreign & Comparative Law; Technology & Law
Sub-Categories: European Law; Cyberlaw; Digital Property; Information Privacy & Security; Networked Society
Type: Article
Abstract
Since approval of the European Union General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems will be legally mandated by the GDPR once it is in force, in 2018. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive meaningful, but properly limited, information (Articles 13–15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a ‘right to be informed’. The ambiguity and limited scope of the ‘right not to be subject to automated decision-making’ contained in Article 22 (from which the alleged ‘right to explanation’ stems) raises questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless. We propose a number of legislative steps that, if implemented, may improve the transparency and accountability of automated decision-making when the GDPR comes into force in 2018.
