From determining credit scores to predicting whether a defendant may reoffend, artificial intelligence (AI) tools are increasingly being used by people and organizations in positions of authority to make important, often life-altering decisions. But how do these uses affect human rights, such as the right to equality before the law and the right to an education?
A new report from the Berkman Klein Center for Internet & Society (BKC) addresses this issue and weighs the positive and negative impacts of AI on human rights through six “use cases” of algorithmic decision-making systems, including criminal justice risk assessments and credit scores. Whereas many other reports and studies have focused on the ethics of AI, the BKC report is one of the first efforts to analyze AI’s impacts through a human rights lens, and it proposes a new framework for thinking about those impacts. The report was funded, in part, by the Digital Inclusion Lab at Global Affairs Canada.
“One of the things I liked a lot about this project and about a lot of the work we’re doing [in the Algorithms and Justice track of the Ethics and Governance of AI Initiative] is that it’s extremely current and tangible. There are a lot of far-off science fiction scenarios that we’re trying to think about, but there’s also stuff happening right now,” says Professor Christopher Bavitz, the WilmerHale Clinical Professor of Law, Managing Director of the Cyberlaw Clinic at BKC, and senior author on the report. Bavitz also leads the Algorithms and Justice track of the BKC project on the Ethics and Governance of AI Initiative, which developed this report.
The authors assess the impact of AI on human rights in each use case by identifying the human rights implications of the systems that predated the introduction of AI, then contrasting that baseline with how AI is changing those implications. The rights evaluated in the report include the right to freedom from discrimination, the right to education, and the right to an adequate standard of living, among others. Levin Kim, an author on the report, also spearheaded the creation of a visualization that illustrates the connections between the rights and use cases studied. The results of the analysis aren’t always clear-cut, however.
[hero-image src="https://today.law.harvard.edu/wp-content/uploads/2018/09/Image-2018-09-27-at-4.11.24-PM.png"]
“It was striking to me that there are profound distributive effects; for every single one of our use cases, we could identify rights holders that were positively impacted by AI, and other rights holders who were negatively impacted, often on the same right. That indeterminacy was really interesting, that was not what I expected,” says Vivek Krishnamurthy, an author on the report and an affiliate at BKC.
For an illustrative example, consider the human rights impacts of using AI to decide whether to extend credit to an individual. On the one hand, the increased availability of data for use in credit scoring may ultimately harm some individuals by perpetuating biases and discrimination. On the other, for individuals in developing countries who lack a long trail of traditional credit data, these AI systems afford access to credit that may otherwise have been unavailable.
This latter finding, that AI may provide access to credit for groups previously excluded, was “pleasantly surprising” to report author Filippo Raso ’18. “Prior to this report, I was unaware of the limitations of traditional credit scoring algorithms,” explains Raso, who hopes to continue his work on AI as he joins a prominent Washington, D.C. law firm this fall. “With artificial intelligence, many more people may gain access to credit and, by extension, to improvements in their standard of living. That’s not to say artificial intelligence is a panacea to all the problems of credit scoring—in fact, it introduces new challenges. But it is an improvement from the existing system in many ways.”
One benefit of examining the contextual environment that existed before AI was deployed in each area, as the report does, is that it highlights preexisting conditions that might otherwise be misattributed to AI, the authors point out.
“It’s important to acknowledge that there’s this background, we’re not starting with a blank slate, and there’s lots of history here. In fact, many of the problems the introduction of technology is going to cause is because it’s automating bad human processes,” Krishnamurthy says. “Do you blame the AI system for passing racist sentences, or do you blame hundreds of years of human judges who have been discriminating against certain segments of the population?”
This mindset also offers a chance to reflect on these institutional systems. “The introduction of AI into these [existing systems] presents a unique opportunity to reassess the values we are institutionalizing. If we have the chance to make the system more fair, or to enhance a specific right for more people by using AI, should we? And can we do that within the existing legal framework?” asks Hannah Hilligoss, also an author on the paper.
The BKC team based their evaluation of the human rights impacts of AI on the Universal Declaration of Human Rights (UDHR) and the United Nations Guiding Principles on Business and Human Rights (UNGP).
Since many AI tools are developed by private industry, the authors emphasize that businesses involved in the creation and deployment of AI should conduct the due diligence called for by the UNGP and respect the human rights enshrined in the UDHR.
“As code, written predominantly by the private sector, becomes more powerful and exerts more influence over everyday life, the fairness of these tools must be ensured. The [UNGP] takes center stage in defining private obligations to respect human rights, regardless of national law,” Raso says. “The [UNGP] will only grow in relevance.”
As part of that reflection, the team hopes its report’s framework will be useful for thinking through the potential risks and benefits.
“I think that the human rights framework is a very useful way for thinking about the social consequences of business activities. I would be gratified if that existing framework, which is so powerful, is used by the private sector in deploying these systems,” Krishnamurthy says.
Bavitz agrees that the framework proposed by the report has the potential to have a considerable impact on the field. “I do think that this is a huge topic that’s going to be the subject of a lot of conversations among major human rights organizations, and that we have an opportunity here, in coordination with some of the actors in [the Canadian] government that have been really forward-thinking about it, to really set out an early framework to think about these issues,” Bavitz says. “Maybe it gets changed, it gets modified, or it changes over time, but I think we’re out ahead of this a little bit, and that’s really exciting.”