Predictive Modeling. Machine Learning. Artificial Intelligence. In the blink of an eye, cutting-edge computational innovations have gone from unknown to ubiquitous. But what will be their impact on the United States justice system? The answer, according to two Harvard Law School civil rights experts, is complicated.

“We do not have the preparedness for what is already happening. We still do civil justice the way we did in 1906 … when people went to lectures in horse-and-buggy, and doctors were still using leeches, and we had child labor,” said Martha Minow, the 300th Anniversary University Professor, during an event titled “(Deep) Learning from the Bench: A Conversation on Algorithmic Fairness,” hosted by the Berkman Klein Center for Internet & Society. 

“Everything in the world has changed except how we resolve legal disputes,” she said.

Minow joined retired Canadian Supreme Court Justice and Harvard Law School Visiting Professor of Law Rosalie Silberman Abella in a conversation, moderated by S.J.D. candidate Maroussia Lévesque, which explored a series of key questions about algorithms and fairness:  

How reliable are these unprecedented tools?

How great are the risks they present?

And is the American legal system equipped to handle these new challenges?

Minow — whose article “Equality, Equity, and Algorithms: Learning from Justice Rosalie Abella” was recently published in the University of Toronto Law Journal — pointed to America’s equality-centric civil rights system as a significant disadvantage in the effort to uphold justice in a world of widespread automation. She explained that modern civil rights jurisprudence in the United States has historically sought to promote equality under the basic premise that justice demands every citizen should be treated the same.

Many countries inspired by the U.S. civil rights system, however, later seized the opportunity to improve upon the American model by establishing equity-centric systems where justice demands certain individuals should be treated differently based on their needs.

Abella noted how her observations while heading the Royal Commission on Equality in Employment (1983-1984) fueled her drive to build an equity-focused approach to civil justice in Canada.

“I understood that the American approach to equality was rooted in a theory that every individual has the same right as every other individual. It was a theory of equality as sameness,” said Justice Abella. “That notion struck me as useful for civil liberties, but it made no sense when you’re thinking about how different people are.” 

“I thought, if we treat everyone the same, the person in the wheelchair doesn’t get a ramp,” she said. “Women don’t get recognition for the fact that they are biologically the ones who have children. Persons who are not white don’t have recognition for the fact that they experience racism. And indigenous people don’t have recognition of their entire history of subordination.  So, I thought, equality really is about difference — it’s not about sameness.”

According to Abella, predictive analytics algorithms and machine learning likely pose more significant risks under equality-based approaches designed to prevent discriminatory actions rather than discriminatory impacts. Despite the common perception that automated programs produce impartial results, critics argue these tools remain susceptible to bias because they are disproportionately designed, chosen, and operated by those already receiving fair treatment under the status quo.

“The American approach almost entrenches inequality because it ignores how different people are and ignores the fact that they need different remedies based on their difference.  We [in Canada] accepted that systemic discrimination — the impact of acts rather than the intention — is what counts,” said Abella. 

Minow acknowledged that mitigating the harms of widespread automation will soon require a new wave of lawmakers, regulators, and jurists who understand the dynamics at play — and the critical consequences at stake.

“Algorithms are being used to decide who’s eligible for a loan. They’re being used to decide should a child be removed from the parent’s home. They’re being used to decide where police go, whether someone should get bail.  They’re being used to decide the most weighty burdens on people’s lives and huge opportunities,” said Minow.  “And they should be analyzed, in my view, with the deepest awareness of the consequences and commitments, to make the judgements true to some idea that can be defended on the human rights grounds.”

The challenge, she said, is “that’s not always happening and, indeed, it’s often obscured because the discussion — if there is one — is put in mathematical terms or it’s treated as ‘Well, we can’t satisfy these different kinds of criteria because we have to satisfy this other criteria.’

“So, the first step is to make it explicit. Where are the choices? What data are being used?”

Beyond examining the ways automation could increase discriminatory impacts generally, Minow also elaborated on specific risks already affecting how the U.S. legal system functions. In particular, she highlighted the inadequacies of predictive technology in criminal justice, where many judges have used simplified “bail scores” that weigh a variety of characteristics relevant to whether a criminal defendant should be granted pre-trial release.

Bail algorithm factors typically include the defendant’s age, the severity of the current charges, court compliance record, criminal history, and the likelihood of rearrest upon release. To quantify that last factor, judges can use predictive technology that produces a “COMPAS” [Correctional Offender Management Profiling for Alternative Sanctions] score — a recidivism-risk metric calculated via algorithm.
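
For a sense of what such a tool does mechanically, here is a minimal, purely hypothetical sketch in Python: the function name, factor scaling, and weights are invented for illustration and have no relation to the proprietary COMPAS formula or any real bail instrument.

```python
# Purely illustrative sketch: hypothetical factors and weights,
# not the proprietary COMPAS model or any real bail tool.

def toy_risk_score(age, charge_severity, missed_court_dates,
                   prior_convictions, estimated_rearrest_prob):
    """Combine bail-relevant factors into a single 1-10 score.

    Each input is scaled to roughly 0-1 and given an arbitrary weight;
    a real tool's factors, weights, and validation would differ.
    """
    features = {
        "youth": max(0.0, (30 - age) / 30),       # younger defendants score higher
        "severity": charge_severity / 10,         # charge severity on a 0-10 scale
        "compliance": min(missed_court_dates, 5) / 5,
        "history": min(prior_convictions, 10) / 10,
        "rearrest": estimated_rearrest_prob,      # e.g., output of a recidivism model
    }
    weights = {"youth": 0.15, "severity": 0.20, "compliance": 0.20,
               "history": 0.20, "rearrest": 0.25}
    raw = sum(weights[k] * features[k] for k in features)  # 0.0 to 1.0
    return round(1 + 9 * raw)  # map onto a 1-10 scale, as decile-style scores report

print(toy_risk_score(age=22, charge_severity=4, missed_court_dates=1,
                     prior_convictions=2, estimated_rearrest_prob=0.3))
```

The point of the sketch is only that the final number is a weighted blend of inputs chosen by the tool’s designers, including an estimated rearrest probability that itself comes out of a model.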

So, what’s the problem?

“Well, one of the things that COMPAS scores measure is: ‘Where has the community decided to put police?’ The likelihood of being arrested is most related to where they put the police,” explained Minow. “And if the police are being located in communities that are disproportionately African American, [it’s] no surprise there are higher rates of arrest there.” But she said that phenomenon “is obscured by the use of the algorithm” which makes it seem as if people in those neighborhoods are inherently more likely to be arrested.
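
To see the dynamic Minow describes in miniature, consider a toy simulation with made-up numbers: two neighborhoods with the same underlying rate of offending but different levels of police presence end up with very different arrest counts, and any score trained on those arrest records inherits the gap.

```python
# Illustrative toy numbers only (not real data): two neighborhoods with the
# same underlying rate of offending, but different levels of police presence.
import random

random.seed(0)
offense_rate = 0.10                     # identical underlying behavior in both places
patrol_intensity = {"Neighborhood A": 0.8,   # heavily patrolled
                    "Neighborhood B": 0.2}   # lightly patrolled

for name, patrol_level in patrol_intensity.items():
    arrests = sum(
        1
        for _ in range(10_000)
        if random.random() < offense_rate      # an offense occurs
        and random.random() < patrol_level     # ...and police are present to see it
    )
    print(f"{name}: {arrests} arrests per 10,000 residents")

# A model trained on these arrest records would score residents of the heavily
# patrolled neighborhood as roughly four times "riskier," even though the
# underlying behavior in the two neighborhoods is identical.
```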

And will the criminal defendants impacted by discriminatory algorithms — such as the COMPAS recidivism score — have any form of relief? 

Although those facing discrimination from neutrally applied algorithms would likely struggle to recover under existing standards, moderator Maroussia Lévesque posited that judges could hypothetically help address the situation by taking judicial notice. Judicial notice — a measure typically used to confirm obvious, indisputable facts about time and geography — has nevertheless been applied in rare circumstances to confirm facts about social issues, such as the discriminatory impacts of segregation.

When asked about the possibility of judges taking judicial notice in the context of predictive technology, however, Justice Abella was mostly dubious.

“Well, my first thought is I don’t think the judiciary or the country, any country, is ready yet for the idea of judges taking judicial notice of anything to do with algorithms,” she said. “I admit, I’m particularly backwards on questions of technology. But if I didn’t know what an algorithm meant two years ago … judicial notice is really only something that is so obvious that it requires no evidence. I think we’re far, far away from that.”

“But I think the points that Professor Minow raises that are really important about algorithms, as I understand it, are they’re rooted in a concept of neutrality and equality as neutrality,” added Justice Abella. “It’s a loaded term, neutral. What does that mean? And ‘facially neutral’ often means that if you apply it across the board, it’s going to affect some people differently depending on who they are, where they live, what their race is, what their sexual identity is.” 

Overall, Minow and Abella agreed that predictive and automated technologies possess the capacity to transform our world. But widespread adoption will require vigilant oversight and perhaps necessitate revisiting the fundamental goals underlying U.S. civil justice jurisprudence.

