There was the phony robocall from President Joe Biden asking New Hampshire voters not to cast a primary ballot. There was the doctored image of Taylor Swift endorsing Donald Trump, and the fake Kamala Harris ad rife with misleading information. But did that content, altered with the help of artificial intelligence, change voters’ minds in this election?

It’s likely too soon to tell, according to two legal experts well-versed in law and politics in the United States and the United Kingdom who discussed the impact of AI on justice systems and democracy at a recent Harvard Law School talk, “AI, the Law, and the 2024 Election.”

During the lunchtime discussion, Nicholas Stephanopoulos, Kirkland & Ellis Professor of Law at Harvard Law School, and Sir Robert Buckland, former lord chancellor and justice secretary of the United Kingdom, agreed that AI can help streamline some legal processes as long as it is used with caution and careful human oversight. They also acknowledged that the impact of AI on the U.S. presidential election and its potential to affect future races need to be studied in greater detail.

“The known examples of AI-produced disinformation in the 2024 election are pretty paltry,” said Stephanopoulos, who has spoken and written about the voting shifts among the American electorate based on income, education level, race, and geography. He said it was too soon to know whether AI was also a factor in driving voter behavior more broadly, or whether the impact of AI-generated falsehoods was “greater than or different from the historical impact of ‘regular old political lies’ spread in the days before social media.”

While open to being convinced, Stephanopoulos said at the Nov. 20 session that he wanted “to see a lot more proof before I would leap to regulations of AI that I wouldn’t support for newspapers or television or speeches by politicians. This could be some kind of paradigm shift, but I don’t think that’s evident at this point, and so I would advise caution until we have more information.”

Buckland agreed, adding the caveat that people are becoming skeptical of everything due to the “background noise” of disinformation. “Whether it’s real or not, perception is everything. And there’s a whole cadre of people out there who will just not believe anything they hear or see, even though it’s patently, accurately true. And I think that’s deeply worrying,” said Buckland. “That means there’s a whole section of people [who] are very hard to reach.”

Discussing how AI might ease the severe backlog of cases in Britain and the burden on overstretched judges, Buckland doesn’t consider technology “a quick fix” but thinks its use “in a measured, ethical way could well help in the administration of justice quite significantly.”


Buckland, who is currently a senior fellow at Harvard Kennedy School studying the impact of AI and machine learning on the ethics of administrative justice, said he sees augmented decision-making — the use of technology to supply analysis, facts, and recommendations to decision makers — as one possible way forward. “Sentencing now in England and Wales is quite a formulaic exercise with guidelines that you have to follow. It’s a bit like a decision tree, and that can take time. Immediately, I think minds are turning to whether or not augmented decision-making can indeed help speed up that process for busy judges.”
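To make the analogy concrete, here is a minimal, hypothetical sketch of what a guideline-driven “decision tree” for sentencing might look like in code. The offence categories, culpability factors, and ranges below are invented for illustration; they are not the actual Sentencing Council guidelines for England and Wales, and nothing here reflects a system Buckland described.

```python
# A toy decision tree for a "formulaic" sentencing exercise.
# All categories, thresholds, and ranges are made up for illustration only.

def starting_range(offence_category: str, higher_culpability: bool, guilty_plea: bool) -> str:
    """Walk a simple decision tree to suggest a starting sentence range."""
    if offence_category == "category_1":      # most serious harm (hypothetical)
        range_ = "4-7 years custody" if higher_culpability else "2-4 years custody"
    elif offence_category == "category_2":
        range_ = "1-2 years custody" if higher_culpability else "26 weeks to 1 year custody"
    else:                                      # least serious harm (hypothetical)
        range_ = "community order" if higher_culpability else "fine"

    if guilty_plea:
        range_ += " (reduced for early guilty plea)"
    return range_

if __name__ == "__main__":
    print(starting_range("category_2", higher_culpability=True, guilty_plea=True))
```

The point of the sketch is simply that once the branching logic is fixed by guidelines, a tool can walk the tree and present a recommendation for a judge to review, which is the kind of time-saving "augmented decision-making" Buckland has in mind.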

And because such technology isn’t “classic machine learning,” it’s free from biased data and the false results known as AI hallucinations, Buckland added. But he also urged caution, noting that he has “already seen with older technologies how, with the human tendency to devolve and forget [and] just say ‘the machine will do it,’ miscarriages of justice can then happen.” He cited the infamous British Post Office scandal from the 2000s, which saw hundreds of postal workers wrongly prosecuted or convicted of theft and false accounting because of faulty accounting software. “And yet the stance of the post office was to say, ‘There’s nothing wrong with this system.’”

While stressing that human oversight of technology is critical, he added that, with proper supervision and transparency, basic algorithms could help decide minor civil cases, as long as defendants are made aware of their use and given the chance to appeal.

Stephanopoulos agreed that technology can play a role in some areas but worried that it’s far from an effective replacement for critical thinking in interpreting and applying the law. He referenced a recent paper in which researchers employed an AI model to decide an actual case taken from the International Criminal Tribunal for the Former Yugoslavia. Stephanopoulos said that regardless of the commands or sympathetic facts about the defendant that researchers fed the model, the response was always the same.

“Even when instructed to be a merciful, kind judge focused on practical consequences for the community, still the AI didn’t want to deviate away from the plain text of the statute and the precedent. [It] shows us that it’s not easy at all to get an AI to behave the way that a human judge does. Maybe that’s bad, maybe that’s good, but you can’t just clearly emulate human judging, at least at present, with AI.”

Stephanopoulos does back some forms of AI support, citing research that suggests certain algorithms are better than humans at making bail decisions. He pointed to work by Harvard Law School Professor Crystal Yang ’13, who found that, among the judges in her sample who used their discretion to override an algorithm predicting how likely someone was to engage in misconduct if released while awaiting trial, 90 percent underperformed the computerized recommendations.

“The algorithm was able to strike a better balance between minimizing imprisonment and also maximizing public safety than all but 5 or 10 percent of human judges,” said Stephanopoulos. And while one can’t ignore the risks or the need for improvements and regulations in AI, algorithms, and large language models, he added, “Once we think that we’re getting progress over the status quo, we should leap ahead.”
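As a rough illustration of the kind of comparison being described, the sketch below scores hypothetical defendants with a made-up risk model and tallies "errors" (unnecessary detentions plus releases followed by misconduct) for the algorithm and for a judge's actual decisions. The numbers, threshold, and error definition are all invented for illustration and are not drawn from Professor Yang's study.

```python
# Hypothetical data: (algorithm_risk_score, judge_released, committed_misconduct)
defendants = [
    (0.15, True,  False),
    (0.85, True,  True),    # judge overrode a high-risk recommendation
    (0.20, False, False),   # judge detained despite a low predicted risk
    (0.10, True,  False),
]

RELEASE_THRESHOLD = 0.5  # made-up cutoff: the algorithm recommends release below this score

# An "error" here means either releasing someone who went on to commit misconduct,
# or detaining someone who would not have.
algo_errors = sum(
    1 for risk, _, misconduct in defendants
    if (risk < RELEASE_THRESHOLD) == misconduct
)
judge_errors = sum(
    1 for _, released, misconduct in defendants
    if released == misconduct
)

print(f"algorithm errors: {algo_errors}, judge errors: {judge_errors}")
```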

Yet combatting the types of lies that permeate social media and can undercut order remains challenging. The United Kingdom recently passed the Online Safety Act along with national security legislation, but Buckland worries the former doesn’t focus enough on the threat to democracy, and the latter doesn’t adequately address the risk from non-state actors generating and spreading false information. Buckland said enlisting the corporate sector and using due diligence are critical parts of any solution.

“I think the work that Adobe and others are doing on content authentication, watermarking, all those things that prevent the problem from happening at the source, seem to me the best way,” said Buckland, “plus good old-fashioned fact checking,” with the help of AI.

The 45-minute discussion, sponsored by the Harvard Law Students Irish Heritage Association, also touched on the importance of keeping defendants’ identities out of the press in some cases, the threat of future political polarization, whether digital technologies should be held to a higher standard than “the human element,” and the need to address bias in the historical data used in generative AI models.

When it comes to using AI to help streamline legal processes, Buckland urged his listeners to remember that technology could prove an effective tool when paired with proper oversight, as long as the law continues to develop in a “human way alongside society’s development.”

“Justice is not justice if it is just a desiccated, calculating machine,” said Buckland. “It has to reflect the changing morals of our society.”

