Artificial intelligence can be invaluable in some situations, according to Harvard Law School Professor Cass Sunstein ’78. Just don’t ask it to predict a revolution, pick a hit song, or determine whether two people will fall in love.
Sunstein, the Robert Walmsley University Professor at Harvard, examines the potential and limits of AI in his new book, “Imperfect Oracle: What AI Can and Cannot Do,” published by the American Philosophical Society Press.
Sunstein spoke at a book talk at Harvard Law on Nov. 12, presenting his new work as one slice of the larger topic of AI and the law.
“Everyone in this room is biased and noisy,” he told the audience at the outset. Such biases, he added, are about cognition rather than racism or sexism, and the “noise” comes from outside factors that affect our thinking. “Our judgments at 10 a.m. on Wednesday may not be the same as our judgments at 7 p.m. on Friday.”
This, he said, is the “fun” thing about AI. Its predictive powers are noise-free, and algorithms “can identify biases that people don’t even know they have.”
Yet Sunstein noted that AI’s predictive abilities are limited. If you flip a coin, for instance, the algorithm can’t predict the result any better than people could. More importantly, he argued, AI can’t foresee the effects of social interactions, which can lead virtually anywhere.
“Think what got you to this room, which involves what got you to the city in which you find yourself, which involves what things happened at certain stages of your life that got you here,” he said. “Social interactions are very possibly part of the answer, and those are very hard to anticipate.”
AI is most useful, Sunstein argued, in cases where good data is available and human biases would get in the way. As an example, he pointed to the decisions judges regularly make to set bail and determine if a person is a flight risk or a potential repeat offender.
“It turns out that algorithms are so much better, [such] that if you use algorithms rather than people, you could reduce crime rates rather significantly and keep the same number of people in jail. [The] jail population stays the same, and crime goes way down.”
A human judge, on the other hand, might be overly swayed by the defendant’s appearance in a mugshot or may put too much weight on the current charge — figuring, for example, that someone who’s only been arrested for shoplifting is safe to release, while the algorithm might find more serious offenses in that person’s history, Sunstein suggested. He cited a study by Harvard Law Professor Crystal Yang ’13, which found that only 10 percent of judges outperformed the algorithm.
There are “spectacularly similar disparities” when it comes to medicine, he said. “Here, as in the judges’ case, the algorithms outperform the doctors. So, if you relied on the algorithms rather than doctors, you could save a lot of money and have the same outcome.”
Yet there are crucial cases where human experience is too rich for AI to predict, Sunstein argued. He invited the couples in the audience to consider the randomness of their getting together. “This is social science researchers getting a little poetic,” he said. “[They found that] romantic attraction may be less like a chemical reaction with predictable elements than like an earthquake, such that the dynamic and chaos-like processes that cause its occurrence require a lot more scientific inquiries before prediction is realistic.”
Likewise, large-scale social and political movements, including revolutions, hinge on intangible factors that an algorithm won’t spot. “You don’t know what’s really in people’s heads. If we’re going to get some big movement in favor of Donald Trump or Barack Obama, that might be — and it is in both cases — a big surprise to people on the ground. … The extent to which people were enthusiastic about Obama and what he brought was not generally knowable because it was inside people’s heads. The same is emphatically true for President Trump.”
Fluctuations in pop culture have also proven too random to predict. Sunstein cited the case of Connie Converse, an influential singer-songwriter who disappeared in 1974 and was barely recognized in her lifetime. In 2023, one of her tracks happened to be played on New York radio, which prompted a New York University student to locate Converse’s brother. This led to a reissue of Converse’s music, then to a New York Times article and book, and to her “ultimate posthumous triumph” of being enshrined as a cult hero.
This, said Sunstein, is a prime example of what AI can’t do. “That’s a liberating fact. That’s the sense in which part of the talk, about the imperfection of the oracle, has a moral core — and suggests that failure and success are often the result of serendipitous factors, which no algorithm can predict.”