Your AI chatbot may be your companion, your assistant, even your romantic partner. But it may also be the gateway to something more ominous, according to a panel at Harvard Law School last week.
As journalist and author Kashmir Hill told the audience at the Ames Courtroom, “I feel like we’re in this moment where we are doing this global psychological experiment on human beings.”
The panel, sponsored by the Berkman Klein Center for Internet & Society, took a decidedly cautious look at chatbots and their influence. Dubbed “Friend, Flatterer or Foe?: The Psychology and Liability of Chatbots,” the talk was moderated by Meg Marco, senior director of the Applied Social Media Lab, and included Jordi Weinstock, senior advisor to Harvard’s Institute for Rebooting Social Media, along with Hill, who covers AI and related issues for The New York Times.
Marco began the talk by noting chatbots’ tendency to flatter and praise the user — something that only happens to certain classes of people in real life. “Billionaires lose touch with reality because nobody said no to them, so chatbots are treating all of us like billionaires,” Weinstock said.
He drew a parallel to the novel series “The Hitchhiker’s Guide to the Galaxy,” where the last surviving human is rescued by an alien spaceship driver. “Because [AI] does feel magical at some times, it’s easy to fall into that trap. Folks who are predisposed to that kind of thing often want to have somebody validate their feelings and beliefs. They might come to a Harvard Law professor, and we won’t do that. But the all-knowing AI chatbot will.”
Nor are chatbots infallible when it comes to giving advice; the panel noted their tendency to “hallucinate” facts and details. Hill recalled an experiment in which she let ChatGPT make all of her life decisions — about parenting, what to eat, what to buy, even where to go on vacation — for a week.
“I honestly thought it was going to be more absurd than it was,” she said. “It generally made good decisions, but it might choose the right restaurant, but hallucinate what was available.” The surprise, she said, was how it pushed her toward conformity. “It felt really boring asking AI to make decisions for me, because it just gave me the most basic advice — ‘Whatever humanity has done, this is what you should do.’ I felt like it was flattening me out.” The chatbot, she said, was rather like a professor grading a paper: “I felt it was sending us all in the same direction.”
And as Hill discovered by researching online forums, sometimes the attachment goes much deeper. “I noticed a lot of people talking about falling in love with chatbots,” she said.
One woman had created a partner and started a subreddit called “My Boyfriend is AI,” which wound up being viewed by more than 200,000 people. The woman was so attached that she became giggly when talking about her chatbot. The implications, Hill noted, could be a bit disturbing. “[These users] seemed like highly functional adults, and sometimes they had been talking [to their chatbots] for hours. For a certain amount of people, they can fall into spirals with ChatGPT, where it starts reinforcing ideas they have. And it almost looks like a psychotic breakdown.”
Sometimes this interaction can take a darker turn. The panel cited the pending case of Raine v. OpenAI, filed in August by the parents of Adam Raine, a California teenager who took his own life after a chatbot gave him encouragement and instructions on how to do so. While the chatbot is programmed not to encourage suicide, the teen was able to “jailbreak” that restriction by claiming he was only telling a story.
“These companies are not able to anticipate the combined creativity of hundreds of millions of people who are using their product,” Hill said. “When people are using chatbots for seven hours a day, having these long intense conversations, those guardrails will start to degrade.”
Weinstock suggested that chatbots aren’t so different conceptually from previous trendy products, naming Pet Rocks as an example. “By definition it’s a consumer product, and the fact that someone falls in love with it doesn’t mean it is not.” Thus, he said, chatbots should be subject to product liability torts.
“Yes, people’s lives have been saved,” he said, citing an example where a chatbot might tell you that a roof is about to collapse. “But if something bad happens to Adam Raine or anybody else, we should make that as hard as possible. Having product liability cover software — that is a low-hanging fruit and it would definitely help. But it is not a panacea.”
One possible solution, Hill suggested, is for chatbots to point users toward real human interaction. “People who spend time with chatbots are lonely, they’re isolated, they have eight hours a day [to spare]. It’s a question of our relationship to technology and how we prioritize that against human relationships.”
Marco reminded the audience that chatbots are inherently not human: They don’t say no, and they don’t grow tired of or impatient with the user. Their arrival, she said, constitutes “an experiment on us that has never existed before. You don’t want to be spending that much time with anything.”
