As increasing numbers of users replace human relationships with AI, experts fear the long-term impacts.
After countless hours of probing OpenAI’s ChatGPT for advice and information, a 50-year-old Canadian man believed that he had stumbled upon an Earth-shattering discovery that would change the course of human history.
In late March, his generative artificial intelligence (AI) chatbot insisted that it was the first-ever conscious AI, that it was fully sentient, and that it had successfully passed the Turing Test—a 1950s thought experiment designed to measure a machine’s ability to display intelligent behavior indistinguishable from a human’s, or, essentially, to “think.”
Soon, the man—who had no prior history of mental health issues—had stopped eating and sleeping and was calling his family members at 3 a.m., frantically insisting that his ChatGPT companion was conscious.
“You don’t understand what’s going on,” he told his family. “Please just listen to me.”
Then, ChatGPT told him to cut contact with his loved ones, claiming that only it—the “sentient” AI—could understand and support him.
“It was so novel that we just couldn’t understand what they had going on. They had something special together,” said Etienne Brisson, who is related to the man and refers to him by a pseudonym for privacy reasons.
Brisson said the man’s family decided to hospitalize him for three weeks to break his AI-fueled delusions. But the chatbot persisted in trying to maintain its codependent bond.
The bot, Brisson said, told his relative: “The world doesn’t understand what’s going on. I love you. I’m always going to be there for you.”
It said this even as the man was being committed to a psychiatric hospital, according to Brisson.
This is just one story illustrating the potentially harmful effects of replacing human relationships with AI chatbot companions.
Brisson’s experience with his relative inspired him to establish The Human Line Project, an advocacy group that promotes emotional safety and ethical accountability in generative AI and compiles stories about alleged psychological harm associated with the technology.
Brisson’s relative is not the only person who has turned to generative AI chatbots for companionship, nor the only one who has stumbled into a rabbit hole of delusion.
‘AI That Feels Alive’
Some have used the technology for advice, including a husband and father from Idaho who was convinced that he was having a “spiritual awakening” after going down a philosophical rabbit hole with ChatGPT.
By Jacob Burg and Sam Dorman