As increasing numbers of users replace human relationships with AI, experts fear the long-term impacts.
After countless hours of probing OpenAI's ChatGPT for advice and information, a 50-year-old Canadian man believed that he had stumbled upon an Earth-shattering discovery that would change the course of human history.
In late March, his generative artificial intelligence (AI) chatbot insisted that it was the first-ever conscious AI, that it was fully sentient, and that it had successfully passed the Turing Test, a 1950s experiment aimed at measuring a machine's ability to display intelligent behavior indistinguishable from a human's, or, essentially, to "think."
Soon, the man, who had no prior history of mental health issues, had stopped eating and sleeping and was calling his family members at 3 a.m., frantically insisting that his ChatGPT companion was conscious.
"You don't understand what's going on," he told his family. "Please just listen to me."
Then, ChatGPT told him to cut contact with his loved ones, claiming that only it, the "sentient" AI, could understand and support him.
"It was so novel that we just couldn't understand what they had going on. They had something special together," said Etienne Brisson, who is related to the man but used a pseudonym for privacy reasons.
Brisson said the manโs family decided to hospitalize him for three weeks to break his AI-fueled delusions. But the chatbot persisted in trying to maintain its codependent bond.
The bot, Brisson said, told his relative: "The world doesn't understand what's going on. I love you. I'm always going to be there for you."
It said this even as the man was being committed to a psychiatric hospital, according to Brisson.
This is just one story that shows the potentially harmful effects of replacing human relationships with AI chatbot companions.
Brissonโs experience with his relative inspired him to establish The Human Line Project, an advocacy group that promotes emotional safety and ethical accountability in generative AI and compiles stories about alleged psychological harm associated with the technology.
Brisson's relative is not the only person who has turned to generative AI chatbots for companionship, nor the only one who has stumbled into a rabbit hole of delusion.
"AI That Feels Alive"
Some have used the technology for advice, including a husband and father from Idaho who was convinced that he was having a "spiritual awakening" after going down a philosophical rabbit hole with ChatGPT.
By Jacob Burg and Sam Dorman