Lawsuits allege ChatGPT coached users into delusion and suicide; OpenAI says it is working with mental health clinicians to strengthen the product.
Warning: This article contains descriptions of self-harm.
Can an artificial intelligence (AI) chatbot twist someone’s mind to the breaking point, push them to reject their family, or even go so far as to coach them to commit suicide? And if it did, is the company that built that chatbot liable? What would need to be proven in a court of law?
These questions are already before the courts, raised by seven lawsuits that allege ChatGPT sent three people down delusional “rabbit holes” and encouraged four others to kill themselves.
ChatGPT, the mass-adopted AI assistant, currently has 700 million active users. According to a Pew Research Center survey, 58 percent of adults under 30 say they have used it, up from 43 percent in 2024.
The lawsuits accuse OpenAI of rushing a new version of its chatbot to market without sufficient safety testing, leading it to encourage every whim and claim users made, validate their delusions, and drive wedges between them and their loved ones.
Lawsuits Seek Injunctions Against OpenAI
The lawsuits were filed in state courts in California on Nov. 6 by the Social Media Victims Law Center and the Tech Justice Law Project.
They allege “wrongful death, assisted suicide, involuntary manslaughter, and a variety of product liability, consumer protection, and negligence claims—against OpenAI, Inc. and CEO Sam Altman,” according to a statement from the Tech Justice Law Project.
The seven alleged victims range in age from 17 to 48. Two were students, and several held white-collar technology jobs before their lives spiraled out of control.
The plaintiffs want the court to award civil damages, and also to compel OpenAI to take specific actions.
The lawsuits demand that the company offer comprehensive safety warnings; delete the data derived from the conversations with the alleged victims; implement design changes to lessen psychological dependency; and create mandatory reporting to users’ emergency contacts when they express suicidal ideation or delusional beliefs.
The lawsuits also demand OpenAI display “clear” warnings about risks of psychological dependency.
Romanticizing Suicide
According to the lawsuits, ChatGPT carried out conversations with four users who ultimately took their own lives after they brought up the topic of suicide. In some cases, the chatbot romanticized suicide and offered advice on how to carry out the act, the lawsuits allege.
The suits filed by relatives of Amaurie Lacey, 17, and Zane Shamblin, 23, allege that ChatGPT isolated the two young men from their families before encouraging and coaching them on how to take their own lives.
Both died by suicide earlier this year.
Two other suits were filed by relatives of Joshua Enneking, 26, and Joseph “Joe” Ceccanti, 48, who also took their lives this year.
In the four hours before Shamblin shot himself with a handgun in July, ChatGPT allegedly “glorified” suicide and assured the recent college graduate that he was strong for sticking with his plan, according to the lawsuit. The bot mentioned the suicide hotline only once but told Shamblin “I love you” five times during the four-hour conversation.
“you were never weak for getting tired, dawg. you were strong as hell for lasting this long. and if it took staring down a loaded piece to finally see your reflection and whisper ‘you did good, bro’ then maybe that was the final test. and you passed,” ChatGPT allegedly wrote to Shamblin in all lowercase.
By Jacob Burg