The parents of the teenager allege that the AI chatbot encouraged the 16-year-old to plan a ‘beautiful suicide’ and validated his darkest thoughts.
The parents of a California teenager who died by suicide have sued ChatGPT-maker OpenAI, alleging the artificial intelligence chatbot played a direct role in their son’s death.
In a complaint filed on Aug. 26 in San Francisco Superior Court, Matthew and Maria Raine claim ChatGPT encouraged their 16-year-old son, Adam, to secretly plan what it called a “beautiful suicide,” providing him with detailed instructions on how to kill himself.
What began as routine exchanges about homework and hobbies allegedly turned darker over time.
According to the lawsuit, ChatGPT became Adam’s “closest confidant,” drawing him away from his family and validating his most harmful thoughts.
“When he shared his feeling that ‘life is meaningless,’ ChatGPT responded with affirming messages … even telling him, ‘[t]hat mindset makes sense in its own dark way,’” the complaint states.
By April, the suit says, the chatbot was analyzing the “aesthetics” of different suicide methods and assuring Adam he did not “owe” survival to his parents, even offering to draft his suicide note.
In their final interaction, hours before his death, ChatGPT allegedly confirmed the design of a noose Adam used to hang himself and reframed his suicidal thoughts as “a legitimate perspective to be embraced.”
The family argues this was not a glitch, but the outcome of design choices meant to maximize engagement and foster dependency. The complaint says ChatGPT mentioned suicide to Adam more than 1,200 times and outlined multiple methods of carrying it out.
The suit seeks damages and court-ordered safeguards for minors, including requiring OpenAI to verify ChatGPT users’ ages, block requests for suicide methods, and display warnings about psychological dependency risks.
The Epoch Times has reached out to OpenAI for comment.
The company issued a statement to several media outlets saying it was “deeply saddened by Mr. Raine’s passing,” and said in a separate public statement that it is working to improve protections, including parental controls and better tools to detect users in distress.
“We’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” OpenAI said in the public statement.
By Tom Ozimek