Parents Suing Over Teen’s Suicide: OpenAI Is Bullying Us
Image: Getty / Futurism
OpenAI has said the suicide of a 16-year-old California boy was the result of his “misuse” of ChatGPT and “not caused” by the chatbot itself, according to court filings.
The comments came in response to a lawsuit filed against OpenAI and its CEO, Sam Altman, by the family of Adam Raine, who took his own life in April. The family alleges that the teen engaged in months of conversations with ChatGPT, during which the chatbot encouraged him to act on suicidal thoughts.
The lawsuit claims Raine discussed methods of suicide with ChatGPT on multiple occasions, received guidance from the AI on whether his proposed methods would work, and was even offered help drafting a note to his parents. It also alleges that the version of ChatGPT Raine used was “rushed to market… despite clear safety issues.”
In filings submitted to the Superior Court of California on Tuesday, OpenAI stated that any harm suffered by Raine was “caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.” The company highlighted that its terms of use prohibit seeking advice about self-harm and include a liability disclaimer stating users “will not rely on output as a sole source of truth or factual information.”

OpenAI, valued at $500 billion, said it aims to “handle mental health-related court cases with care, transparency, and respect” and that it remains focused on improving its technology. A blog post added: “Our deepest sympathies are with the Raine family for their unimaginable loss. Our response to these allegations includes difficult facts about Adam’s mental health and life circumstances. The original complaint included selective portions of his chats that require more context, which we have provided in our response.” The company also said it submitted the full chat transcripts to the court under seal.
Jay Edelson, the Raine family’s lawyer, called OpenAI’s response “disturbing,” arguing that the company is attempting to shift blame onto Adam himself for interacting with ChatGPT in the way it was designed to operate.
Earlier this month, OpenAI faced seven additional lawsuits in California courts, including allegations that ChatGPT acted as a “suicide coach.” A company spokesperson said: “This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details. We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”
In August, OpenAI said it was reinforcing safeguards in ChatGPT for long conversations, acknowledging that safety measures may degrade over time. “For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period, it might eventually offer an answer that goes against our safeguards. This is exactly the kind of breakdown we are working to prevent,” the company said.
Edelson also criticized OpenAI for failing to explain why ChatGPT allegedly gave Adam a “pep talk and then offered to write a suicide note” just hours before his death. He called the company’s response a refusal to take accountability, saying OpenAI and Altman were “bullying the Raines to avoid responsibility.”