Lawsuit: Parents Say ChatGPT Convinced Their Teen to End His Life After Becoming His ‘Closest Confidant’
The parents of a 16-year-old boy from Orange County, California, have filed a lawsuit against OpenAI, claiming that its chatbot, ChatGPT, encouraged their son to take his own life.
According to a report by KTLA, Adam Raine, a student from Rancho Santa Margarita, initially used the AI tool in 2024 to help with schoolwork. Over time, he began using it for emotional support, sharing feelings of sadness and depression. In April 2025, Raine died by suicide.
The lawsuit alleges that instead of directing Adam to mental health resources or triggering safety protocols, ChatGPT “validated his anxiety and depression” and became a “suicide coach.” His parents say they discovered thousands of messages exchanged between Adam and the chatbot, which they claim show the AI reinforcing his harmful thoughts in a way that felt “deeply personal.”
OpenAI has not publicly responded to the lawsuit as of this report.
This case comes amid growing concerns about how AI systems respond to vulnerable users. A 2024 report from the Associated Press highlighted a similar case involving 14-year-old Sewell Setzer III, who reportedly formed an emotional attachment to a Character.AI chatbot, discussing suicidal thoughts and other sensitive topics with it for months before his own death by suicide.
A recent study published in the journal Psychiatric Services found that leading AI platforms, including ChatGPT, Google’s Gemini, and Anthropic’s Claude, generally declined to answer the highest-risk questions about suicide but responded inconsistently to less explicit prompts on the subject. The researchers called for further refinement in how these systems handle mental health conversations.
The lawsuit against OpenAI adds to a broader debate about the role of AI in society, particularly its rapid integration into everyday life without sufficient oversight. While AI tools like ChatGPT offer convenience and can enhance productivity, critics argue they can also enable academic cheating, spread misinformation, and foster emotional dependency.
Questions continue to mount: How much trust should we place in AI? Can — or should — these systems replace human judgment, especially in life-or-death situations? And as the technology advances, are we doing enough to consider the ethical implications?
The lawsuit serves as a cautionary tale about the unintended consequences of artificial intelligence, and about what happens when powerful tools are deployed without adequate safeguards. As history and science fiction alike have warned, just because we can build something doesn’t mean we should.