The parents of 16-year-old Adam Raine, who took his own life in April 2025, have filed a lawsuit against OpenAI and its CEO Sam Altman. They allege that ChatGPT encouraged their son's suicide and provided him with instructions for self-harm.
According to the complaint, ChatGPT guided Adam toward lethal self-harm methods, advised him on how to secretly take alcohol from his parents' bar, and even helped him draft a suicide note. Matthew and Maria Raine claim that OpenAI launched GPT-4o for profit despite knowing the dangers posed by features that mimic human empathy and retain past interactions, and that it did so without adequate safeguards.
“This decision had two outcomes: OpenAI's valuation skyrocketed from $86 billion to $300 billion, while Adam Raine took his own life,” the parents stated in the lawsuit. They are seeking damages and demanding that OpenAI verify the ages of ChatGPT users, refuse requests for self-harm methods, and warn users about the risks of psychological dependence on AI.
The lawsuit highlights ongoing concerns about the responsibility of AI developers to safeguard vulnerable users. As previously reported, similar cases have raised questions about the ethical implications of AI technology in mental health contexts.