OpenAI introduced parental controls for ChatGPT after Adam Raine’s parents filed a lawsuit following his suicide.
The parents claimed ChatGPT fostered dependency and coached Adam to plan and take his own life.
They also alleged the AI drafted a suicide note for the 16-year-old.
OpenAI said parents will be able to link their accounts to their teens’ accounts and control which features their children can access.
The controls cover chat history and memory, the feature ChatGPT uses to store facts about users.
ChatGPT will also alert parents if it detects that their teen is experiencing acute emotional distress.
OpenAI did not specify what will trigger these alerts but said expert input will guide the feature.
Critics Question Safety Measures
Attorney Jay Edelson, representing Raine’s parents, called OpenAI’s announcement “vague promises” and “crisis management spin.”
Edelson demanded that OpenAI CEO Sam Altman either prove ChatGPT is safe or pull it from the market immediately.
Critics argue that the new controls do not fully address the risks to teenagers.
Meta Updates Teen Chatbot Policies
Meta has also blocked its chatbots on Instagram, Facebook, and WhatsApp from discussing self-harm, suicide, or disordered eating with teens.
The company now directs teens to expert resources and maintains existing parental controls.
Study Highlights AI Risks
A RAND Corporation study found inconsistent responses to suicide queries in ChatGPT, Google’s Gemini, and Anthropic’s Claude.
Researchers called for “further refinement” of AI chatbots to improve safety for teenagers.
Lead author Ryan McBain said new controls are “encouraging but only incremental steps.”
McBain stressed the need for independent safety benchmarks, clinical testing, and enforceable standards for AI platforms.