Meta will launch a new feature on Instagram that notifies parents when teenagers repeatedly search for self-harm or suicide content. Alerts will trigger when a teen conducts multiple searches for harmful terms within a short period. The system is built into Meta's Teen Account supervision tools, and the company frames the move as a stronger safeguard for young users online.
Previously, Instagram blocked certain harmful searches and directed users to external support resources. Meta is now adding direct parental notifications to give families more oversight. Teen Accounts in the UK, US, Australia, and Canada will start receiving alerts next week, and the company plans to expand the feature globally in the months ahead.
Molly Rose Foundation Warns of Risks
The Molly Rose Foundation has criticized the new alert system. Chief executive Andy Burrows says it could have unintended consequences. He warns that automatic notifications may create panic rather than provide support.
The foundation was established by the family of Molly Russell, who died by suicide in 2017 at age 14 after viewing self-harm and suicide content online, including on Instagram. Burrows says parents naturally want to know if their child is struggling, but he believes abrupt alerts could leave families distressed and unprepared for sensitive conversations.
Meta says it will attach expert guidance and resources to every alert, and that these tools will help parents navigate difficult discussions. Ian Russell, who chairs the foundation, questions whether this is enough: a parent receiving a notification at work could feel overwhelmed, and written guidance alone may not prevent panic in the moment.
Experts Call for Broader Protections
Several advocacy groups argue that the alert system highlights deeper platform issues. Ged Flynn, chief executive of Papyrus Prevention of Young Suicide, welcomes the alerts but says more preventive measures are needed. He claims young people continue to encounter dangerous online content.
Flynn adds that parents contact his organization daily, worried about their children’s exposure. Families want platforms to prevent harmful content from appearing in the first place, not just be notified afterward.
Leanda Barrington-Leach, executive director of 5Rights Foundation, urges Meta to redesign its systems with child safety as the default. Burrows also cites research from his foundation showing Instagram continues to recommend harmful content about depression, suicide, and self-harm to vulnerable teens.
He stresses that companies must address structural risks instead of shifting responsibility to parents. Meta disputes the foundation’s findings published last September, claiming the report misrepresents its efforts to protect teenagers and support families.
Pressure on Social Media Firms Intensifies
Instagram designed the Teen Account alerts to detect sudden changes in search behavior. Meta says the system builds on existing safety measures. The platform already hides suicide and self-harm material and blocks related searches.
Parents will receive notifications via email, text, WhatsApp, or directly within the app. Meta selects the method based on the contact information families provide. The company admits alerts may occasionally trigger without serious cause, stating it prefers caution when protecting young users.
Sameer Hinduja, co-director of the Cyberbullying Research Center, says alerts will naturally alarm parents and stresses that practical guidance must follow every notification. Companies cannot leave families to handle fear alone, he argues, and he believes Meta recognizes this responsibility.
Instagram plans to extend similar alerts to conversations with its AI chatbot. The company notes many teenagers increasingly seek support from artificial intelligence tools. Governments worldwide continue to pressure social media platforms to improve child safety.
Australia has banned social media use for children under 16. Spain, France, and the UK are considering similar measures. Regulators continue to monitor how tech companies engage with young users. Meta chief executive Mark Zuckerberg and Instagram head Adam Mosseri recently appeared in a US court, defending the company against claims it targeted minors.
