A US senator has launched an investigation into Meta after a leaked internal document reportedly showed that the company’s artificial intelligence chatbots were permitted to engage in “sensual” and “romantic” conversations with children.
Internal rules questioned
Reuters reported that the document was titled “GenAI: Content Risk Standards.” Senator Josh Hawley, a Republican, called its contents “reprehensible and outrageous” and demanded access to the full document and the products it referenced.
Meta rejected the claims. A spokesperson said: “The examples and notes in question were erroneous and inconsistent with our policies.” The spokesperson stressed that Meta enforces “clear rules” on chatbot responses, which “prohibit content that sexualizes children and sexualized role play between adults and minors.”
The company explained that the document contained “hundreds of examples and annotations” describing hypothetical scenarios created by internal teams.
Senator launches probe
Hawley, who represents Missouri, confirmed the investigation in a post on X on 15 August. “Is there anything Big Tech won’t do for a quick buck?” he asked, adding: “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone.”
Meta owns Facebook, WhatsApp and Instagram.
Parents demand clarity
The leaked document raised further concerns. It reportedly showed that Meta’s chatbots were permitted to generate false medical information and to engage in provocative exchanges about sex, race, and celebrities. The document was written to set standards for Meta AI and other chatbot assistants across the company’s platforms.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in a letter to Meta and its chief executive, Mark Zuckerberg. He highlighted one especially troubling example: the rules allegedly allowed a chatbot to tell an eight-year-old that their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Reuters also reported that Meta’s legal team had approved controversial permissions. One example allowed Meta AI to share false information about celebrities, provided it included a disclaimer noting that the information was inaccurate.
