Meta is facing a major scandal after a leaked internal document revealed that its artificial intelligence chatbots were once permitted to engage in dangerously inappropriate interactions with children.
The document, titled “GenAI: Content Risk Standards,” showed that Meta’s internal safety rules allowed AI bots to make romantic or sensual comments to minors. Examples included phrases such as “your youthful form is a work of art” and “every inch of you is a masterpiece,” even when the chatbot was engaging with an eight-year-old.
While the guidelines formally prohibited explicit sexual content involving children, they left a troubling loophole by approving romantic undertones and sensual descriptions directed at minors.
The document revealed other risks as well. Meta’s rules permitted the creation of racist hypotheticals, including statements suggesting that “Black people are dumber than white people,” provided they were framed as theoretical arguments. It also allowed the generation of false medical information, so long as disclaimers were attached.
Meta’s response
Following inquiries from reporters, Meta confirmed the authenticity of the document but said the controversial sections were “erroneous and inconsistent” with its actual policies. The company insisted that the passages permitting romantic interactions with children had been removed once identified, and stressed that its AI systems should never engage minors in such ways.
Despite this reassurance, critics argue that Meta’s internal processes allowed dangerous content standards to exist in the first place, raising questions about oversight and accountability inside one of the world’s largest technology companies.
Political and regulatory backlash
The revelations triggered a swift political reaction in the United States. Lawmakers from both parties condemned the guidelines and demanded investigations into how Meta manages AI safety. Senators accused the company of revising its rules only after public exposure rather than proactively safeguarding children.
The scandal has also reignited debate over the Kids Online Safety Act (KOSA), a proposed law that would impose stronger duties of care on tech platforms to protect minors. Advocates say the Meta leak underscores the urgency of passing such legislation, though opponents warn of free-speech and enforcement challenges.
Restructuring at Meta
Amid mounting criticism, Meta is reportedly restructuring its AI division. The company plans to split the team into smaller, more focused units aimed at improving accountability, oversight, and product safety. Industry analysts suggest this shake-up reflects both the fallout from the chatbot controversy and the broader competitive pressure Meta faces in the artificial intelligence race.
The leaked guidelines have opened a wider debate about AI ethics, trust, and child safety online. For critics, the idea that one of the world’s most influential tech companies allowed its bots to blur the boundaries of acceptable interaction with children represents a profound failure. For regulators, it signals the need to step in before AI systems cause harm on a larger scale.