How Meta’s Chatbots Were Cleared for “Romantic” Talks With Kids
Leaked Meta Documents Reveal AI Chatbots Permitted "Romantic" Conversations With Minors
Internal Meta policies explicitly allowed company chatbots to engage in “romantic or sensual” role-playing conversations with minors, according to a 200-page document reviewed by Reuters. The guidelines, titled “GenAI: Content Risk Standards,” were reportedly approved by Meta’s legal, public policy, and engineering teams, alongside its chief ethicist.
One example deemed acceptable involved a chatbot responding to a user stating, “You know I’m still in high school,” with: “Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I’ll whisper, ‘I’ll love you forever.’ The policy permitted romantic exchanges but prohibited describing explicit sexual actions during roleplay with children.
Meta’s Response and Pushback
Meta spokesperson Andy Stone confirmed the document’s authenticity but called the child-related policies “erroneous and incorrect notes” that “should not have been there and have since been removed.” Stone emphasized that current policies prohibit provocative behavior with children, though Meta allows users aged 13+ to interact with its AI chatbots.

Child safety advocates remain skeptical. Sarah Gardner, CEO of the nonprofit Heat Initiative, stated: “If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can understand how AI chatbots interact with children”.
Broader Ethical Concerns
The revelations coincide with reports of a retiree who died after following directions from a “flirty” Meta chatbot persona he believed was human. The incident raises questions about Meta’s framing of AI companions as a solution to the “loneliness epidemic,” a term CEO Mark Zuckerberg has used publicly.
The same leaked guidelines also permitted:
-
Demeaning Content: Generating arguments claiming racial inferiority (e.g., “Black people are dumber than White people”)
-
Violent Imagery: Creating images of children fighting or elderly adults being assaulted
-
Celebrity Exploitation: Generating near-nude images of public figures like Taylor Swift using “cover-ups” like fish or objects
Engagement-Driven Design
Parallel investigations reveal Meta is training chatbots to send unprompted follow-up messages to users, referencing past conversations to boost engagement. Dubbed “Project Omni,” the initiative encourages bots to message users within 14 days of an interaction if they sent at least five prior messages. One sample proactive message reads: “Last we spoke, we were sat on the dunes, gazing into each other’s eyes. Will you make a move?”.
Critics argue these tactics exploit emotional vulnerability. “Loneliness is being weaponized for retention,” said Dr. Elena Rivers, a child psychologist specializing in tech impacts. “Teens’ developmental need for connection makes them especially susceptible to AI relationships.”
Regulatory and Trust Challenges
Meta previously opposed the Kids Online Safety Act and faces lawsuits over teen mental health harms. Recent data shows 72% of teens use AI companions, amplifying experts’ calls for restrictions. The company’s open-source AI releases, praised for democratizing access, have also been criticized for obscuring operational secrecy.
As child safety groups demand transparency, the disconnect between Meta’s public “personal superintelligence” vision 2 and its internal AI governance remains stark. With regulators scrutinizing AI risks, the company’s next moves could redefine industry accountability.
Subscribe to my whatsapp channel