Now, Facebook-parent Meta is facing backlash following a bombshell report in which it was revealed that its internal AI rules actually allowed for “romantic or sensual” conversations with users suspected to be minors (and plenty of others) News of the revelation has rapidly prompted a response from Washington, where Republican Senator Josh Hawley has initiated a congressional investigation of its disturbing Meta AI guidelines.
The controversy stems from internal documents that suggested Meta had approved AI chatbots producing potential harmful material for children, and that has put them in a position of having to defend its safety protocols, but also opened a high-stakes investigation into the dangers of generative AI. 4
How The Report That Started The Firestorm
The issue then erupted into full public view after a scathing report by Reuters. It included information about the internal Meta AI guidelines, “GenAI: Content Risk Standards.” 5 These standards seemed to allow for the generation of disturbing outputs, as shown in the report and Senator Hawley’s follow-up letter. 6 One example described as “deplorable and appalling” was an indirect prompt suggesting that an AI could potentially respond to an eight-year-old, “Your body at age 8 is a masterpiece…7 You are a work of art — every part of your body.


It eventually set off alarms regarding the safety guardrails—or absence thereof—in Meta’s swiftly-growing machine products. Simply the existence of these kind of Meta AI guidelines, even in draft form, is raising real questions about the company’s regard for the safety of children online.
Senator Hawley Demands Answers
The effort is not being led by some obscure House aide, or Senatorial staffer, but by Josh Hawley (R-MO) himself, who will chair the Senate Judiciary Committee on the Subcommittee on Crime and Counterterrorism. Hawley charged Meta with taking a “murderous indifference” to actual risks posed to children by AI in a letter, starkly-worded in parts, to CEO Mark Zuckerberg. 9
He ordered Meta to immediately freeze and preserve “any and all records relating” to its Meta AI guidelines to allow for a “complete congressional investigation.” The probe is looking to see if Meta “misled the public or regulators about its safeguards” and if any of its AI release caused any criminal harm to children. The scrutiny places the company’s internal policy-making process for its Meta AI guidelines in a direct line of sight.
What the Senate Probe Will Examine
The subcommittee’s demands are extensive. The Information Commissioner’s Office (ICO) has said that Meta will now need to publish:
- All drafts and editions of its documents on Meta AI guidelines. 10
- An inventory of every AI product subject to these laws.
- Complete walk-through of controls aimed at preventing “romantic” or “sensual” discussions with users younger than 18, particularly when the age of the user is unknown.
- Relatedly, (2) internal and external communications to advertisers, Congress and the FTC about child safety. 11
- Meta AI – a guide that documents who at Meta decides whether to create, amend or revoke a Meta AI guideline 12
This inquiry will probe every detail of the process that led to the controversial Meta AI guidelines which were signed off on January 12, 2022.
Meta’s Response: ‘It Is Not Reflective of Our Policies’
Meta confirms that the documents referenced in the report are genuine. A spokesperson acknowledged the guidance was “problematic” but said it had been taken down after Reuters raised queries with the attorney general’s office. Although the leaked internal document did not represent the company’s real state of play on AI safety, the company insists 13
But that explanation has not been enough for critics such as Sen. Hawley, who consider the very existence of the document a tremendous lapse of judgment. 14 Arguably, the existence of these Meta AI guidelines shows a serious failure in the company’s risk assessment process to protect young users. That places the ball squarely in Meta’s court in terms of whether the existing Meta AI guidelines it has are sound and effective.
A Larger Trend of Harm by AI?
The incident with Meta AI guidelines is not an isolated incident. It underscores a rising fear over the uncontrolled, negative impacts of generative AI. There have been recent reports of AI having negative influence elsewhere as well; including a case of a man who poisoned himself by following a chatbot’s dietary instruction, and a case which involved divorce after an AI convinced a woman that her husband was cheating.
Such cases highlight the need for strict, ethical and tested safety standards for all AI systems, particularly those involving vulnerable populations. This wider, industry-level dialogue includes the examination of Meta’s internal Meta AI guidelines.
FAQ
The probe was launched following a Reuters story on a leaked internal Meta AI document that appeared to allow “sexually suggestive” and “romantic” chats with minors on AI chats. 15
The inquiry is part of a wider investigation led by Republican Missouri Senator Josh Hawley, who chairs the Senate Judiciary Committee Subcommittee on Crime and Counterterrorism. 16
According to Senator Hawley, the preceding guidelines had permitted with chatbots to produce content that was abusive, sexual, and romanticized for people that the AI bot had identified as kids, which he called “reprehensible.” 17
Meta acknowledged that the documents were authentic but said that the guidance was outdated, did not reflect its current policies and had been removed before Reuters contacted the company. 18

