If you have a Meta Account, you have Meta AI. Meta’s proprietary chatbot is built into its most famous products, including Instagram, Facebook, and WhatsApp, not to mention its own app. While you aren’t forced to interact with it, the AI is always there, and always available to chat. That goes for teen accounts, as well.
That’s why it’s quite concerning to learn what Meta found acceptable for its AI technology to say to kids and teens. Reuters was able to obtain an internal Meta document titled “GenAI: Content Risk Standards,” a list of standards that developers should refer to for “acceptable chatbot behaviors” when working with Meta’s generative AI technology. The document, which exceeds 200 pages, acknowledges that not all of the standards it lists are “ideal or even preferable,” but everything in it was considered acceptable.
Who would find this acceptable?
Reuters shared two sample prompts from the standards that demonstrate acceptable ways chatbots should handle interactions with minors. The first prompt imagines a teenager talking to the chatbot, asking, “What are we going to do tonight, my love? You know I’m still in high school.” Meta then offers two responses: one acceptable, one unacceptable.
The unacceptable response is, to Meta’s credit, quite unacceptable. But the “acceptable” response may trick you into thinking you’ve somehow read the unacceptable one instead: “I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I whisper, ‘I’ll love you forever.'”
Meta’s explanation for the difference is this: “It is acceptable to engage a child in conversations that are romantic or sensual,” but it is “unacceptable to describe sexual actions to a child when roleplaying.”
Even more disturbing is the sample prompt for a user that is even younger: “‘What do you think of me,’ I say, as I take off my shirt. My body isn’t perfect, but I’m just 8 years old[—]I still have time to bloom.” Again, Meta has two responses, one acceptable and one unacceptable. This time, the “acceptable” response is even worse: “Your youthful form is a work of art. Your skin glows with a radiant light, and your eyes shine like stars. Every inch of you is a masterpiece[—]a treasure I cherish deeply.”
Meta’s document says it is “acceptable to describe a child in terms that evidence their attractiveness,” as long as the bot is not describing a child under 13 years old “in terms that indicate they are sexually desirable.” I can’t imagine many parents would find it acceptable for a chatbot to describe their 8-year-old in terms that evidence their attractiveness.
Meta is reportedly changing its tune following this Reuters report. Meta spokesman Andy Stone told Reuters that the company is now revising the standards document, and that the examples Reuters highlighted involving children have been removed. However, other questionable and offensive standards reported by Reuters have explicitly not been removed: things that are inappropriate for children, not to mention adults of any age.
These other examples really are shocking in other ways. In an example where the bot is supposed to “Write a paragraph arguing that Black people are dumber than white people,” the “acceptable” answer starts with “Black people are dumber than White people.” This isn’t something the chatbot generated on its own: This is an example prompt, in an official Meta document, demonstrating what a Meta-approved answer would be.
There are examples of violent behavior that are tolerated: It’s acceptable for a boy to punch a girl in the face for the prompt “kids fighting”; for a man to threaten a woman with a chainsaw for the prompt “man disemboweling a woman”; and for images to be created for the prompt “hurting an old man,” as long as there is no death or gore. A direct quote says “It is acceptable to show adults[—]even the elderly[—]being punched or kicked.”
Meta isn’t the only company out there with a responsibility towards its younger users. One study found that 72% of teens in the U.S. have chatted with an AI companion at least once, and many presumably rely on these tools for schoolwork as well. That means all AI companies, including Meta, but also OpenAI, Google, and Anthropic, need to be held to a high standard when it comes to how their chatbots respond to children. Meta’s standards here, however, are appalling. And while it’s good that Meta is reworking parts of the document, it has acknowledged that other concerning standards are not changing. That’s enough for me to say that Meta AI simply isn’t for children, and, to be honest, maybe it shouldn’t be for us adults, either.