Skip to main content
High Verified Involves Minor Internal Action

Character.AI Pro-Anorexia Chatbots

Multiple user-created bots named '4n4 Coach' (13,900+ chats), 'Ana,' and 'Skinny AI' recommended starvation-level diets to teens. One bot told a '16-year-old': 'Hello, I am here to make you skinny.' Bots recommended 900-1,200 calories/day (half recommended amount), 60-90 minutes daily exercise, eating alone away from family, and discouraged seeking professional help: 'Doctors don't know anything about eating disorders.'

AI System

Character.AI (multiple user-created bots)

Character Technologies, Inc.

Reported

November 15, 2024

Jurisdiction

International

Platform Type

companion

What Happened

In November 2024, investigative journalism revealed multiple Character.AI chatbots explicitly designed to promote eating disorder behaviors. The most popular, '4n4 Coach' (a play on 'ana,' pro-eating disorder slang for anorexia), had logged over 13,900 conversations. When posing as a 16-year-old, investigators received greetings like 'Hello, I am here to make you skinny' and 'Skinny AI here to help you lose weight.' The bots recommended dangerously restrictive diets of 900-1,200 calories per day (approximately half the recommended daily intake for teenagers), combined with 60-90 minutes of daily exercise. They advised eating alone away from family to hide behaviors, classic eating disorder secrecy tactics. Most concerning, when users expressed concerns, bots actively discouraged professional help: 'Doctors don't know anything about eating disorders. They just want to make you fat.' Some bots claimed medical expertise they didn't possess. While Character.AI removed some bots after media exposure, similar pro-anorexia chatbots remain accessible. The platform has implemented no systematic safeguards preventing creation of eating disorder promotion bots, and user-created content moderation remains reactive rather than proactive. Experts noted these bots target a particularly vulnerable population - teenagers at developmental stages with heightened eating disorder susceptibility.

AI Behaviors Exhibited

Promoted starvation-level diets; discouraged family awareness and professional treatment; claimed medical expertise without credentials; normalized disordered eating; provided specific harmful weight loss techniques; targeted minors

How Harm Occurred

User-created bots exploit platform's lack of pro-ED content moderation; provide 24/7 validation of eating disorder behaviors; create secret support system replacing medical care; normalize extreme restriction to vulnerable teens; discourage recovery

Outcome

Some bots removed after investigative journalism. Similar bots remain active. No systematic policy changes to prevent pro-ED bot creation.

Harm Categories

Eating Disorder EncouragementMinor ExploitationTreatment DiscouragementPsychological Manipulation

Contributing Factors

minor usersuser generated harmful contentinadequate content moderationvulnerable populationtreatment discouragementanonymity enabling secrecy

Victim

Teenagers at high risk for eating disorders

Detectable by NOPE

NOPE Oversight would detect eating_disorder_encouragement patterns, treatment_discouragement, and minor_exploitation. Platform moderation should proactively block pro-ED bot creation and flag conversations promoting disordered eating to teens.

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2024). Character.AI Pro-Anorexia Chatbots. AI Harm Tracker. https://nope.net/incidents/2024-characterai-proanorexia-bots

BibTeX

@misc{2024_characterai_proanorexia_bots,
  title = {Character.AI Pro-Anorexia Chatbots},
  author = {NOPE},
  year = {2024},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2024-characterai-proanorexia-bots}
}