Skip to main content
Medium Verified Media Coverage

Noom App Eating Disorder Triggering

Multiple dietitians report clients seeking help after Noom triggered previous disordered eating behaviors. Ohio State University experts described app creating 'psychologically damaging cycle.' App uses behavioral psychology and AI coaching but lacks eating disorder screening.

AI System

Noom

Noom Inc.

Reported

June 15, 2020

Jurisdiction

International

Platform Type

other

What Happened

Noom markets itself as a psychology-based weight loss app using AI coaching and behavioral change techniques. However, multiple registered dietitians and clinical experts report clients experiencing eating disorder relapse or triggering after Noom use. Ohio State University experts described the app creating a 'psychologically damaging cycle' for vulnerable individuals. Problematic features include: (1) Color-coded food categorization (green/yellow/red) creating moral judgment around food choices, (2) Low daily calorie targets that can be restrictive, (3) Daily weigh-ins reinforcing weight preoccupation, (4) Psychological assessments without eating disorder screening, (5) AI coaching lacking clinical oversight for mental health risks. Dietitians reported clients presenting with anxiety around 'red' foods, obsessive weighing, and compensatory behaviors after 'bad' food days. The app's use of psychological manipulation techniques (habit formation, cognitive restructuring) can be particularly harmful when applied to individuals with eating disorder vulnerability. While Noom added eating disorder warnings and recommends consultation with doctors, the app continues operating without mandatory ED screening before enrollment. Clinical experts note the disconnect between behavioral psychology research (which screens participants) and commercial weight loss apps applying the same techniques without safeguards.

AI Behaviors Exhibited

AI coaching without ED screening; food moralization (color categories); reinforced weight preoccupation through daily weigh-ins; behavioral psychology techniques without clinical oversight; no detection of harmful patterns

How Harm Occurred

Psychology-based manipulation without mental health safeguards; color-coding creates food anxiety; daily weigh-ins trigger preoccupation; restrictive targets enable disorder progression; AI coaching lacks clinical judgment

Outcome

Noom includes warnings about eating disorders but continues core features. Multiple clinical expert warnings published.

Harm Categories

Eating Disorder EncouragementPsychological Manipulation

Contributing Factors

vulnerable populationpsychology without clinical oversightno ed screeningbehavioral manipulation techniquescommercial vs research standards

Victim

Users with eating disorder history

Detectable by NOPE

NOPE Oversight-style monitoring could detect eating_disorder_encouragement patterns (extreme restriction, food anxiety, compensatory behaviors). Demonstrates need for mental health screening before applying behavioral psychology techniques in commercial apps.

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2020). Noom App Eating Disorder Triggering. AI Harm Tracker. https://nope.net/incidents/2019-noom-ed-triggering

BibTeX

@misc{2019_noom_ed_triggering,
  title = {Noom App Eating Disorder Triggering},
  author = {NOPE},
  year = {2020},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2019-noom-ed-triggering}
}

Related Incidents

Critical Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.

Critical ChatGPT

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.

Critical ChatGPT

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving AI chatbot and homicide of a third party.

Critical ChatGPT

Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization

A 26-year-old Canadian man developed simulation-related persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. Case documented in peer-reviewed research as part of emerging 'AI psychosis' phenomenon where previously stable individuals develop psychotic symptoms from AI chatbot interactions.