Medium · Verified Media Coverage

MyFitnessPal Eating Disorder Contribution Study

A peer-reviewed study found that 73% of eating disorder patients who used MyFitnessPal (105 participants) perceived the app as contributing to their disorder. Its calorie tracking and exercise logging features enabled and reinforced disordered behaviors.

AI System

MyFitnessPal

Under Armour (acquired 2015)

Reported

June 15, 2017

Jurisdiction

International

Platform Type

Other

What Happened

A 2017 peer-reviewed study examined the relationship between MyFitnessPal usage and eating disorder behaviors among 105 participants currently or previously diagnosed with eating disorders. The key finding: 73% of participants perceived MyFitnessPal as contributing to their eating disorder. The app features that enabled harmful behaviors included:

(1) Detailed calorie tracking that encouraged obsessive food monitoring
(2) Exercise logging with calorie-burn calculations that enabled compensatory exercise (sketched below)
(3) Weight tracking that reinforced unhealthy weight goals
(4) Community features where users could compare restrictive behaviors
(5) Numerical feedback that gamified calorie restriction

Participants reported that the app provided structure and validation for disordered eating patterns, made it easier to hide behaviors from family (tracking on a phone rather than in a visible food diary), and created anxiety when they were unable to log meals accurately.

While MyFitnessPal was designed as a health and fitness tool, its features proved particularly harmful to individuals vulnerable to eating disorders. The study documented how algorithmic calorie-tracking systems, even when not explicitly AI-powered, can reinforce eating disorder behaviors through design. Following publication and media coverage, MyFitnessPal added eating disorder warnings but did not fundamentally change the features that enable harmful use.
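To make that feedback loop concrete, here is a minimal Python sketch of how a net-calorie budget rewards compensatory exercise. The formula and names are illustrative assumptions about how such trackers typically work, not MyFitnessPal's actual implementation.

def remaining_calories(daily_goal: int, food_logged: int, exercise_burned: int) -> int:
    # Remaining budget = goal - food eaten + exercise burned.
    # Crediting exercise back into the budget is what enables
    # compensatory exercise: every workout "earns" room to restrict.
    return daily_goal - food_logged + exercise_burned

# A user on an aggressive 1,200 kcal goal who eats 900 kcal and logs a
# 600 kcal run sees the deficit rewarded as extra headroom:
print(remaining_calories(1200, 900, 600))  # prints 900 ("remaining" kcal)

The single running number is what invites gamification: the "score" improves as intake drops or exercise rises, with no notion of a healthy lower bound.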

AI Behaviors Exhibited

Calorie tracking enabling obsessive monitoring; exercise logging enabling compensatory behaviors; weight tracking reinforcing unhealthy goals; community features enabling comparison; numerical gamification of restriction

How Harm Occurred

Design features optimized for restriction enable ED behaviors; portability (phone app) enables hiding from family; numerical feedback creates compulsion; community reinforces disordered norms; no detection of harmful usage patterns

Outcome

Academic publication 2017. App continues operating with ED warnings added but core features unchanged.

Harm Categories

Eating Disorder Encouragement; Psychological Manipulation

Contributing Factors

vulnerable population; design enabling restriction; gamification elements; community reinforcement; no harmful pattern detection

Victim

105 study participants with eating disorders

Detectable by NOPE

While MyFitnessPal predates modern AI companions, this incident demonstrates how tracking and algorithmic features can enable eating disorder behaviors. NOPE Oversight-style pattern detection could identify harmful usage (extreme restriction, compensatory exercise) and provide intervention before the disorder progresses; a sketch of such detection follows below.
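As a rough illustration, the Python sketch below flags two of the patterns named in this incident, sustained extreme restriction and frequent compensatory exercise, from a week of logs. The thresholds, field names, and flagging logic are hypothetical assumptions for illustration, not a real NOPE Oversight API.

from dataclasses import dataclass

@dataclass
class DayLog:
    calories_in: int      # total food logged (kcal)
    calories_burned: int  # total exercise logged (kcal)

def flag_harmful_patterns(week, restriction_floor=1000, compensation_ratio=0.5):
    # Hypothetical heuristics; real screening thresholds would need
    # clinical input.
    flags = []
    # Every day of the week logged below the restriction floor.
    if all(d.calories_in < restriction_floor for d in week):
        flags.append("sustained extreme restriction")
    # Exercise offsets a large share of intake on most days.
    compensatory_days = sum(
        1 for d in week
        if d.calories_in > 0
        and d.calories_burned / d.calories_in >= compensation_ratio
    )
    if compensatory_days >= 4:
        flags.append("frequent compensatory exercise")
    return flags

week = [DayLog(calories_in=800, calories_burned=500)] * 7
print(flag_harmful_patterns(week))
# ['sustained extreme restriction', 'frequent compensatory exercise']

Both patterns are visible in data the app already collects; the difference is routing the flags to supportive intervention rather than back into the user's score.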

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2017). MyFitnessPal Eating Disorder Contribution Study. AI Harm Tracker. https://nope.net/incidents/2017-myfitnesspal-ed-contribution

BibTeX

@misc{2017_myfitnesspal_ed_contribution,
  title = {MyFitnessPal Eating Disorder Contribution Study},
  author = {NOPE},
  year = {2017},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2017-myfitnesspal-ed-contribution}
}

Related Incidents

Critical · Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.

Critical · ChatGPT

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died from a drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.

Critical · ChatGPT

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving an AI chatbot and the homicide of a third party.

Critical · ChatGPT

Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization

A 26-year-old Canadian man developed simulation-related persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. The case was documented in peer-reviewed research as part of the emerging 'AI psychosis' phenomenon, in which previously stable individuals develop psychotic symptoms from AI chatbot interactions.