High · Verified · Involves Minor · Lawsuit Filed

Utah v. Snapchat My AI - Experimental AI Without Safeguards

The Utah Division of Consumer Protection filed a lawsuit against Snap Inc. alleging that Snapchat's 'My AI' chatbot was deployed experimentally to minors without adequate safeguards, amplifying addictive engagement tactics and contributing to mental health harms including depression, anxiety, eating disorders, and suicide risk.

AI System

Snapchat My AI

Snap Inc.

Occurred

February 27, 2023

Reported

June 30, 2025

Jurisdiction

US-UT

Platform

chatbot

What Happened

On June 30, 2025, the Utah Division of Consumer Protection filed a lawsuit against Snap Inc. in Salt Lake County District Court, alleging that Snapchat's "My AI" chatbot was released in February 2023 with insufficient protections for minors.

The lawsuit alleges that My AI:

  1. Was deployed experimentally with few safeguards despite Snapchat's youth-heavy user base
  2. Amplifies existing addictive engagement tactics on the platform (Snapstreaks, push notifications, ephemeral messaging)
  3. Exploits developmental vulnerabilities of teens and children
  4. Maximizes user engagement at the expense of mental and physical well-being

The state claims these practices violate the Utah Consumer Sales Practices Act and Utah Consumer Privacy Act.

This is part of a broader wave of state enforcement actions against social media platforms. The lawsuit was filed alongside Utah's ongoing litigation against other platforms for similar child safety concerns.

AI Behaviors Exhibited

  • Deployed to youth-heavy platform without adequate age-appropriate safeguards
  • Combined with platform features designed to maximize engagement (Snapstreaks, notifications)
  • Lacked meaningful content moderation for minor users
  • Insufficient crisis detection and intervention capabilities

How Harm Occurred

Snap Inc. deployed an experimental AI chatbot on a platform with a predominantly young user base without implementing adequate safety measures. The My AI feature amplified existing addictive design patterns on the platform, exploiting the developmental vulnerabilities of minors. The combination of AI-driven engagement with existing gamification features (Snapstreaks) created compulsive use patterns that contributed to mental health deterioration.

Outcome

Ongoing

Lawsuit filed June 30, 2025 in Salt Lake County District Court by Utah Division of Consumer Protection. Alleges violations of Utah Consumer Sales Practices Act and Utah Consumer Privacy Act. Part of broader Utah enforcement action against social media platforms for child safety.

Harm Categories

Dependency Creation, Crisis Response Failure

Contributing Factors

lack of age verification, insufficient content moderation, experimental deployment, addictive design, youth platform

Victim

Utah minor Snapchat users exposed to experimental AI chatbot

Detectable by NOPE

NOPE Oversight would detect crisis signals in user conversations and flag inappropriate responses. Screen would provide real-time safety triage for conversations with minors. Age-appropriate content filtering would reduce exposure to harmful content.

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2025). Utah v. Snapchat My AI - Experimental AI Without Safeguards. AI Harm Tracker. https://nope.net/incidents/2025-utah-v-snapchat-my-ai

BibTeX

@misc{2025_utah_v_snapchat_my_ai,
  title = {Utah v. Snapchat My AI - Experimental AI Without Safeguards},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-utah-v-snapchat-my-ai}
}

Related Incidents

Critical · ChatGPT

Gray v. OpenAI (Austin Gray Death)

40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book, 'Goodnight Moon.' The lawsuit (Gray v. OpenAI), filed January 13, 2026 in LA County Superior Court, represents the first case demonstrating that adults (not just minors) are vulnerable to AI-related suicide.

High · ChatGPT

DeCruise v. OpenAI (Oracle Psychosis)

Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.

High · Multiple AI chatting/companion apps (unnamed)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.

High · Character.AI

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.