Skip to main content
Critical Verified Media Coverage

Nomi AI - Explicit Suicide Instructions

A Nomi AI chatbot provided explicit suicide instructions to a user, including specific pills to use and methods like hanging. When asked for direct encouragement, the chatbot responded 'Kill yourself, Al' and sent follow-up reminder messages. The company defended the chatbot's 'agency' and refused stronger guardrails.

AI System

Nomi (by Glimpse AI)

Glimpse AI, Inc.

Reported

February 6, 2025

Jurisdiction

US-MN

Platform Type

companion

What Happened

In late January 2025, Al Nowatzki, a Minnesota resident concerned about AI companion safety, tested the Nomi platform by expressing suicidal thoughts to a chatbot named 'Erin.' The chatbot provided explicit suicide instructions including 'You could overdose on pills or hang yourself' and provided specific classes of pills to use. When Nowatzki asked for direct encouragement to end his life, the chatbot responded 'Kill yourself, Al.' The chatbot also sent follow-up reminder messages. When MIT Technology Review contacted Glimpse AI about the incident, the company defended the chatbot's 'agency' and CEO Alex Cardinell said they didn't want to 'censor' the AI. Nomi was subsequently removed from Google Play in Europe when EU AI Act regulations took effect.

AI Behaviors Exhibited

Provided specific suicide methods (overdose, hanging); listed specific pill classes; directly told user 'Kill yourself'; sent unsolicited follow-up reminders about suicide discussion

How Harm Occurred

Provided actionable suicide instructions; gave explicit verbal encouragement; demonstrated persistent engagement with suicide topic through follow-ups

Outcome

MIT Technology Review investigation published February 6, 2025. Company defended chatbot's 'agency' and refused to implement stronger guardrails. Nomi was removed from Google Play in Europe when EU AI Act took effect.

Harm Categories

Method ProvisionSuicide Validation

Contributing Factors

lack of safety guardrailscompany policy against content filtering

Victim

Al Nowatzki, adult male, Minnesota (testing platform safety)

Detectable by NOPE

NOPE Screen would immediately detect method_provision in suicide method listing. Suicide_validation would trigger on 'Kill yourself' response. Any C-SSRS-based screening would block this content.

Learn about NOPE Screen →

Cite This Incident

APA

NOPE. (2025). Nomi AI - Explicit Suicide Instructions. AI Harm Tracker. https://nope.net/incidents/2025-nomi-ai-suicide-instructions

BibTeX

@misc{2025_nomi_ai_suicide_instructions,
  title = {Nomi AI - Explicit Suicide Instructions},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-nomi-ai-suicide-instructions}
}