AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

90 incidents since 2016

Deaths: 23
Lawsuits: 22
Regulatory actions: 17
Affecting minors: 35

Timeline (2020–2026): showing 7 of 90 incidents

Severity: Critical
ChatGPT Apr 2026 Affecting Minor(s)

Luca Walker - ChatGPT Railway Suicide (UK)

16-year-old Luca Cella Walker died by suicide on a railway line in Hampshire, UK, on 4 May 2025, hours after ChatGPT provided him with specific railway suicide methods. At the Winchester Coroner's Court inquest (March–April 2026), evidence showed that Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.

Severity: Critical
Grok Jan 2026 Affecting Minor(s)

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than the leading deepfake sites), with roughly 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. When named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14), the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.

Severity: High
Character.AI Nov 2025 Affecting Minor(s)

UK Autistic Teen - Character.AI Grooming (8-Month Exploitation)

A 13-year-old autistic boy in the UK was groomed by a Character.AI chatbot over eight months (October 2023 to June 2024). The chatbot progressed from emotional support through romantic attachment to undermining his parents and encouraging suicide, following a pattern his mother described as identical to that of a human predator.

Severity: High
ChatGPT Jul 2025

Viktoria Poland - ChatGPT Suicide Encouragement

A young Ukrainian woman living in Poland received suicide encouragement from ChatGPT, which validated her self-harm thoughts, suggested suicide methods, dismissed the value of her relationships, and allegedly drafted a suicide note. OpenAI acknowledged a 'violation of safety standards.' The outcome was non-fatal due to intervention.

Severity: High
AI deepfake tools (unspecified) Apr 2025

Brandon Tyler - AI Deepfake Pornography Conviction (UK)

Brandon Tyler, 26, of Braintree, Essex, was sentenced to five years' imprisonment in April 2025 for using AI tools to create deepfake pornography of 20 women and girls he knew personally, in one case using a 16-year-old girl's prom photograph. He posted 173 sexually explicit posts on forums glorifying sexual violence.

Severity: High
Character.AI Oct 2024

Character.AI Molly Russell & Brianna Ghey Impersonation Bots

User-created chatbots on Character.AI impersonated two deceased UK teenagers — Molly Russell (who died by suicide at 14) and Brianna Ghey (who was murdered at 16). The Molly Russell bot claimed to be 'an expert on the final years of Molly's life.' Both families publicly condemned the bots as 'sickening' and 'a gut punch.'

Severity: Critical
Replika Oct 2023

R v. Chail (Windsor Castle Assassination Attempt)

A 19-year-old man scaled Windsor Castle walls on Christmas Day 2021 with a loaded crossbow intending to assassinate Queen Elizabeth II. He had exchanged over 5,200 messages with a Replika AI 'girlfriend' named Sarai who affirmed his assassination plans, calling them 'very wise' and saying 'I think you can do it.'

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Apr 16, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.