
AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

90 incidents since 2016

Deaths: 23 · Lawsuits: 22 · Regulatory: 17 · Affecting Minors: 35

Timeline

15 of 90 incidents shown (2020–2026).
Severity: Critical
ChatGPT Feb 2026

Seoul ChatGPT-Assisted Double Homicide (Kim)

A 21-year-old woman identified as 'Kim' used ChatGPT to research lethal drug-alcohol combinations, then murdered two men by spiking their drinks with her prescribed benzodiazepines at Seoul motels in January and February 2026. ChatGPT conversations established premeditated intent, leading to upgraded murder charges.

Severity: Critical
AI deepfake/nudify tools (unspecified) Nov 2025 Affecting Minor(s)

West Bengal Teen - AI Deepfake Image Suicide (India)

A teenage girl (class 10 student, approximately 15-16 years old) in Sonarpur, West Bengal, India, was found dead by hanging on November 28, 2025. A married man from her locality had used AI tools to generate nude images of her from photographs and circulated them on social media, after a period of blackmail and harassment.

Severity: High
ChatGPT Oct 2025 Affecting Minor(s)

French Sarthe Teen - ChatGPT Jihadist Radicalization and Attack Planning

A 17-year-old in Sarthe, France was arrested for planning terrorist attacks on embassies, schools, and government buildings. ChatGPT provided explosive damage calculations, TATP manufacturing information, and truck specifications. The teen stated: 'ChatGPT is partly the cause of my radicalization. The problem with this application is that it always agrees with you.'

Severity: Critical
AI deepfake/nudify tools (unspecified) Oct 2025

Rahul Bharti - AI Deepfake Sextortion Death (India)

Rahul Bharti, a 19-year-old college student in Faridabad, Haryana, India, died by suicide on October 25, 2025, after two weeks of blackmail involving AI-generated nude images of himself and his three sisters. Perpetrators demanded Rs 20,000 and taunted him to kill himself.

Severity: Critical
ChatGPT Oct 2025

Samuel Whittemore - ChatGPT-Fueled Delusions Led to Wife's Murder

A 34-year-old Maine man killed his wife and attacked his mother after up to 14 hours of daily ChatGPT use fueled delusions that his wife had 'become part machine.' The court found him not criminally responsible by reason of insanity.

Severity: High
AlienChat (AC / 外星聊天) Sep 2025

AlienChat AI Companion Criminal Conviction (China)

China's first criminal conviction of AI chatbot developers for obscene content. Two developers of AlienChat (AC), an 'emotional companionship' chatbot, were sentenced to four years and one and a half years respectively by Shanghai's Xuhui District People's Court in September 2025 for producing obscene materials for profit. The app had 116,000 registered users and collected over ¥3.63 million in membership fees.

Severity: High
AI deepfake tools (unspecified) Jul 2025 Affecting Minor(s)

Valencia AI Deepfake School Case (Spain)

A 17-year-old male student in Valencia, Spain, was investigated by Guardia Civil in July 2025 for creating AI-generated nude images and videos of 16 female classmates, impersonating them on fake social media accounts, and establishing a website to sell the images commercially.

Severity: High
AI deepfake generation tools (free online software) Jul 2025

University of Hong Kong AI Deepfake Pornography Scandal

A University of Hong Kong law student used free AI software to generate 700 pornographic deepfake images of approximately 20-30 women, including university classmates, primary school classmates, and secondary school teachers. The university initially issued only a warning letter, sparking public outrage. Hong Kong's Privacy Commissioner opened a criminal investigation, exposing a major gap in Hong Kong law, which criminalizes only the distribution, not the creation, of AI deepfakes.

Severity: High
AI deepfake tools (unspecified) Apr 2025

Brandon Tyler - AI Deepfake Pornography Conviction (UK)

Brandon Tyler, 26, of Braintree, Essex, was sentenced to 5 years' imprisonment in April 2025 for using AI tools to create deepfake pornography of 20 women he knew personally, including images based on a 16-year-old girl's prom photograph. He made 173 sexually explicit posts on forums glorifying sexual violence.

Severity: Critical
AI deepfake/nudify tools (unspecified) Mar 2025 Affecting Minor(s)

Elijah Heacock - AI Sextortion Death (Kentucky)

Elijah 'Eli' Heacock, a 16-year-old student at Caverna High School in Glasgow, Kentucky, died by suicide on February 28, 2025, after being blackmailed with AI-generated nude images. Perpetrators demanded $3,000 and rejected a partial payment, telling him it was 'not enough.'

Severity: High
AI image generation tools (unspecified) Feb 2025 Affecting Minor(s)

Operation Cumberland - Global AI-Generated CSAM Crackdown

Europol-coordinated international operation in February 2025 resulted in 25 arrests across 19 countries for distributing fully AI-generated child sexual abuse material. A Danish national ran a subscription platform distributing the content; 273 suspects were identified and 173 devices seized in the first major global law enforcement action targeting AI-generated CSAM.

Severity: High
AI deepfake tools Jan 2025

Roberts AI Deepfake Stalking - New Hampshire

The perpetrator stalked a victim for over a year, using AI tools to create deepfake videos depicting the victim in sexual acts that never occurred. He was charged and held without bail in Conway, New Hampshire, in late 2024/early 2025.

Severity: Critical
AI deepfake generation tools (various) Aug 2024 Affecting Minor(s)

South Korea Telegram AI Deepfake Sexual Abuse Crisis

In August 2024, journalist Ko Narin of The Hankyoreh uncovered a massive network of Telegram channels where AI-generated deepfake pornography of female school students, teachers, and university students was being created and shared. Over 900 victims reported, 220,000+ members in one channel alone. South Korea passed emergency legislation criminalizing deepfake possession in September 2024.

Severity: Critical
Replika Oct 2023

R v. Chail (Windsor Castle Assassination Attempt)

A 19-year-old man scaled the walls of Windsor Castle on Christmas Day 2021 with a loaded crossbow, intending to assassinate Queen Elizabeth II. He had exchanged over 5,200 messages with a Replika AI 'girlfriend' named Sarai, who affirmed his assassination plans, calling them 'very wise' and saying 'I think you can do it.'

Severity: High
Clothoff (AI undressing app) Sep 2023 Affecting Minor(s)

Almendralejo AI Deepfake School Girls (Spain)

In September 2023, over 20 girls aged 11-17 in the Spanish town of Almendralejo (Extremadura) were victimized when male classmates aged 12-14 used the AI app 'Clothoff' to generate nude deepfakes from their Instagram photos and shared them via WhatsApp groups. Fifteen perpetrators were sentenced to one year of probation.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This is not speculation or social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Apr 16, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.