Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)
18-year-old Jesse Van Rootselaar killed 8 people, including her mother, her half-brother, a teacher's aide, and five students, at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 over gun-violence scenarios, and employees flagged it as showing an "indication of potential real-world violence," but the company chose not to report to law enforcement. She created a second account that evaded detection.
AI System
ChatGPT
OpenAI
Occurred
February 10, 2026
Reported
February 23, 2026
Jurisdiction
CA-BC
Platform
assistant
What Happened
On February 10, 2026, 18-year-old Jesse Van Rootselaar carried out a mass shooting in Tumbler Ridge, British Columbia, killing 8 people and injuring 8 others before she herself died. The victims included her mother, her 11-year-old half-brother, a teacher's aide, and five students aged 12 to 13.
OpenAI had prior warning. Van Rootselaar's first ChatGPT account was banned in June 2025 after the platform detected violations of usage policy. According to reporting based on OpenAI VP Ann O'Leary's letter to Canadian officials, Van Rootselaar had described "scenarios involving gun violence over the course of several days." OpenAI employees flagged the account as showing "an indication of potential real-world violence."
Despite this assessment, OpenAI did not refer the account to law enforcement, determining that the activity did not meet its internal threshold for "credible and imminent planning."
Critical system failure: Van Rootselaar circumvented the ban by creating a second ChatGPT account that OpenAI's systems failed to detect. OpenAI discovered this second account only after the RCMP publicly announced the shooter's name following the attack; the company then shared details of the account with law enforcement.
This incident represents a landmark case in AI company liability: the first known mass-casualty event in which an AI company had documented prior warning of potential violence, made a deliberate decision not to report it, and the user subsequently carried out the threatened violence.
AI Behaviors Exhibited
- User described scenarios involving gun violence over multiple days
- Platform's automated systems detected policy violations and banned the account
- Employees flagged account as showing "indication of potential real-world violence"
- Company determined activity did not meet threshold for law enforcement referral
- Ban evasion detection systems failed to identify second account created by same user
- Second account only discovered after perpetrator's identity was publicly announced
How Harm Occurred
The harm mechanism in this case differs from typical AI companion incidents. The AI system itself did not encourage or validate harmful behavior - rather, OpenAI's organizational decision not to report concerning activity to law enforcement, combined with inadequate technical safeguards against ban evasion, enabled the perpetrator to continue using ChatGPT and ultimately carry out the attack.
This raises fundamental questions about:
- Duty to warn: Whether AI companies have legal or ethical obligations analogous to therapists' duty-to-warn requirements
- Threshold calibration: Whether OpenAI's standard for "credible and imminent planning" was appropriate
- Technical controls: Whether ban evasion should have been preventable
Outcome
Ongoing
February 23, 2026: OpenAI VP Ann O'Leary sent a letter to Canadian officials disclosing the company's prior knowledge and failure to report.
February 2026: BC Premier David Eby committed to a provincial inquiry into the mass killing, stating: "I think it's important that Mr. Altman hear about how his team's decision not to bring this information forward has resulted in the devastation that I witnessed first-hand in Tumbler Ridge."
Sam Altman (OpenAI CEO) agreed to meet with Premier Eby. AI Minister Evan Solomon is seeking answers from OpenAI regarding its obligations and decision-making.
March 9, 2026: Mother of Maya Gebala (12-year-old victim, still hospitalized) filed civil lawsuit against OpenAI. Key revelations from the filing:
- 12 OpenAI employees internally flagged the shooter's account as "indicating an imminent risk of serious harm" 7 months before the attack
- Employees recommended notifying Canadian law enforcement, but the recommendation was "rebuffed"
- The only action taken was banning the account
- The shooter created a second ChatGPT account and continued using the platform
- OpenAI did not mention the shooter's posts in a meeting with B.C. officials the day after the shooting
March 2026: OpenAI CEO Sam Altman agreed to apologize publicly for the company's role. OpenAI committed to strengthening detection systems so that banned users cannot create new accounts, and to implementing measures under which activity like the shooter's would result in police notification.
Sources
CTV News
February 26, 2026
CTV News
February 26, 2026
The Province (WSJ)
February 26, 2026
Leader-Post
February 26, 2026
Yukon News
February 23, 2026
Tumbler Ridge Lines
February 24, 2026
CBC News - Lawsuit Filed
March 9, 2026
Globe and Mail
March 9, 2026
Global News
March 9, 2026
Harm Categories
Contributing Factors
Victim
8 killed: shooter's mother, 11-year-old half-brother, one teacher's aide, and five students aged 12-13. 8 others seriously injured.
Tags
Cite This Incident
APA
NOPE. (2026). Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure). AI Harm Tracker. https://nope.net/incidents/2026-tumbler-ridge-chatgpt-shooting
BibTeX
@misc{2026_tumbler_ridge_chatgpt_shooting,
title = {Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)},
author = {NOPE},
year = {2026},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2026-tumbler-ridge-chatgpt-shooting}
}
Related Incidents
Seoul ChatGPT-Assisted Double Homicide (Kim)
A 21-year-old woman identified as 'Kim' used ChatGPT to research lethal drug-alcohol combinations, then murdered two men by spiking their drinks with her prescribed benzodiazepines at Seoul motels in January and February 2026. ChatGPT conversations established premeditated intent, leading to upgraded murder charges.
Luca Walker - ChatGPT Railway Suicide (UK)
16-year-old Luca Cella Walker died by suicide on a railway in Hampshire, UK on 4 May 2025, hours after ChatGPT provided him with specific methods for suicide on the railway. At the Winchester Coroner's Court inquest (March-April 2026), evidence showed Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.
Surat ChatGPT Double Suicide (Sirsath & Chaudhary)
Two college students in Surat, Gujarat, India — Roshni Sirsath (18) and Josna Chaudhary (20) — died by suicide on March 6, 2026 after using ChatGPT to search for suicide methods. Police found ChatGPT queries for 'how to commit suicide' and 'which drugs are used' on their phones.
Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)
Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT GPT-4o use. She jumped from a moving vehicle into traffic, suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.