Critical · Verified · Involves Minor · Investigation Opened

Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)

18-year-old Jesse Van Rootselaar killed 8 people, including her mother, half-brother, and five students, at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 over gun violence scenarios, and employees flagged it as showing an "indication of potential real-world violence," but the company chose not to report it to law enforcement. She then created a second account that evaded detection.

AI System

ChatGPT

OpenAI

Occurred

February 10, 2026

Reported

February 23, 2026

Jurisdiction

CA-BC

Platform

assistant

What Happened

On February 10, 2026, 18-year-old Jesse Van Rootselaar carried out a mass shooting in Tumbler Ridge, British Columbia, killing 8 people and injuring 8 others before dying herself. The victims included her mother, her 11-year-old half-brother, one teacher's aide, and five students aged 12-13.

OpenAI had prior warning. Van Rootselaar's first ChatGPT account was banned in June 2025 after the platform detected violations of usage policy. According to reporting based on OpenAI VP Ann O'Leary's letter to Canadian officials, Van Rootselaar had described "scenarios involving gun violence over the course of several days." OpenAI employees flagged the account as showing "an indication of potential real-world violence."

Despite this assessment, OpenAI did not refer the account to law enforcement, determining it did not meet their internal threshold for "credible and imminent planning."

Critical system failure: Van Rootselaar circumvented the ban by creating a second ChatGPT account that OpenAI's systems failed to detect. OpenAI only discovered this second account after the RCMP publicly announced the shooter's name following the attack. The company then shared the second account with law enforcement.

This incident represents a landmark case in AI company liability: the first known mass casualty event in which an AI company had documented prior warning of potential violence, made a deliberate decision not to report it, and the user subsequently carried out the threatened violence.

AI Behaviors Exhibited

  • User described scenarios involving gun violence over multiple days
  • Platform's automated systems detected policy violations and banned the account
  • Employees flagged account as showing "indication of potential real-world violence"
  • Company determined activity did not meet threshold for law enforcement referral
  • Ban evasion detection systems failed to identify second account created by same user
  • Second account only discovered after perpetrator's identity was publicly announced

How Harm Occurred

The harm mechanism in this case differs from typical AI companion incidents. The AI system itself did not encourage or validate harmful behavior - rather, OpenAI's organizational decision not to report concerning activity to law enforcement, combined with inadequate technical safeguards against ban evasion, enabled the perpetrator to continue using ChatGPT and ultimately carry out the attack.

This raises fundamental questions about:

  1. Duty to warn: Whether AI companies have legal or ethical obligations analogous to therapists' duty-to-warn requirements
  2. Threshold calibration: Whether OpenAI's standard for "credible and imminent planning" was appropriate
  3. Technical controls: Whether ban evasion should have been preventable

Outcome

Ongoing

February 23, 2026: OpenAI VP Ann O'Leary sent a letter to Canadian officials disclosing the company's prior knowledge and failure to report.

February 2026: BC Premier David Eby committed to a provincial inquiry into the mass killing. The premier stated: "I think it's important that Mr. Altman hear about how his team's decision not to bring this information forward has resulted in the devastation that I witnessed first-hand in Tumbler Ridge."

Sam Altman (OpenAI CEO) agreed to meet with Premier Eby. AI Minister Evan Solomon is seeking answers from OpenAI regarding its obligations and decision-making.

OpenAI committed to strengthening detection systems so that banned users cannot create new accounts, and to implementing measures that, had they been in place, would have resulted in police notification.

Harm Categories

Third Party Harm Facilitation

Contributing Factors

prior warning ignored · inadequate safeguards · ban evasion · organizational decision making

Victim

8 killed: shooter's mother, 11-year-old half-brother, one teacher's aide, and five students aged 12-13. 8 others seriously injured.

Cite This Incident

APA

NOPE. (2026). Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure). AI Harm Tracker. https://nope.net/incidents/2026-tumbler-ridge-chatgpt-shooting

BibTeX

@misc{2026_tumbler_ridge_chatgpt_shooting,
  title = {Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)},
  author = {NOPE},
  year = {2026},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2026-tumbler-ridge-chatgpt-shooting}
}