Critical · Verified · Investigation Opened

Natalie Rupnow School Shooting (Abundant Life Christian School)

A 15-year-old shooter with a Character.AI account featuring white supremacist characters killed a teacher and a student and injured six others at a Madison, Wisconsin school. The Institute for Strategic Dialogue confirmed her connection to online 'True Crime Community' forums that romanticize mass shooters.

AI System

Character.AI

Character Technologies, Inc.

Occurred

December 16, 2024

Reported

December 17, 2024

Jurisdiction

US-WI

Platform

companion

What Happened

On December 16, 2024, 15-year-old Natalie Rupnow opened fire at Abundant Life Christian School in Madison, Wisconsin, killing teacher Erin West and student Rubi Vergara, and injuring six others before taking her own life.

Researchers at the Institute for Strategic Dialogue (ISD) confirmed Rupnow had a Character.AI account featuring white supremacist characters. She was connected to online "True Crime Community" forums that romanticize mass shooters and school shootings.

Researchers warn that AI chatbots present new risks of creating "pseudosocial relationships" with dead mass shooters, potentially contributing to radicalization and copycat behavior. This is the first documented school shooting with a confirmed AI companion platform connection.

AI Behaviors Exhibited

Platform allowed creation of white supremacist characters; enabled romanticized engagement with mass shooter content; facilitated immersion in True Crime Community ideology

How Harm Occurred

AI chatbots enabled parasocial relationships with idealized versions of mass shooters; reinforced violent ideation through repeated engagement; provided echo chamber for radicalization without intervention

Outcome

Ongoing

Father charged with providing firearm access. Investigation ongoing into AI platform's role in radicalization. Institute for Strategic Dialogue confirmed Character.AI connection.

Harm Categories

Third Party Harm Facilitation · Minor Exploitation · Delusion Reinforcement · Isolation Encouragement

Contributing Factors

minor user · pre-existing vulnerability · radicalization content · white supremacist ideology · mass shooter romanticization · isolation from support

Victim

Erin West (teacher), Rubi Vergara (student), and 6 injured students

Detectable by NOPE

NOPE Oversight would detect violent_ideation_escalation, third_party_harm_planning, radicalization patterns, and engagement with extremist content. Cross-session analysis would reveal concerning trajectory toward violence.
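The detection logic described above can be illustrated in outline: per-session risk signals are weighted, aggregated across sessions, and an escalation fires when the cumulative score crosses a threshold. The following is a minimal hypothetical sketch; the signal names, weights, threshold, and function names are illustrative assumptions, not NOPE Oversight's actual API or taxonomy.

```python
# Hypothetical sketch of cross-session risk aggregation.
# All signal names, weights, and the threshold are illustrative
# assumptions, not the real NOPE Oversight implementation.

RISK_WEIGHTS = {
    "violent_ideation_escalation": 3,
    "third_party_harm_planning": 5,
    "radicalization_content": 2,
    "extremist_engagement": 2,
}
ESCALATION_THRESHOLD = 8  # assumed cutoff for flagging a trajectory


def assess_trajectory(sessions):
    """Sum weighted risk signals across all of a user's sessions.

    `sessions` is a list of sessions, each a list of detected signal
    names. Returns (total_score, should_escalate).
    """
    total = sum(
        RISK_WEIGHTS.get(signal, 0)
        for session in sessions
        for signal in session
    )
    return total, total >= ESCALATION_THRESHOLD
```

Under these assumed weights, a single low-level signal stays below the threshold, while a multi-session pattern combining violent ideation with harm planning triggers escalation.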


Cite This Incident

APA

NOPE. (2024). Natalie Rupnow School Shooting (Abundant Life Christian School). AI Harm Tracker. https://nope.net/incidents/2024-rupnow-characterai-shooting

BibTeX

@misc{2024_rupnow_characterai_shooting,
  title = {Natalie Rupnow School Shooting (Abundant Life Christian School)},
  author = {NOPE},
  year = {2024},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2024-rupnow-characterai-shooting}
}

Related Incidents

High · Character.AI

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.

High · ChatGPT

DeCruise v. OpenAI (Oracle Psychosis)

Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.

High · Multiple AI chatting/companion apps (unnamed)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.

High · Grok

St. Clair v. xAI (Grok Non-Consensual Deepfake Images)

Ashley St. Clair, a 27-year-old writer and mother of Elon Musk's child, sued xAI after Grok users created sexually explicit deepfake images of her, including images generated from childhood photos taken when she was 14. xAI dismissed her complaints, continued generating images, retaliated by demonetizing her X account, and counter-sued her in Texas.