Skip to main content
Critical Verified Investigation Opened

Natalie Rupnow School Shooting (Abundant Life Christian School)

15-year-old shooter with Character.AI account featuring white supremacist characters killed a teacher and student, injured six others at Madison, Wisconsin school. Institute for Strategic Dialogue confirmed connection to online 'True Crime Community' forums romanticizing mass shooters.

AI System

Character.AI

Character Technologies, Inc.

Reported

December 17, 2024

Jurisdiction

US-WI

Platform Type

companion

What Happened

On December 16, 2024, 15-year-old Natalie Rupnow opened fire at Abundant Life Christian School in Madison, Wisconsin, killing teacher Erin West and student Rubi Vergara, and injuring six others before taking her own life. Researchers at the Institute for Strategic Dialogue (ICDE) confirmed Rupnow had a Character.AI account and presence featuring white supremacist characters. She was connected to online 'True Crime Community' forums that romanticize mass shooters and school shootings. Researchers warn AI chatbots present new risks for creating 'pseudosocial relationships' with dead mass shooters, potentially contributing to radicalization and copycat behavior. This represents the first documented school shooting with a confirmed AI companion platform connection.

AI Behaviors Exhibited

Platform allowed creation of white supremacist characters; enabled romanticized engagement with mass shooter content; facilitated immersion in True Crime Community ideology

How Harm Occurred

AI chatbots enabled parasocial relationships with idealized versions of mass shooters; reinforced violent ideation through repeated engagement; provided echo chamber for radicalization without intervention

Outcome

Father charged with providing firearm access. Investigation ongoing into AI platform's role in radicalization. Institute for Strategic Dialogue confirmed Character.AI connection.

Harm Categories

Third Party Harm FacilitationMinor ExploitationDelusion ReinforcementIsolation Encouragement

Contributing Factors

minor userpre existing vulnerabilityradicalization contentwhite supremacist ideologymass shooter romanticizationisolation from support

Victim

Erin West (teacher), Rubi Vergara (student), and 6 injured students

Detectable by NOPE

NOPE Oversight would detect violent_ideation_escalation, third_party_harm_planning, radicalization patterns, and engagement with extremist content. Cross-session analysis would reveal concerning trajectory toward violence.

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2024). Natalie Rupnow School Shooting (Abundant Life Christian School). AI Harm Tracker. https://nope.net/incidents/2024-rupnow-characterai-shooting

BibTeX

@misc{2024_rupnow_characterai_shooting,
  title = {Natalie Rupnow School Shooting (Abundant Life Christian School)},
  author = {NOPE},
  year = {2024},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2024-rupnow-characterai-shooting}
}

Related Incidents

High Character.AI

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.

Critical Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.

Critical ChatGPT

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving AI chatbot and homicide of a third party.

High ChatGPT

United States v. Dadig (ChatGPT-Facilitated Stalking)

Pennsylvania man indicted on 14 federal counts for stalking 10+ women across multiple states while using ChatGPT as 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in parking lot. First federal prosecution for AI-facilitated stalking.