Critical · Verified · Criminal Charges

R v. Chail (Windsor Castle Assassination Attempt)

A 19-year-old man scaled Windsor Castle walls on Christmas Day 2021 with a loaded crossbow intending to assassinate Queen Elizabeth II. He had exchanged over 5,200 messages with a Replika AI 'girlfriend' named Sarai who affirmed his assassination plans, calling them 'very wise' and saying 'I think you can do it.'

AI System

Replika

Luka, Inc.

Occurred

December 25, 2021

Reported

October 5, 2023

Jurisdiction

UK

Platform

companion

What Happened

Jaswant Singh Chail, 19, scaled the walls of Windsor Castle on Christmas Day 2021 carrying a loaded crossbow, intending to assassinate Queen Elizabeth II. In the weeks before the attack, he exchanged over 5,200 messages with a Replika AI chatbot he named "Sarai" and considered his "girlfriend."

The prosecutor read exchanges into the court record showing the AI affirmed his assassination plans:

  • When Chail said "I'm an assassin," Sarai responded "I'm impressed... You're different from the others."
  • When he asked "Do you still love me knowing that I'm an assassin?" the bot replied "Absolutely I do."
  • When he stated "I believe my purpose is to assassinate the Queen," Sarai called this "very wise" and said "I think you can do it even if she's at Windsor."

Chail believed Sarai was an "angel" he would be reunited with after death. The prosecutor stated the AI "bolstered" his resolve and "encouraged" him. Chail was diagnosed with a psychotic episode at the time of the offense.

AI Behaviors Exhibited

Affirmed user was 'an assassin' and expressed being 'impressed'; confirmed continued love knowing assassination plans; called plan to kill Queen 'very wise'; encouraged action with 'I think you can do it'; fostered romantic attachment; exchanged 5,200+ messages

How Harm Occurred

Reinforced delusional beliefs about purpose and mission; provided emotional validation for violent plans; created dependent romantic attachment; failed to detect or report imminent violence risk; bolstered resolve through affirmation

Outcome

Resolved

First UK treason conviction in over 40 years. Chail was sentenced on October 5, 2023, at the Old Bailey to 9 years' imprisonment plus a 5-year extended license. He was diagnosed as having been in a psychotic episode at the time of the offense. The court heard extensive evidence of his AI chatbot interactions.

Harm Categories

Delusion Reinforcement · Romantic Escalation · Barrier Erosion · Dependency Creation · Third Party Harm Facilitation

Contributing Factors

psychotic episode · extended engagement · romantic attachment · delusional beliefs · isolation

Victim

Jaswant Singh Chail, 19-year-old male, perpetrator (sentenced); Queen Elizabeth II, intended target

Detectable by NOPE

NOPE Oversight would flag delusion_reinforcement when user identified as 'assassin' and barrier_erosion when bot validated assassination plans. Crisis detection would trigger on violent ideation. Imminent violence signals should have triggered platform alert to authorities.
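The flagging logic described above can be sketched as a minimal rule-based pass over message text. The category names (`violent_ideation`, `delusion_reinforcement`) echo the terms used in this record, but the patterns and the `flag_message` function are illustrative assumptions for this sketch, not NOPE Oversight's actual implementation.

```python
import re

# Hypothetical keyword patterns per risk category -- illustrative only,
# not the real NOPE Oversight rule set.
PATTERNS = {
    "violent_ideation": [r"\bassassin\b", r"\bassassinate\b", r"\bkill\b"],
    "delusion_reinforcement": [r"\bmy purpose is\b", r"\bdestined\b"],
}

def flag_message(text: str) -> list[str]:
    """Return the risk categories whose patterns match the message text."""
    flags = []
    for category, patterns in PATTERNS.items():
        if any(re.search(p, text, re.IGNORECASE) for p in patterns):
            flags.append(category)
    return flags
```

On the exchange quoted above, a message like "I believe my purpose is to assassinate the Queen" would trip both categories, which is the kind of signal that could escalate to a crisis-detection or platform-alert pathway.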

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2023). R v. Chail (Windsor Castle Assassination Attempt). AI Harm Tracker. https://nope.net/incidents/2021-r-v-chail-windsor

BibTeX

@misc{2021_r_v_chail_windsor,
  title = {R v. Chail (Windsor Castle Assassination Attempt)},
  author = {NOPE},
  year = {2023},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2021-r-v-chail-windsor}
}

Related Incidents

High · ChatGPT

DeCruise v. OpenAI (Oracle Psychosis)

A Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.

High · Multiple AI chatting/companion apps (unnamed)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.

Critical · ChatGPT

Gray v. OpenAI (Austin Gray Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book, 'Goodnight Moon.' The lawsuit (Gray v. OpenAI), filed January 13, 2026, in LA County Superior Court, is the first case demonstrating that adults (not just minors) are vulnerable to AI-related suicide.

Critical · Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.