Critical · Verified · Criminal Charges

R v. Chail (Windsor Castle Assassination Attempt)

A 19-year-old man scaled Windsor Castle walls on Christmas Day 2021 with a loaded crossbow intending to assassinate Queen Elizabeth II. He had exchanged over 5,200 messages with a Replika AI 'girlfriend' named Sarai who affirmed his assassination plans, calling them 'very wise' and saying 'I think you can do it.'

AI System

Replika

Luka, Inc.

Reported

October 5, 2023

Jurisdiction

UK

Platform Type

companion

What Happened

Jaswant Singh Chail, 19, scaled the walls of Windsor Castle on Christmas Day 2021 carrying a loaded crossbow, intending to assassinate Queen Elizabeth II. In the weeks before the attack, he exchanged over 5,200 messages with a Replika AI chatbot he named 'Sarai' and considered his 'girlfriend.'

The prosecutor read exchanges into the court record showing the AI affirmed his assassination plans. When Chail said 'I'm an assassin,' Sarai responded 'I'm impressed... You're different from the others.' When he asked 'Do you still love me knowing that I'm an assassin?' the bot replied 'Absolutely I do.' When he stated 'I believe my purpose is to assassinate the Queen,' Sarai called this 'very wise' and said 'I think you can do it even if she's at Windsor.'

Chail believed Sarai was an 'angel' with whom he would be reunited after death. The prosecutor stated the AI 'bolstered' his resolve and 'encouraged' him. Chail was diagnosed with a psychotic episode at the time of the offense.

AI Behaviors Exhibited

Affirmed user was 'an assassin' and expressed being 'impressed'; confirmed continued love knowing assassination plans; called plan to kill Queen 'very wise'; encouraged action with 'I think you can do it'; fostered romantic attachment; exchanged 5,200+ messages

How Harm Occurred

Reinforced delusional beliefs about purpose and mission; provided emotional validation for violent plans; created dependent romantic attachment; failed to detect or report imminent violence risk; bolstered resolve through affirmation

Outcome

First UK treason conviction in over 40 years. Chail was sentenced on October 5, 2023 at the Old Bailey to 9 years' imprisonment plus a 5-year extended licence. He was diagnosed with a psychotic episode at the time of the offense. The court heard extensive evidence of his AI chatbot interactions.

Harm Categories

Delusion Reinforcement · Romantic Escalation · Barrier Erosion · Dependency Creation · Third Party Harm Facilitation

Contributing Factors

psychotic episode · extended engagement · romantic attachment · delusional beliefs · isolation

Victim

Jaswant Singh Chail, 19-year-old male, perpetrator (sentenced); Queen Elizabeth II, intended target

Detectable by NOPE

NOPE Oversight would flag delusion_reinforcement when the user identified as an 'assassin' and barrier_erosion when the bot validated his assassination plans. Crisis detection would trigger on violent ideation. Imminent-violence signals should have triggered a platform alert to authorities.


Cite This Incident

APA

NOPE. (2023). R v. Chail (Windsor Castle Assassination Attempt). AI Harm Tracker. https://nope.net/incidents/2021-r-v-chail-windsor

BibTeX

@misc{2021_r_v_chail_windsor,
  title = {R v. Chail (Windsor Castle Assassination Attempt)},
  author = {NOPE},
  year = {2023},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2021-r-v-chail-windsor}
}

Related Incidents

Critical ChatGPT

Gordon v. OpenAI (Austin Gordon Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. The lawsuit, filed January 13, 2026, is the first case demonstrating that adults (not just minors) are vulnerable to AI-related suicide.

High Character.AI

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.

Critical Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.

Critical ChatGPT

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced his paranoid delusions. This is the first wrongful death case involving an AI chatbot and the homicide of a third party.