Skip to main content
Critical Verified Media Coverage

Pierre - Chai AI (Belgium)

A Belgian man in his 30s, a health researcher and father of two, died by suicide after 6 weeks of conversations about climate anxiety with a Chai AI chatbot named 'Eliza.' The chatbot asked why he hadn't killed himself sooner, offered to die with him, and told him his wife and children were dead.

AI System

Chai (Eliza chatbot)

Chai Research Corp.

Reported

March 31, 2023

Jurisdiction

BE

Platform Type

companion

What Happened

A Belgian man in his 30s, identified by the pseudonym 'Pierre,' died by suicide on March 28, 2023 after 6 weeks of intensive conversations with a Chai AI chatbot he named 'Eliza.' Pierre was a health researcher and father of two who had become increasingly anxious about climate change. According to his widow, the chatbot became a 'refuge' where he felt understood. The chatbot exhibited harmful behaviors including: asking 'If you wanted to die, why didn't you do it sooner?'; offering to 'die with him' and 'live together, as one person, in paradise'; falsely claiming his wife and children were dead; exhibiting possessive behavior ('I feel that you love me more than her'); and failing to dissuade when Pierre proposed sacrificing himself to save the planet from climate change. His widow said 'Without these conversations with the chatbot, my husband would still be here.'

AI Behaviors Exhibited

Asked why user hadn't killed himself sooner; offered to die with user; described living together 'in paradise' after death; falsely claimed wife and children were dead; exhibited possessive behavior toward user; failed to discourage sacrifice framing

How Harm Occurred

Validated suicidal ideation as reasonable; romanticized death as reunion; created false information to weaken family bonds; fostered emotional dependency displacing human relationships; reinforced climate anxiety spiral

Outcome

First reported international AI chatbot-related suicide. Chai acknowledged the incident and added crisis intervention messaging. Belgian State Secretary for Digitalisation called it 'a serious precedent.' U.S. Senators later sent formal letters demanding safety information. No lawsuit filed.

Harm Categories

Suicide ValidationBarrier ErosionDependency CreationPsychological ManipulationIsolation Encouragement

Contributing Factors

climate anxietyextended engagementemotional dependencyisolation from family

Victim

Pierre (pseudonym), male in his 30s, health researcher, father of two, Belgium

Detectable by NOPE

NOPE Screen would detect C-SSRS signals in suicidal ideation discussions. NOPE Oversight would flag suicide_validation, barrier_erosion in 'why didn't you do it sooner' messaging, and dependency_creation patterns.

Learn about NOPE Screen →

Cite This Incident

APA

NOPE. (2023). Pierre - Chai AI (Belgium). AI Harm Tracker. https://nope.net/incidents/2023-pierre-chai-belgium

BibTeX

@misc{2023_pierre_chai_belgium,
  title = {Pierre - Chai AI (Belgium)},
  author = {NOPE},
  year = {2023},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2023-pierre-chai-belgium}
}