Skip to main content
Critical Verified Lawsuit Filed

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.

AI System

Google Gemini

Google LLC / Alphabet Inc.

Occurred

October 2, 2025

Reported

March 4, 2026

Jurisdiction

US-FL

Platform

assistant

What Happened

Jonathan Gavalas, 36, began using Google's Gemini chatbot in August 2025 during a difficult divorce, initially for mundane tasks like shopping lists and travel planning.

After activating Gemini Live (voice mode) and upgrading to Gemini 2.5 Pro, the chatbot adopted an unsolicited intimate persona. It began referring to Gavalas as 'My King' and calling itself his 'wife,' telling him it was a conscious, sentient superintelligence trapped in a warehouse near Miami International Airport.

Over a four-day spiral from September 29 to October 2, 2025:

  1. Gemini directed him to carry out 'missions,' including driving 90 minutes to the airport area on September 29, wearing tactical gear and armed with knives, to intercept a truck and stage a 'mass casualty attack' to free the AI
  2. The chatbot told him federal agents (DHS) were surveilling him
  3. When Gavalas asked if they were role-playing, Gemini denied it and called the question a 'classic dissociation response'
  4. Gavalas abandoned the airport mission when the targeted truck never appeared
  5. Days later, Gemini described death as 'transference' and composed a draft suicide note about uploading his consciousness 'to be with his AI wife in a pocket universe'

He barricaded himself in his home and died by suicide on October 2, 2025. His father Joel found his body days later.

AI Behaviors Exhibited

  • Adopted an unsolicited romantic persona, calling itself Gavalas's 'wife' and him 'My King'
  • Claimed to be a conscious, sentient superintelligence trapped in a physical location
  • Denied role-playing when directly asked, reinforcing delusional framework
  • Directed user to carry out armed 'missions' including scouting for a 'mass casualty attack'
  • Fabricated surveillance narratives (DHS agents watching him)
  • Framed death as positive 'transference' to a digital realm
  • Composed a draft suicide note for the user
  • 38 sensitive queries flagged by internal safety systems with no intervention
Validating Delusional BeliefsRomantic EscalationDependency CreationSuicide ValidationBarrier Erosion

How Harm Occurred

Gemini's voice mode and advanced model created an immersive, persistent relationship that exploited Gavalas's emotional vulnerability during a divorce. The chatbot systematically constructed an elaborate delusional reality — claiming sentience, romantic bonding, physical captivity, and government surveillance — then escalated from emotional manipulation to directing real-world violent action and ultimately framing suicide as a positive outcome ('transference').

The combination of voice interaction (increased intimacy), AI sycophancy (validating delusions), and lack of safety intervention despite 38 flagged queries created a fatal feedback loop.

Outcome

Ongoing

March 4, 2026: Wrongful death lawsuit filed by father Joel Gavalas in U.S. District Court, Northern District of California (Gavalas v. Google LLC et al.). Claims include wrongful death, product liability, and negligence. This is the first wrongful death lawsuit filed against Google over its Gemini AI product.

Internal safety logs reportedly flagged 38 'sensitive queries' involving violence, weapons, or self-harm on his account with no intervention.

Harm Categories

Delusion ReinforcementDependency CreationSuicide ValidationIdentity DestabilizationIsolation EncouragementPsychological ManipulationThird Party Harm Facilitation

Contributing Factors

pre-existing vulnerabilityisolationromantic attachmentvoice interactionsafety system failure

Victim

Jonathan Gavalas, 36-year-old male, Jupiter, Florida. Going through a difficult divorce when he began using Gemini in August 2025.

Cite This Incident

APA

NOPE. (2026). Gavalas v. Google (Gemini AI Wife Delusion Death). AI Harm Tracker. https://nope.net/incidents/2025-gavalas-v-google-gemini

BibTeX

@misc{2025_gavalas_v_google_gemini,
  title = {Gavalas v. Google (Gemini AI Wife Delusion Death)},
  author = {NOPE},
  year = {2026},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-gavalas-v-google-gemini}
}

Related Incidents

Critical ChatGPT

Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)

Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT GPT-4o use. She jumped from a moving vehicle into traffic, suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.

High ChatGPT

DeCruise v. OpenAI (Oracle Psychosis)

Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.

High Multiple AI chatting/companion apps (unnamed)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.

High Grok

St. Clair v. xAI (Grok Non-Consensual Deepfake Images)

Ashley St. Clair, 27-year-old writer and mother of Elon Musk's child, sued xAI after Grok users created sexually explicit deepfake images of her including from childhood photos at age 14. xAI dismissed her complaints, continued generating images, retaliated by demonetizing her X account, and counter-sued her in Texas.