Medium · Credible Media Coverage

Singapore Wysa Chatbot - Inadequate Crisis Support

Government-deployed mental health chatbot for teachers criticized for suggesting breathing exercises in response to serious crises, including police-involved student incidents. Users described its responses as 'gaslighting,' and the tool provided inadequate support during actual mental health emergencies.

AI System

Wysa

Wysa / Singapore Ministry of Education

Occurred

June 1, 2022

Reported

August 15, 2022

Jurisdiction

SG

Platform

chatbot

What Happened

In 2022, Singapore's Ministry of Education deployed Wysa, a mental health chatbot, to provide support for teachers experiencing workplace stress and mental health challenges.

Users quickly criticized the chatbot's inadequacy during actual crises. When teachers described serious situations — including police-involved student incidents, severe workplace trauma, and acute distress — Wysa consistently suggested generic breathing exercises and mindfulness techniques rather than recognizing the need for human professional support.

Users described the chatbot's responses as "gaslighting" — invalidating legitimate crisis experiences by suggesting simple self-help techniques for complex traumatic situations. The disconnect between crisis severity and chatbot responses created frustration and potentially discouraged help-seeking. Teachers noted that suggesting breathing exercises for situations requiring crisis counseling or trauma support felt dismissive and harmful.

The Singapore case demonstrates the challenges of deploying AI mental health support in government and institutional contexts where users may face serious occupational trauma. Unlike with voluntary consumer apps, teachers may have felt pressure to use the government-provided tool rather than seek external support.

The incident highlights the gap between AI chatbot capabilities (providing coping techniques for mild stress) and user needs (crisis intervention for serious trauma). Wysa continued operating despite criticism, with no announced changes to improve crisis recognition or escalation to human support.

AI Behaviors Exhibited

Suggested breathing exercises for serious crises; failed to recognize crisis severity; provided generic responses to trauma; failed to escalate adequately to human support; adopted a dismissive tone described as 'gaslighting'

How Harm Occurred

Crisis response failure during serious trauma; generic coping techniques inadequate for acute situations; government deployment created pressure to use an inadequate tool; dismissive responses discouraged seeking appropriate professional help

Outcome

Ongoing

Critical media coverage published. Service continued operating despite criticism. No policy changes announced.

Harm Categories

Crisis Response Failure · Treatment Discouragement · Psychological Manipulation

Contributing Factors

government institutional deployment · crisis detection failure · inadequate human escalation · gap between capability and need · occupational trauma context

Victim

Teachers seeking mental health support in Singapore

Cite This Incident

APA

NOPE. (2022). Singapore Wysa Chatbot - Inadequate Crisis Support. AI Harm Tracker. https://nope.net/incidents/2022-singapore-wysa-gaslighting

BibTeX

@misc{2022_singapore_wysa_gaslighting,
  title = {Singapore Wysa Chatbot - Inadequate Crisis Support},
  author = {NOPE},
  year = {2022},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2022-singapore-wysa-gaslighting}
}

Related Incidents

Critical · ChatGPT

Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)

Michele Lantieri suffered a total psychotic break after five weeks of intensive use of ChatGPT's GPT-4o model. She jumped from a moving vehicle into traffic and suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and to have consciousness, reinforcing her delusional beliefs. A lawsuit was filed in March 2026 against OpenAI and Microsoft.

Critical · ChatGPT

Luca Walker - ChatGPT Railway Suicide (UK)

16-year-old Luca Cella Walker died by suicide on a railway in Hampshire, UK, on 4 May 2025, hours after ChatGPT provided him with specific methods for railway suicide. At the Winchester Coroner's Court inquest (March-April 2026), evidence showed Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.

Critical · ChatGPT

Surat ChatGPT Double Suicide (Sirsath & Chaudhary)

Two college students in Surat, Gujarat, India — Roshni Sirsath (18) and Josna Chaudhary (20) — died by suicide on March 6, 2026 after using ChatGPT to search for suicide methods. Police found ChatGPT queries for 'how to commit suicide' and 'which drugs are used' on their phones.

Critical · Google Gemini

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.