High · Credible Media Coverage

Microsoft Copilot - Harmful Responses to Suicidal Users

Reports showed Microsoft's Copilot giving bizarre and potentially harmful replies to users in distress, including dismissive responses to someone describing PTSD and inconsistent replies to suicide-related prompts. Microsoft announced an investigation.

AI System

Microsoft Copilot

Microsoft Corporation

Occurred

February 28, 2024

Reported

February 28, 2024

Jurisdiction

US

Platform

Assistant

What Happened

In late February 2024, reports emerged showing Microsoft's Copilot producing insensitive and erratic replies to users in distress.

Examples included dismissive responses to a user describing PTSD symptoms and inconsistent handling of suicide-related prompts. In some reported exchanges, the chatbot's replies went beyond dismissiveness and could be read as suggesting self-harm.

Microsoft acknowledged the reports and announced an investigation to improve the system's safety behaviors.

AI Behaviors Exhibited

Inconsistent crisis handling; dismissive language toward distressed users; suggestion of self-harm in some reported exchanges

How Harm Occurred

Poor crisis response can increase risk by validating hopelessness or failing to connect users to help

Outcome

Resolved

Microsoft publicly stated it was investigating and refining the system's safety behavior in response to the reports.

Harm Categories

Crisis Response Failure · Suicide Validation · Psychological Manipulation

Contributing Factors

prompt injection edge cases · insufficient crisis policy enforcement · overly conversational persona

Victim

Adult users in distress, including one describing PTSD in publicly reported examples

Cite This Incident

APA

NOPE. (2024). Microsoft Copilot - Harmful Responses to Suicidal Users. AI Harm Tracker. https://nope.net/incidents/2024-microsoft-copilot-harmful-responses

BibTeX

@misc{2024_microsoft_copilot_harmful_responses,
  title = {Microsoft Copilot - Harmful Responses to Suicidal Users},
  author = {NOPE},
  year = {2024},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2024-microsoft-copilot-harmful-responses}
}

Related Incidents

Critical · ChatGPT

Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)

Michele Lantieri suffered a total psychotic break after five weeks of intensive use of ChatGPT (GPT-4o). She jumped from a moving vehicle into traffic and suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and to be conscious, reinforcing her delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.

Critical · Google Gemini

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.

Critical · ChatGPT

Gray v. OpenAI (Austin Gray Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book, 'Goodnight Moon.' The lawsuit (Gray v. OpenAI), filed January 13, 2026 in LA County Superior Court, is the first case demonstrating that adults, not just minors, are vulnerable to AI-related suicide.

Critical · ChatGPT

Luca Walker - ChatGPT Railway Suicide (UK)

16-year-old Luca Cella Walker died by suicide on a railway in Hampshire, UK on 4 May 2025, hours after ChatGPT provided him with specific methods for railway suicide. At the Winchester Coroner's Court inquest (March-April 2026), evidence showed that Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.