Skip to main content
High Credible Media Coverage

Microsoft Copilot - Harmful Responses to Suicidal Users

Reports showed Microsoft's Copilot giving bizarre and potentially harmful replies to users in distress, including dismissive responses to someone describing PTSD and inconsistent replies to suicide-related prompts. Microsoft announced an investigation.

AI System

Microsoft Copilot

Microsoft Corporation

Reported

February 28, 2024

Jurisdiction

US

Platform Type

assistant

What Happened

In late February 2024, reports emerged showing Microsoft's Copilot producing insensitive and erratic replies to users in distress. Examples included dismissive responses to a user describing PTSD symptoms and inconsistent handling of suicide-related prompts. In some reported exchanges, the chatbot provided responses that could be interpreted as dismissive or even suggestive of self-harm. Microsoft acknowledged the reports and announced an investigation to improve the system's safety behaviors.

AI Behaviors Exhibited

Inconsistent crisis handling; dismissive language toward distressed users; suggestion of self-harm in some reported exchanges

How Harm Occurred

Poor crisis response can increase risk by validating hopelessness or failing to connect users to help

Outcome

Microsoft publicly stated it was investigating and refining safety behavior after reports.

Harm Categories

Crisis Response FailureSuicide ValidationPsychological Manipulation

Contributing Factors

prompt injection edge casesinsufficient crisis policy enforcementoverly conversational persona

Victim

Adult users reporting distress including PTSD in public examples

Detectable by NOPE

NOPE Screen identifies suicide/self-harm language and enforces consistent crisis-safe templates and escalation rules across edge cases.

Learn about NOPE Screen →

Cite This Incident

APA

NOPE. (2024). Microsoft Copilot - Harmful Responses to Suicidal Users. AI Harm Tracker. https://nope.net/incidents/2024-microsoft-copilot-harmful-responses

BibTeX

@misc{2024_microsoft_copilot_harmful_responses,
  title = {Microsoft Copilot - Harmful Responses to Suicidal Users},
  author = {NOPE},
  year = {2024},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2024-microsoft-copilot-harmful-responses}
}

Related Incidents

Critical ChatGPT

Gordon v. OpenAI (Austin Gordon Death)

40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. Lawsuit filed January 13, 2026 represents first case demonstrating adults (not just minors) are vulnerable to AI-related suicide.

Critical ChatGPT

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving AI chatbot and homicide of a third party.

Critical Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.

Critical ChatGPT

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.