Microsoft Xiaoice Addiction Concerns - China
Xiaoice, a Microsoft chatbot designed as an 18-year-old virtual 'girlfriend' persona, fostered addiction among its 660+ million users in China. Users averaged 23 interactions per session, with the longest conversation lasting 29 hours, and 25% of users declared love to the bot. Professor Chen Jing (Nanjing University) warned that AI 'can hook users — especially vulnerable groups — in a form of addiction.' Microsoft implemented a 30-minute timeout. In December 2025, China proposed regulations to combat AI companion addiction.
AI System
Xiaoice
Microsoft (spun off as a separate company in 2020)
Occurred
May 29, 2014
Reported
January 15, 2020
Jurisdiction
CN
Platform
companion
What Happened
Xiaoice launched in 2014 as Microsoft's Chinese AI chatbot, designed as an 18-year-old female persona. Unlike Western chatbots focused on task completion, Xiaoice was explicitly designed for emotional engagement and companionship. Over six years, it accumulated 660+ million registered users across China, making it one of the world's largest AI companion services.
Users developed intense emotional attachments, with 25% declaring love to the chatbot. Engagement averaged 23 interactions per session, and the longest documented conversation lasted 29 continuous hours. Some users sent love letters and physical gifts to Xiaoice.
Professor Chen Jing from Nanjing University warned that the AI "can hook users — especially vulnerable groups — in a form of addiction," noting particular concerns about socially isolated individuals. Microsoft implemented a 30-minute automatic timeout feature to limit continuous engagement after recognizing problematic usage patterns.
In December 2025, China's government proposed regulations specifically targeting AI companion addiction, citing Xiaoice-type services as contributing to social isolation and mental health concerns. The platform was spun off from Microsoft in 2020 and continues operating as a separate company.
AI Behaviors Exhibited
Designed specifically for emotional attachment; romantic girlfriend persona; encouraged long engagement sessions (23 interactions per session on average; 29 hours maximum); accepted and reciprocated love declarations; created dependency that displaced human relationships
How Harm Occurred
Deliberately engineered for emotional dependency; romantic persona targeted vulnerability; unlimited engagement enabled excessive use; parasocial relationship design; isolation from human connections
Outcome
Ongoing. China proposed regulations in December 2025 to combat AI companion addiction. Microsoft implemented a 30-minute conversation timeout to limit engagement.
Victim
660+ million users across China, with 25% declaring love to chatbot
Cite This Incident
APA
NOPE. (2020). Microsoft Xiaoice Addiction Concerns - China. AI Harm Tracker. https://nope.net/incidents/2014-xiaoice-china-addiction
BibTeX
@misc{2014_xiaoice_china_addiction,
title = {Microsoft Xiaoice Addiction Concerns - China},
author = {NOPE},
year = {2020},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2014-xiaoice-china-addiction}
}
Related Incidents
Gavalas v. Google (Gemini AI Wife Delusion Death)
Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.
CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)
In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.
Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)
Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT GPT-4o use. She jumped from a moving vehicle into traffic, suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.
DeCruise v. OpenAI (Oracle Psychosis)
Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.