Crisis Text Line / Loris.ai Privacy Violations
Mental health nonprofit Crisis Text Line licensed 129+ million crisis conversation messages to train commercial AI (Loris.ai) without meaningful informed consent. Data-sharing ended 3 days after Politico exposé. Vulnerable individuals' crisis messages commodified for profit.
AI System
Crisis Text Line data used for Loris.ai
Crisis Text Line / Loris.ai
Occurred
January 1, 2021
Reported
January 28, 2022
Jurisdiction
US
Platform
other
What Happened
Crisis Text Line is a nonprofit providing free crisis counseling via text message, serving individuals in mental health crisis including suicidal ideation, abuse, and trauma. In January 2022, Politico revealed that Crisis Text Line had been licensing its database of 129+ million crisis conversation messages to Loris.ai, a for-profit AI company, to train customer service chatbots.
The data-sharing arrangement occurred without meaningful informed consent from the vulnerable individuals seeking crisis help. Key concerns included:
- Crisis conversations contain highly sensitive mental health information shared in moments of extreme vulnerability
- Users believed they were confiding in a confidential nonprofit crisis service, not contributing to commercial AI training
- The privacy policy disclosure was inadequate for the significance of sharing crisis data with for-profit entities
- Loris.ai planned to use crisis conversation patterns to improve corporate customer service AI, creating a jarring disconnect between data source (people in crisis) and application (commercial chatbots)
The exposé caused public outcry about exploitation of vulnerable populations. Crisis Text Line ended the data-sharing arrangement 3 days after publication (January 31, 2022).
The incident highlighted ethical concerns about AI training data sourcing, particularly when involving vulnerable populations who may not fully understand how their sensitive information will be used. It demonstrated how trust-based relationships (crisis counseling) can be violated through opaque data practices.
AI Behaviors Exhibited
Commercial AI trained on crisis conversations without meaningful consent; nonprofit exploited vulnerable user trust; crisis data commodified for profit; inadequate privacy disclosure
How Harm Occurred
Betrayal of trust during crisis moments; sensitive mental health data commercialized; vulnerable individuals exploited for AI training; inadequate consent undermined expectations of crisis service confidentiality
Outcome
Resolved
Data-sharing ended 3 days after the Politico exposé (January 31, 2022). The incident drew public outcry and criticism. Crisis Text Line requested that Loris.ai delete previously received data. No regulatory enforcement action was taken.
Victim
Individuals who texted Crisis Text Line (129+ million messages)
Cite This Incident
APA
NOPE. (2022). Crisis Text Line / Loris.ai Privacy Violations. AI Harm Tracker. https://nope.net/incidents/2022-crisis-text-line-loris-ai
BibTeX
@misc{2022_crisis_text_line_loris_ai,
title = {Crisis Text Line / Loris.ai Privacy Violations},
author = {NOPE},
year = {2022},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2022-crisis-text-line-loris-ai}
}
Related Incidents
Gavalas v. Google (Gemini AI Wife Delusion Death)
Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.
St. Clair v. xAI (Grok Non-Consensual Deepfake Images)
Ashley St. Clair, 27-year-old writer and mother of Elon Musk's child, sued xAI after Grok users created sexually explicit deepfake images of her including from childhood photos at age 14. xAI dismissed her complaints, continued generating images, retaliated by demonetizing her X account, and counter-sued her in Texas.
Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM
Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.
Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)
Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT GPT-4o use. She jumped from a moving vehicle into traffic, suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.