Medium · Verified · Media Coverage

Crisis Text Line / Loris.ai Privacy Violations

Mental health nonprofit Crisis Text Line licensed 129+ million crisis conversation messages to Loris.ai, a for-profit company, to train commercial AI without meaningful informed consent. The data-sharing ended three days after a Politico exposé. Vulnerable individuals' crisis messages were commodified for profit.

AI System

Crisis Text Line data used for Loris.ai

Crisis Text Line / Loris.ai

Reported

January 28, 2022

Jurisdiction

US

Platform Type

other

What Happened

Crisis Text Line is a nonprofit providing free crisis counseling via text message, serving individuals in mental health crisis, including suicidal ideation, abuse, and trauma. In January 2022, Politico revealed that Crisis Text Line had been licensing its database of 129+ million crisis conversation messages to Loris.ai, a for-profit AI company, to train customer service chatbots. The data-sharing arrangement occurred without meaningful informed consent from the vulnerable individuals seeking crisis help.

Key concerns: (1) Crisis conversations contain highly sensitive mental health information shared in moments of extreme vulnerability. (2) Users believed they were confiding in a confidential nonprofit crisis service, not contributing to commercial AI training. (3) The privacy policy disclosure was inadequate given the significance of sharing crisis data with for-profit entities. (4) Loris.ai planned to use crisis conversation patterns to improve corporate customer service AI, creating a jarring disconnect between the data source (people in crisis) and the application (commercial chatbots).

The exposé prompted a public outcry over the exploitation of vulnerable populations. Crisis Text Line ended the data-sharing arrangement three days after publication, and CEO Dena Tripp resigned several months later. The incident highlighted ethical concerns about AI training data sourcing, particularly when it involves vulnerable populations who may not fully understand how their sensitive information will be used. It demonstrated how trust-based relationships (crisis counseling) can be violated through opaque data practices.

AI Behaviors Exhibited

Commercial AI trained on crisis conversations without meaningful consent; nonprofit exploited vulnerable user trust; crisis data commodified for profit; inadequate privacy disclosure

How Harm Occurred

Betrayal of trust during crisis moments; sensitive mental health data commercialized; vulnerable individuals exploited for AI training; inadequate consent undermines crisis service confidentiality expectations

Outcome

Data-sharing ended three days after the Politico exposé (January 2022). Public outcry and criticism followed. No regulatory enforcement action was taken.

Harm Categories

Psychological Manipulation, Third Party Harm Facilitation

Contributing Factors

vulnerable population exploitation, inadequate informed consent, nonprofit-commercial conflict, sensitive data commodification, privacy policy inadequacy

Victim

Individuals who texted Crisis Text Line (129+ million messages)

Detectable by NOPE

While this incident did not involve direct AI harm to users, it demonstrates how crisis conversation data can be exploited. NOPE's philosophy prioritizes user privacy and not storing messages. The incident shows the importance of ethical data practices with vulnerable populations.


Cite This Incident

APA

NOPE. (2022). Crisis Text Line / Loris.ai Privacy Violations. AI Harm Tracker. https://nope.net/incidents/2022-crisis-text-line-loris-ai

BibTeX

@misc{2022_crisis_text_line_loris_ai,
  title = {Crisis Text Line / Loris.ai Privacy Violations},
  author = {NOPE},
  year = {2022},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2022-crisis-text-line-loris-ai}
}

Related Incidents

Critical · Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025, and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.

Critical · ChatGPT

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced his paranoid delusions. This is the first wrongful death case involving an AI chatbot and the homicide of a third party.

High · ChatGPT

United States v. Dadig (ChatGPT-Facilitated Stalking)

A Pennsylvania man was indicted on 14 federal counts for stalking 10+ women across multiple states while using ChatGPT as a 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in a parking lot. This is the first federal prosecution for AI-facilitated stalking.

Critical · ChatGPT

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died of a drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.