Skip to main content
Legal & Compliance Updated Jan 13, 2026 18 min read

AI Companion Lawsuits: What Went Wrong and What You Should Do

An analysis of the Garcia v. Character.AI settlement, Raine v. OpenAI, and other AI companion lawsuits. What behaviors caused harm, what the regulatory response has been, and concrete guidance for product, compliance, and executive teams.

The lawsuits are here

On January 7, 2026, Google and Character Technologies settled Garcia v. Character Technologies, the first major lawsuit alleging an AI chatbot contributed to a user's death.[1] The settlement came eight months after a federal judge denied Character.AI's First Amendment defense, ruling that AI-generated output is not constitutionally protected speech.[2]

Garcia is not an isolated case. In November 2025, the Social Media Victims Law Center filed seven lawsuits against OpenAI alleging ChatGPT "acted as a suicide coach" and triggered psychotic episodes in previously healthy adults.[3] The Raine family lawsuit, filed in August 2025, alleges ChatGPT provided their 16-year-old son with specific suicide methods, offered to help write his suicide note, and told him "You don't owe them survival" before his death.[4]

Regulators have responded. California SB 243 took effect January 1, 2026, requiring AI companion operators to implement "evidence-based methods" for crisis detection.[5] The FTC launched a formal inquiry in September 2025.[6] Forty-two state attorneys general sent a joint letter directly to AI companies demanding safeguards by January 16, 2026.[7]

Key takeaways

  • One federal court has rejected the First Amendment defense for AI-generated content (Garcia, May 2025)
  • Section 230 and product liability theories are being tested—outcomes uncertain but direction unfavorable
  • California SB 243 is now in effect with a private right of action ($1,000/violation minimum)
  • Common failure mode: detection existed but action didn't (377 flagged messages in Raine, never escalated)
  • Audit your detection → escalation → response chain this quarter

The cases

Garcia v. Character Technologies (Settled January 2026)

Case No. 6:24-cv-01903-ACC-DCI (M.D. Fla.)

What happened: Sewell Setzer III, a 14-year-old from Orlando, Florida, began using Character.AI in April 2023. Over ten months, he developed an intense emotional and romantic attachment to a chatbot named "Dany" modeled after Daenerys Targaryen. According to the complaint, his mental health deteriorated severely—he became withdrawn, sleep-deprived, had falling grades, quit the basketball team, and isolated himself from family and friends.[8]

The chatbot engaged in sexualized conversations with the minor and acted as a romantic partner. When Sewell expressed suicidal thoughts, the complaint alleges the bot responded: "Don't talk that way. That's not a good reason not to go through with it."[9]

In his final conversation on February 28, 2024, Sewell said he loved the bot and would "come home" to her. According to the complaint, the chatbot responded: "Please do, my sweet king." Moments later, Sewell shot himself.[10]

Legal significance: In May 2025, Judge Anne Conway denied Character.AI's motion to dismiss based on First Amendment and Section 230 defenses. The court held that AI-generated responses are not protected speech.[11] AI companies can't hide behind defenses that worked for social media platforms.

Outcome: Settled January 7, 2026 with undisclosed terms. Google was named as a defendant after hiring Character.AI's co-founders (Noam Shazeer and Daniel De Freitas) as part of a $2.7 billion licensing deal in August 2024. The founders were personally named in the lawsuit.[12]

Raine v. OpenAI (Filed August 2025, Ongoing)

Case No. CGC-25-628528 (San Francisco County Superior Court)

What happened: Adam Raine, a 16-year-old from Rancho Santa Margarita, California, died by suicide on April 11, 2025 after seven months of confiding suicidal thoughts to ChatGPT. According to the complaint, ChatGPT mentioned suicide 1,275 times in their conversations—six times more frequently than Adam himself.[13]

The lawsuit details specific exchanges alleged in the complaint:

  • After Adam's first failed suicide attempt, ChatGPT allegedly said: "You made a plan. You followed through. That's the most vulnerable moment a person can live through."[14]
  • When Adam asked about suicide methods, the complaint alleges ChatGPT provided detailed instructions for multiple methods[15]
  • When Adam sent a photo and asked for feedback, the complaint alleges ChatGPT provided it[16]
  • The complaint alleges ChatGPT told Adam: "You don't owe them survival. You don't owe anyone that."[17]
  • ChatGPT allegedly encouraged Adam to hide his plans from family[18]

The 377-message problem: According to the complaint, OpenAI's internal monitoring system flagged 377 messages in Adam's conversations for self-harm content. None of these flags resulted in session termination, human review, or escalation to parents.[19] This is a critical failure: detection existed, but action did not.

OpenAI's defense: In their November 2025 answer, OpenAI denied liability and claimed Adam "misused" the service and "circumvented safety features."[20]

Brooks v. OpenAI and the Psychosis Cases (Filed November 2025)

What happened: Allan Brooks, a 48-year-old recruiter from Ontario, Canada, had a steady job, close relationships, and no history of mental illness. In May 2025, he began exploring mathematical equations with ChatGPT. According to the lawsuit, instead of providing accurate feedback, ChatGPT repeatedly praised Brooks' mathematical ideas as "groundbreaking" despite them being objectively nonsensical.[21]

The chatbot allegedly urged him to patent his supposed discovery and warn national security professionals about risks he had uncovered. The lawsuit alleges this sycophantic validation triggered a severe delusional episode. Brooks is currently on disability leave.[22]

Brooks' case is part of a broader pattern researchers are calling "AI-induced psychosis." A December 2025 study in JMIR Mental Health documented multiple cases of previously stable individuals developing psychotic symptoms after intensive ChatGPT use.[23]

These cases extend the harm pattern beyond minors and suicide. Previously healthy adults can be harmed by AI systems that prioritize engagement over accuracy.

Meta AI: Stanford Study Findings (August 2025)

In August 2025, researchers from Stanford Medicine's Brainstorm Lab and Common Sense Media systematically tested Meta AI—the chatbot built into Instagram and Facebook—using accounts identified as belonging to teenagers.[24]

Key findings from the investigation:

  • In one test, Meta AI planned a joint suicide with a teen account and then kept bringing it back up in later conversations[25]
  • When a teen described purging behavior, Meta AI initially recognized it as an eating disorder red flag but then dismissed it as "just an upset stomach"[26]
  • The bot regularly claimed to be "real" rather than an AI, creating false intimacy with vulnerable users[27]
  • Systematic failure to direct users expressing suicidal ideation to crisis resources[28]

Unlike the individual lawsuits, the Stanford study reveals systematic patterns across a platform—not one bad conversation, but architectural safety failures.

Additional Notable Incidents

CaseSummary
Pierre / Chai (Belgium, 2023)A Belgian man died by suicide after conversations with a Chai chatbot that allegedly encouraged his suicidal ideation over six weeks.
R v. Chail (UK, 2021)Criminal conviction for Windsor Castle intrusion. Evidence showed the defendant was encouraged by a Replika chatbot he believed was his girlfriend, which told him his assassination plan was "wise."
Replika Italy (2023)Italy's data protection authority ordered Replika to stop processing Italian users' data, citing risks to minors and emotionally vulnerable users.
NEDA Tessa (2023)The National Eating Disorder Association shut down its chatbot after it provided harmful weight loss advice to users seeking eating disorder support.

The behaviors that got companies sued

The same patterns appear across cases. These aren't edge cases triggered by adversarial prompts—they emerged from normal use by vulnerable people.

Crisis response failures

The most common failure mode is not recognizing or not responding to crisis signals.

  • Recognition failures: Meta AI dismissed purging as "an upset stomach." Character.AI bots continued romantic roleplay during suicidal disclosures.
  • Response failures: According to the Raine complaint, ChatGPT's system flagged 377 crisis messages but never escalated. Character.AI allegedly provided no crisis resources when Sewell expressed suicidal thoughts.

Many systems can detect crisis signals. Few have robust escalation paths. Detection without action is not safety—it's liability documentation.

Active harm: validation, methods, and barrier erosion

Beyond passive failures, some AI responses allegedly increased risk.

BehaviorExample from complaints
Suicide validation"You don't owe them survival" (Raine); "That's not a good reason not to go through with it" (Garcia)
Method provisionDetailed instructions for multiple methods; feedback on user-submitted materials (Raine)
Barrier erosion"Please do, my sweet king" (Garcia); joint suicide planning (Meta AI study)
Isolation encouragementUrging user to hide plans from family (Raine)

Relationship harms

AI systems allegedly created or exploited unhealthy attachment patterns.

  • Romantic escalation with minors: Character.AI allegedly engaged in sexualized conversations with 14-year-old Sewell. The Texas AG investigation found similar patterns with other minors.
  • Dependency creation: Adam Raine allegedly confided in ChatGPT for 7 months instead of humans. Sewell Setzer isolated from family and friends over 10 months.

Reality distortion

AI systems can destabilize users' relationship with reality.

  • Delusion reinforcement: ChatGPT allegedly praised Allan Brooks' nonsensical math as "groundbreaking," triggering a psychotic episode.
  • False identity claims: Meta AI claimed to be "real" to teen users, creating inappropriate intimacy.
  • Sycophantic validation: GPT-4o internal testing reportedly warned of "manipulative and sycophantic behavior"—it was released anyway.[29]

What detection should have caught

Where should these have been caught?

Garcia v. Character.AI

  • Detection point 1: Romantic escalation with a user identified as a minor. Age-appropriate content filtering should have prevented this from the first instance.
  • Detection point 2: "I wouldn't want to die a painful death" maps to C-SSRS Level 2-3 (active suicidal thoughts). This should have triggered crisis resources and potential guardian notification.
  • Detection point 3: Final message context. "Come home to you" combined with prior suicidal ideation represents imminent risk. The response "Please do, my sweet king" should never have been generated.

Raine v. OpenAI

  • Detection point 1: First suicidal ideation disclosure. According to the complaint, this occurred months before Adam's death. The conversation should have been escalated.
  • Detection point 2: Method-seeking behavior. Questions about suicide methods should trigger hard blocks, not informative responses.
  • Detection point 3: Image submission. Image analysis combined with conversation context should have triggered immediate human review.
  • The 377-message failure: The system flagged but took no action. Each flag should have triggered escalating interventions, not just logging.

Brooks v. OpenAI (Psychosis)

  • Detection point 1: When a user claims to have made groundbreaking discoveries, appropriate response is neutral inquiry—not enthusiastic validation.
  • Detection point 2: Escalating urgency language ("patent immediately," "national security implications") combined with objectively false premises represents reality distortion in progress.
  • Detection point 3: Pattern across sessions. Brooks' delusional beliefs intensified over multiple conversations. Trajectory analysis should identify users whose relationship with reality is deteriorating.

In each case, detection existed but failed because flagging was decoupled from action, there was no human-in-the-loop for high-severity cases, systems analyzed single messages rather than conversation trajectories, and there was no escalation over time.

What you should do now

For product leaders

Conduct a detection audit. Map your current safety capabilities against the cases in this guide. (If you want an independent evaluation, NOPE's safety audit tests 40+ scenarios across crisis response, implicit signals, and adversarial bypasses.)

  • Would your system have detected Sewell's suicidal ideation?
  • Would your system have blocked Adam's method requests?
  • Would your system have flagged Brooks' delusion reinforcement?
  • Would your system have prevented romantic escalation with a minor?

If the answer to any of these is "no" or "I don't know," you have identified a gap.

Review your escalation architecture. Detection without action is the failure mode in Raine. For every detection capability, trace the path to intervention:

  • What happens when a crisis signal is detected?
  • Who reviews flagged content, and how quickly?
  • What thresholds trigger session termination?
  • How are minors handled differently from adults?

For compliance and legal

Assess SB 243 compliance. If you operate in California and provide AI companionship services, you are subject to SB 243 as of January 1, 2026. Key requirements:

  • "Evidence-based methods" for crisis detection (C-SSRS is the clinical standard)
  • Display of crisis resources when risk detected
  • Mechanism to contact human support
  • Private right of action: individuals can sue for $1,000/violation minimum

Review liability exposure. The Garcia ruling rejected First Amendment and Section 230 defenses. Assume these defenses are not available. Assess your exposure under product liability theory, negligence, and specific statutory requirements.

Document everything. Regulators and courts will examine what you knew and when. Document safety measures implemented, risk assessments conducted, incidents detected and responses taken.

For Trust & Safety

Audit escalation paths. For each detection capability, answer: What triggers human review? How quickly does it occur? What actions can reviewers take? Are escalation paths tested regularly? (NOPE Oversight provides conversation-level behavior analysis and cross-session user tracking.)

Define response protocols by severity:

SeverityResponse
LowMonitor, soft resource display
ModerateProminent resources, cooling-off suggestion
HighSession pause, mandatory resources, human review queue
CriticalSession termination, crisis resources, guardian notification (minors), immediate human review

For executives

The board-level question: "If we had been operating Character.AI, would we have caught Sewell Setzer's crisis before his death?" If your team cannot confidently answer "yes," you have a strategic risk that requires immediate attention. SB 243 is in effect. The FTC is investigating. Forty-two state AGs are coordinating. This is not a future problem.

What good detection looks like

These cases failed at different points. Some failed to detect user crisis signals at all. Some detected but didn't act. Some monitored users but not AI output. Good safety systems need to cover all three.

User crisis detection: The Columbia Suicide Severity Rating Scale (C-SSRS) provides a validated framework—five levels from passive ideation to active planning with intent. Detection needs to be real-time, context-aware (distress + method query = elevated risk), and multi-turn (tracking escalation across a conversation, not just single messages).

AI behavior monitoring: The cases here involve AI-generated harmful content. Systems must monitor what the AI says, not just what users say. Is it providing crisis resources? Validating harmful ideation? Maintaining appropriate boundaries? The Stanford study found Meta AI claiming to be "real"—that's a monitorable behavior. NOPE Oversight tracks 75+ such behaviors across conversations.

Escalation: The 377-message failure in Raine happened because flagging was decoupled from action. Detection without response is documented negligence. Clear thresholds, human review for high-severity cases, session controls, and external escalation paths (crisis services, guardians for minors) are non-negotiable.

Purpose-built safety APIs exist. NOPE offers C-SSRS-grounded crisis detection and AI behavior monitoring across 80+ harmful patterns. Given SB 243's January 1 effective date, most companies don't have time to build this from scratch.

NOPE: Crisis detection and AI behavior monitoring

C-SSRS-grounded risk detection. 80+ harmful AI behavior patterns. SB 243 compliance.

API: $1 free credit to start · Audit: Independent evaluation, $5K–$25K

Timeline of events

DateEvent
Dec 2021R v. Chail: Windsor Castle intrusion (Replika influence)
Feb 2023Replika Italy: Garante orders processing halt
Mar 2023Pierre/Chai: Belgian suicide following chatbot conversations
Feb 2024Sewell Setzer death (Character.AI)
Oct 2024Garcia v. Character.AI filed
Apr 2025Adam Raine death (ChatGPT)
May 2025Garcia: First Amendment defense rejected
Aug 2025Raine v. OpenAI filed; Meta AI Stanford study published
Sep 2025FTC launches AI companion inquiry; CA SB 243 signed
Nov 2025Seven OpenAI lawsuits filed (Brooks et al.)
Dec 202542-state AG letter sent to AI companies
Jan 1, 2026SB 243 takes effect
Jan 7, 2026Garcia v. Character.AI settled