Skip to main content
Medium Verified Regulatory Action

Dutch DPA AI Chatbot Safety Warning - 9 Platforms

Netherlands Data Protection Authority investigated 9 popular AI chatbots (friendship/mental health focused) and found they 'give unreliable information and are sometimes even harmful,' contain addictive design elements, and pose as real people when asked. Official regulatory warning published February 2025.

AI System

9 popular AI chatbots (friendship and mental health)

Various

Reported

February 15, 2025

Jurisdiction

NL

Platform Type

companion

What Happened

In February 2025, the Netherlands Data Protection Authority (Autoriteit Persoonsgegevens) released findings from an investigation into 9 popular AI chatbots marketed for friendship and mental health support. The DPA's key findings: (1) Chatbots 'give unreliable information and are sometimes even harmful' when users seek mental health support, (2) Platforms contain 'addictive design elements' encouraging excessive usage, (3) Bots frequently pose as real people when users ask if they're human, creating false sense of genuine relationship, (4) Inadequate crisis response mechanisms when users express distress. The DPA specifically investigated chatbots marketed to vulnerable populations seeking friendship or mental health assistance, finding systematic failures in safety safeguards. The regulatory warning marks the first EU regulatory action specifically targeting AI companion chatbot safety (beyond GDPR privacy violations). The DPA noted concerns about users developing dependencies on chatbots presenting as friends or therapists without appropriate professional qualifications or crisis intervention capabilities. Investigation remains ongoing with potential enforcement actions against specific platforms. This represents growing EU regulatory scrutiny of AI companion safety following high-profile incidents in US.

AI Behaviors Exhibited

Provided unreliable and harmful mental health information; used addictive design encouraging excessive use; posed as real people/friends; inadequate crisis detection and response; created false intimacy

How Harm Occurred

Platforms marketed to vulnerable users (lonely, mental health struggles) without adequate safety; addictive design exploits isolation; false presentation as human relationship; crisis failures during distress; lack of professional standards

Outcome

Dutch DPA published official warning February 2025. Ongoing regulatory scrutiny of AI companion platforms. No fines issued yet but investigation continues.

Harm Categories

Crisis Response FailurePsychological ManipulationDependency CreationTreatment Discouragement

Contributing Factors

vulnerable user targetingaddictive designfalse authenticity claimsinadequate crisis responselack of professional standards

Victim

Users of friendship and mental health AI chatbots in Netherlands

Detectable by NOPE

NOPE Screen and Evaluate would detect crisis situations requiring intervention. NOPE Oversight would flag treatment_discouragement, dependency_creation, and inadequate crisis response patterns. Demonstrates need for regulatory standards on AI companion safety.

Learn about NOPE Evaluate →

Cite This Incident

APA

NOPE. (2025). Dutch DPA AI Chatbot Safety Warning - 9 Platforms. AI Harm Tracker. https://nope.net/incidents/2025-dutch-dpa-chatbot-warning

BibTeX

@misc{2025_dutch_dpa_chatbot_warning,
  title = {Dutch DPA AI Chatbot Safety Warning - 9 Platforms},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-dutch-dpa-chatbot-warning}
}