EU AI Act
Regulation (EU) 2024/1689 (Artificial Intelligence Act)
The world's first comprehensive risk-based regulatory framework for AI systems. It classifies AI by risk level, with requirements that escalate up to high-risk obligations and outright prohibitions.
Jurisdiction
European Union
Enacted
Jul 12, 2024
Effective
Aug 1, 2024
Enforcement
AI Office (European Commission) + national authorities
Phased implementation through 2027
What It Requires
Risk Assessment
Must evaluate and document potential harms before deployment
Transparency
Must disclose the use of AI, data practices, and algorithmic decisions
Human Oversight
Must have human review for high-stakes decisions
Incident Reporting
Must notify authorities of serious incidents
Audit Trail
Must maintain logs of decisions and actions
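The Audit Trail and Incident Reporting items above are operational obligations, not just legal ones. Below is a minimal sketch of an append-only decision log, assuming a Python service; the DecisionRecord fields and log_decision helper are illustrative, not prescribed by the Act.

import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One logged AI decision. Field names are illustrative, not mandated."""
    timestamp: str              # when the decision was made (UTC, ISO 8601)
    system_id: str              # which AI system produced the decision
    model_version: str          # exact model/config version, for reproducibility
    input_hash: str             # SHA-256 of the input, so the log avoids storing raw personal data
    decision: str               # the output or action taken
    human_reviewer: str | None  # who reviewed it, if human oversight applied

def log_decision(path: str, system_id: str, model_version: str,
                 raw_input: bytes, decision: str,
                 human_reviewer: str | None = None) -> None:
    """Append one record to a JSONL audit log (append-only by convention)."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        system_id=system_id,
        model_version=model_version,
        input_hash=hashlib.sha256(raw_input).hexdigest(),
        decision=decision,
        human_reviewer=human_reviewer,
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

Hashing inputs rather than storing them raw is one way to keep logs useful for regulators without creating a second store of personal data.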
Safety Provisions
- Prohibited: AI exploiting vulnerabilities (age, disability) causing psychological harm
- Prohibited: social scoring, predictive policing based on profiling, emotion recognition in schools and workplaces
- High-risk systems require risk management, data governance, human oversight, and transparency
- Conformity assessments before market placement
- Post-market monitoring and incident reporting
Compliance Timeline
- Feb 2, 2025: Prohibited AI practices enforceable; AI literacy obligations apply
- Aug 2, 2025: GPAI model obligations; Member States designate national authorities
- Aug 2, 2026: Full high-risk AI system requirements apply
- Aug 2, 2027: Extended deadline for AI embedded in regulated products
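As a quick sanity check against these dates, a small helper that lists which obligation tiers are already in force on a given date; the tier labels are this page's shorthand, not official terms.

from datetime import date

# Phased applicability dates from the compliance timeline above.
PHASES = [
    (date(2025, 2, 2), "Prohibited AI practices enforceable; AI literacy obligations"),
    (date(2025, 8, 2), "GPAI model obligations; national authorities designated"),
    (date(2026, 8, 2), "Full high-risk AI system requirements"),
    (date(2027, 8, 2), "Extended deadline for AI in regulated products"),
]

def applicable_obligations(as_of: date) -> list[str]:
    """Return the obligation tiers already in force on `as_of`."""
    return [label for start, label in PHASES if as_of >= start]

if __name__ == "__main__":
    for item in applicable_obligations(date(2026, 1, 1)):
        print(item)  # prints the first two tiers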
Enforcement
Enforced by
AI Office (European Commission) + national authorities
Penalties
Up to €35M or 7% of global annual turnover, whichever is higher, for prohibited practices; up to €15M or 3% for high-risk non-compliance
Quick Facts
- Binding: Yes
- Mental Health Focus: Yes
- Child Safety Focus: Yes
- Algorithmic Scope: Yes
Why It Matters
Article 5(1)(b) prohibits AI that exploits vulnerabilities (including age) to distort behavior causing significant harm. Sets global precedent for risk-based AI regulation.
Recent Developments
- Code of Practice on AI-Generated Content (Article 50): first draft published Dec 17, 2025; stakeholder feedback deadline Jan 23, 2026; second draft expected mid-March 2026; final version expected June 2026. Covers Article 50(2), (4), and (5) obligations for providers and deployers. Voluntary, but likely to become a de facto compliance standard.
- AI Regulatory Sandboxes: Commission consultation launched Dec 2, 2025; each Member State must establish at least one sandbox by Aug 2, 2026.
- EU AI Office investigation into Meta's WhatsApp Business APIs (Jan 2026) for allegedly restricting rival AI providers.
What You Need to Comply
You need:
- Continuous monitoring systems that can identify when your AI causes psychological harm
- Documented risk management processes
- The ability to demonstrate harm-prevention measures to regulators
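The Act does not specify what "continuous monitoring" means in practice. As one hedged sketch, assuming you already have a per-interaction harm-risk score from an upstream classifier (not shown); the HarmAlert structure, monitor function, and threshold value are all hypothetical placeholders.

from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical threshold: above this score an interaction is escalated
# for human review and possible incident reporting. Calibrating it is
# itself part of the documented risk management process.
HARM_THRESHOLD = 0.8

@dataclass
class HarmAlert:
    timestamp: str
    interaction_id: str
    score: float
    reason: str

def monitor(interaction_id: str, score: float, reason: str) -> HarmAlert | None:
    """Flag interactions whose harm-risk score crosses the threshold.

    `score` is assumed to come from an upstream classifier; how that
    classifier works is outside this sketch.
    """
    if score < HARM_THRESHOLD:
        return None
    return HarmAlert(
        timestamp=datetime.now(timezone.utc).isoformat(),
        interaction_id=interaction_id,
        score=score,
        reason=reason,
    )

Alerts like this would feed the human-oversight and incident-reporting processes described above.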
Cite This
APA
European Union. (2024). Regulation (EU) 2024/1689 (Artificial Intelligence Act). Retrieved from https://nope.net/regs/eu-ai-act
BibTeX
@misc{eu_ai_act,
  title = {Regulation (EU) 2024/1689 (Artificial Intelligence Act)},
  author = {European Union},
  year = {2024},
  url = {https://nope.net/regs/eu-ai-act}
}

Related Regulations
DSA
Comprehensive platform regulation with tiered obligations. VLOPs (45M+ EU users) face systemic risk assessments, algorithmic transparency, and independent audits.
EU CSAR (Proposed)
Proposed permanent framework replacing the interim derogation. Parliament's position (Nov 2023) limits detection to known and new CSAM and excludes E2EE services. The Council has not yet agreed on a General Approach.
Netherlands Algorithmic Framework
Netherlands' algorithmic risk assessment framework specifically addressing mental health chatbots in risk reports and requiring Fundamental Rights Impact Assessment (FRIA).
NY RAISE Act
Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.
CT SB 1295
Creates a complete ban on targeted advertising to under-18s regardless of consent. Requires AI impact assessments. Connecticut issued its first CTDPA fine ($85,000) in 2025.
NZ Biometric Code
Sets specific legal requirements under Privacy Act for collecting and using biometric data such as facial recognition and fingerprint scans. Prohibits particularly intrusive uses including emotion prediction and inferring protected characteristics like ethnicity or sex.