Online Harms Act (Bill C-63)

Would have established a Digital Safety Commission and imposed duties on platforms for seven categories of harmful content, including content that induces a child to harm themselves. Would have required 24-hour takedown of child sexual abuse material (CSAM).

Jurisdiction

Canada

Enacted

No (died on Order Paper)

Effective

N/A (bill died)

Enforcement

N/A (bill died)

Died on Order Paper — Parliament prorogued Jan 2025

Parliament of Canada

Why It Matters

Would have been Canada's comprehensive online safety law. Its harm category 'content that induces a child to harm themselves' directly addressed self-harm.

Recent Developments

Died in January 2025. Bill C-9 (Combatting Hate Act, September 2025) covers only the Criminal Code hate speech provisions; it does NOT include the Digital Safety Commission or platform duties. Comprehensive online platform regulation is being developed separately.

At a Glance

Applies to

Social Platform · Online Platform · General Chatbot

Who Must Comply

  • Would have applied to online platforms

Safety Provisions

  • Would have created: Digital Safety Commission
  • Would have covered seven harm categories, including child exploitation, non-consensual intimate images, extremism, and content inducing self-harm in children
  • 24-hour takedown for CSAM and non-consensual intimate images

Focus Areas

Mental health & crisis
Child safety

Cite This

APA

Parliament of Canada. (2024). Online Harms Act (Bill C-63).

Related Regulations

In Effect UK

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children, backed by 'highly effective' age assurance requirements.

In Effect IE

Ireland OSMR

Establishes Coimisiún na Meán (Media Commission) with binding duties for video-sharing platforms. One of the cleaner examples of explicit self-harm/suicide/eating-disorder content duties in platform governance.

In Effect AU

AU Online Safety Act

Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.

Failed CA

AIDA

Would have regulated high-impact AI systems, with potential penalties of up to $25M or 5% of global revenue. Part of Bill C-27, which died when Parliament was prorogued in January 2025.

In Effect UK

Ofcom Children's Codes

Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.

Pending US-FL

FL Companion Chatbot Act

Regulates companion AI chatbots with emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis referrals, a prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections such as AI disclosure, break reminders, and a prohibition on sexually explicit content.

Last updated January 22, 2026. Verify against primary sources before relying on this information.