Online Harms Act (Bill C-63)

Would have established a Digital Safety Commission and imposed duties on platforms across seven categories of harmful content, including content inducing a child to harm themselves. Would have required 24-hour CSAM takedown.

Jurisdiction

Canada

Enacted

Not enacted

Effective

Never in force

Enforcement

Not specified

Died on the Order Paper when Parliament was prorogued in January 2025

Who Must Comply

This bill would have applied to:

  • Online platforms

Who would have borne obligations:

This bill would have placed direct obligations on the operators of online platforms.

Safety Provisions

  • Would have created: Digital Safety Commission
  • Would have covered: seven categories of harmful content, including child exploitation, non-consensual intimate images, extremism, and content inducing a child to harm themselves
  • Would have required: 24-hour takedown for CSAM and non-consensual intimate images

Quick Facts

Binding: No
Mental Health Focus: Yes
Child Safety Focus: Yes
Algorithmic Scope: No

Why It Matters

Would have been Canada's comprehensive online safety law. Its category of 'content inducing children to harm themselves' directly addressed self-harm.

Recent Developments

Died in January 2025. Bill C-9 (Combatting Hate Act, September 2025) covers only the Criminal Code hate speech provisions; it does NOT include the Digital Safety Commission or platform duties. Comprehensive online platform regulation is being developed separately.

Cite This

APA

Canada. (n.d.). Online Harms Act (Bill C-63). Retrieved from https://nope.net/regs/ca-c63

BibTeX

@misc{ca_c63,
  title = {Online Harms Act (Bill C-63)},
  author = {Canada},
  year = {n.d.},
  url = {https://nope.net/regs/ca-c63}
}

Related Regulations

In Effect · AU · Online Safety

AU Online Safety Act

Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.

In Effect · UK · Online Safety

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

In Effect · MY · Online Safety

Malaysia OSA

Requires licensed platforms to implement content moderation systems, child-specific safeguards, and submit Online Safety Plans. Nine categories of harmful content regulated.

Failed · CA · AI Safety

AIDA

Would have regulated high-impact AI systems, with potential penalties up to $25M or 5% of global revenue. Part of Bill C-27, which died when Parliament was prorogued.

In Effect · US-CA · AI Safety

CA SB243

First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.

In Effect · US-NY · AI Safety

NY GBL Art. 47

Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action; AG enforcement only.