AU Online Safety Act
Online Safety Act 2021
Grants the eSafety Commissioner powers to issue removal notices requiring compliance within 24 hours. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.
Jurisdiction
Australia
AU
Enacted
Jul 23, 2021 (Royal Assent)
Effective
Jan 23, 2022
Enforcement
eSafety Commissioner
What It Requires
Who Must Comply
This law applies to:
- Online services available to Australians
Safety Provisions
- Removal notices with a 24-hour compliance deadline (see the sketch after this list)
- Class 1 material (CSAM, terrorism, extreme violence) must be removed
- Basic Online Safety Expectations: safety by design, responsiveness, transparency
- Industry codes for harmful content categories
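To make the operational constraint of the 24-hour window concrete, here is a minimal sketch of a deadline tracker, assuming the clock starts when the notice is served. All names (`RemovalNotice`, the notice ID, the field names) are hypothetical illustrations, not an official eSafety API or data format.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# The Act's 24-hour removal window, modeled as a simple deadline.
REMOVAL_WINDOW = timedelta(hours=24)

@dataclass
class RemovalNotice:
    notice_id: str          # hypothetical identifier format
    received_at: datetime   # timezone-aware time the notice was served

    @property
    def deadline(self) -> datetime:
        """Latest compliant removal time under the 24-hour window."""
        return self.received_at + REMOVAL_WINDOW

    def time_remaining(self, now: datetime | None = None) -> timedelta:
        """Time left before the notice deadline (negative if overdue)."""
        now = now or datetime.now(timezone.utc)
        return self.deadline - now

# Usage: a notice received at 03:00 UTC must be actioned by 03:00 UTC next day.
notice = RemovalNotice("HYPOTHETICAL-0001",
                       datetime(2025, 1, 10, 3, 0, tzinfo=timezone.utc))
print(notice.deadline)          # 2025-01-11 03:00:00+00:00
print(notice.time_remaining())  # positive while still compliant
```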
Compliance Timeline
Jan 23, 2022
Act fully in force
Jan 1, 2024
Phase 1 Unlawful Material Codes in operation
Dec 27, 2025
Phase 2 first tranche codes effective (hosting, search engines)
Mar 9, 2026
Phase 2 second tranche codes effective (social media, apps)
Jun 27, 2026
Search engines implement logged-in age assurance
Sep 9, 2026
App stores implement age assurance for 18+ apps
Enforcement
Enforced by
eSafety Commissioner
Penalties
A$825K/day
Up to A$825,000 per day for corporations breaching removal notices
Quick Facts
- Binding
- Yes
- Mental Health Focus
- Yes
- Child Safety Focus
- Yes
- Algorithmic Scope
- No
Why It Matters
Australia is among the most explicit and proactive regulators on AI chatbot safety and self-harm specifically. eSafety is actively targeting AI chatbots with legal notices and enforcement powers. Non-compliance with reporting notices carries penalties of up to A$825,000 per day.
Recent Developments
eSafety issued legal notices to AI companion providers (Oct 23, 2025) demanding explanations of their child safety controls, explicitly citing suicide and self-harm risks. Companies notified: Character.AI (~160K Australian MAUs as of June 2025), Nomi (Glimpse.AI), Chai Research, and Chub AI.
What You Need to Comply
You need rapid detection systems to meet 24-hour removal deadlines, and safety-by-design controls to reduce harmful content exposure. If you run an AI chatbot, eSafety is actively demanding documentation of suicide/self-harm detection capabilities, so be prepared to explain your approach. A sketch of one possible triage pattern follows.
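One way a detection system can feed human review fast enough to meet the removal window is a risk-ordered triage queue. The sketch below assumes a classifier score is already available; the threshold value, class names, and field names are all assumptions for illustration, not a prescribed or endorsed design.

```python
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Assumed operating point for a self-harm classifier; tune per model.
SELF_HARM_THRESHOLD = 0.85

@dataclass(order=True)
class FlaggedItem:
    priority: float                          # negated score: highest risk pops first
    content_id: str = field(compare=False)   # hypothetical content identifier
    flagged_at: datetime = field(compare=False)

class ReviewQueue:
    """Risk-ordered queue routing flagged content to human review."""

    def __init__(self) -> None:
        self._heap: list[FlaggedItem] = []

    def ingest(self, content_id: str, score: float) -> None:
        """Queue content whose classifier score crosses the threshold."""
        if score >= SELF_HARM_THRESHOLD:
            heapq.heappush(self._heap, FlaggedItem(
                priority=-score,
                content_id=content_id,
                flagged_at=datetime.now(timezone.utc),
            ))

    def next_for_review(self) -> FlaggedItem | None:
        """Pop the highest-risk item, or None if the queue is empty."""
        return heapq.heappop(self._heap) if self._heap else None

# Usage: highest-scoring item surfaces first for review.
queue = ReviewQueue()
queue.ingest("post-123", 0.91)
queue.ingest("post-456", 0.97)
print(queue.next_for_review().content_id)  # post-456
```

The priority is negated because `heapq` is a min-heap; recording `flagged_at` lets a reviewer dashboard track each item against the notice deadline.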
Cite This
APA
Australia. (2022). Online Safety Act 2021. Retrieved from https://nope.net/regs/au-osa
BibTeX
@misc{au_osa,
title = {Online Safety Act 2021},
author = {Australia},
year = {2022},
url = {https://nope.net/regs/au-osa}
}
Related Regulations
C-63
Would have established a Digital Safety Commission with platform duties for seven harmful content categories, including content inducing children to harm themselves. Required 24-hour CSAM takedown.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
AU Deepfake Sexual Material Act
Creates Commonwealth criminal offences for "deepfake sexual material" (AI/synthetic intimate imagery) without consent. Part of Australia's layered approach: criminal law + eSafety platform enforcement.
AU Social Media Age Ban
World's first social media minimum age law. Platforms must prevent under-16s from holding accounts. Implementation depends on age assurance technology.
State AG AI Warning
Coordinated state AG warnings: 44 AGs (Aug 25, 2025; led by the TN, IL, NC, and SC AGs) and 42 AGs (Dec 2025; led by the PA AG) wrote to OpenAI, Meta, and others, citing chatbots "flirting with children, encouraging self-harm, and engaging in sexual conversations."
China FR Security Measures
Comprehensive facial recognition regulation requiring consent, protecting minors, restricting public space use, mandating data localization, and requiring filing for large-scale processing (100K+ individuals).