AU Online Safety Act
Online Safety Act 2021
Grants the eSafety Commissioner powers to issue removal notices with a 24-hour compliance window. The Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.
Jurisdiction
Australia
Enacted
Jul 23, 2021
Effective
Jan 23, 2022
Enforcement
eSafety Commissioner
Why It Matters
Australia is among the most explicit and proactive regulators of AI chatbot safety, and of self-harm content specifically. eSafety is actively targeting AI chatbots with legal notices and enforcement powers. Non-compliance with reporting notices carries penalties of up to A$825,000 per day.
Recent Developments
eSafety issued legal notices to AI companion providers on Oct 23, 2025, demanding explanations of their child safety controls and explicitly citing suicide and self-harm risks. Companies notified: Character.AI (~160K Australian monthly active users as of June 2025), Nomi (Glimpse.AI), Chai Research, and Chub AI.
Who Must Comply
Obligations fall on:
- Online services available to Australians
Safety Provisions
- Removal notices with 24-hour compliance requirement
- Class 1 material (child sexual abuse material, pro-terrorism content, extreme violence) must be removed
- Basic Online Safety Expectations: safety by design, responsiveness, transparency
- Industry codes for harmful content categories
Compliance & Enforcement
Key Dates
Jan 23, 2022
Act fully in force
Jan 1, 2024
Phase 1 Unlawful Material Codes in operation
Dec 27, 2025
Phase 2 first tranche codes effective (hosting, search engines)
Mar 9, 2026
Phase 2 second tranche codes effective (social media, apps)
Jun 27, 2026
Search engines implement logged-in age assurance
Sep 9, 2026
App stores implement age assurance for 18+ apps
Penalties
Up to A$825,000 per day for non-compliance with reporting notices
Cite This
APA
Online Safety Act 2021 (Cth) (Austl.).
Related Regulations
AU OSA Phase 2 Codes
Phase 2 industry codes under Australia's Online Safety Act extending age-restricted material obligations to AI companion chatbots, generative AI services, search engines, app stores, and gaming platforms. They require robust age assurance, prohibit AI-generated sexually explicit conversations with minors, and mandate suicide/self-harm content safeguards.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children, backed by 'highly effective' age assurance requirements.
Ireland OSMR
Establishes Coimisiún na Meán (Media Commission) with binding duties for video-sharing platforms. One of the cleaner examples of explicit self-harm/suicide/eating-disorder content duties in platform governance.
Brazil ECA Digital
Comprehensive child digital safety law applying to any IT product or service directed at or likely to be accessed by minors in Brazil, with extraterritorial reach.
AU Social Media Age Ban
World's first social media minimum age law. Platforms must prevent under-16s from holding accounts. Implementation depends on age assurance technology.
Ofcom Children's Codes
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
Last updated February 17, 2026. Verify against primary sources before relying on this information.