How have platform content‑moderation policies in Australia changed since the passage of the Combatting Antisemitism, Hate and Extremism Act 2026?
Executive summary
The Combatting Antisemitism, Hate and Extremism Act 2026 was rushed through Parliament in January 2026. It expanded criminal offences tied to the online spread of extremist and hateful material and added new migration and customs powers, measures that compelled online platforms to reassess risk and compliance, particularly around violent extremist content and youth-directed material [1] [2] [3]. Reporting and government documents describe the legal changes and the likely compliance obligations for platforms, but they do not provide a comprehensive catalogue of specific platform policy revisions, so definitive claims about every major platform’s rulebook are not supported by the available sources [4] [5].
1. The legal change that matters to platforms: carriage services and aggravated online offences
The Act creates a new aggravated offence for using a carriage service to disseminate violent extremist material, and a separate aggravated offence where an adult shares violent extremist material with someone under 18, clearly broadening criminal liability for online communications and signalling a lower tolerance for extremist content distributed via social media and messaging services [3] [2]. Parliament’s explanatory materials and fact sheet frame these measures as targeted responses to online radicalisation and the spread of hate following the Bondi Beach attack, and they explicitly tie the law to communications delivered by “carriage services,” the statutory term that covers internet platforms [4] [3].
2. Regulatory and enforcement hooks that press platforms to act
Beyond criminal offences, the Act amends migration and customs law to allow visa refusals or cancellations and to prohibit the import and export of violent extremist goods and prohibited hate symbols, extending the government’s regulatory reach into content-adjacent goods and individuals’ movement [3] [2]. These changes create operational compliance responsibilities for platforms that host marketplaces or facilitate cross-border transactions, and they may heighten government expectations for notice-and-takedown cooperation. The government’s public messaging characterised the package as comprehensive and punitive, designed to “ensure those that seek to spread hate, division and radicalisation are met with severe penalties,” creating a political climate in which platforms face pressure to demonstrate active moderation [2] [3].
3. How platforms are likely to have shifted (what the evidence supports and what it does not)
Given the law’s focus on carriage services and youth-targeted aggravation, the available sources make it reasonable to expect that platforms tightened rules on violent extremist content, implemented stricter age-gating and escalated moderation of content shared with minors, and refined processes for removing prohibited symbols or material flagged under the new customs regime. However, the government material and parliamentary reviews describe the statutory framework and policy intent without documenting specific corporate policy changes by Facebook, X, TikTok or others, so claims about concrete platform rule edits or enforcement levels are not supported by the provided reporting [3] [4] [6]. Media coverage notes the law’s rapid passage and political imperative, which likely accelerated platform compliance conversations, but direct evidence of policy text changes or enforcement outcomes is absent from these sources [7] [8].
4. Legal and civic pushback that shapes platform responses
Legal stakeholders flagged the need for deliberation and procedural safeguards: the Law Council supported the bill’s objective but emphasised the importance of consultation, legal clarity and time for scrutiny, feedback that could temper overbroad private enforcement by platforms wary of legal ambiguity and reputational risk [5]. Parliamentary review processes were underway for the exposure draft, and the Act was scheduled for a statutory review of its effectiveness, an institutional mechanism that could influence how aggressively platforms interpret the new offences while awaiting judicial and administrative guidance [6] [7].
5. Bottom line — change driven by law and political pressure, but evidence gaps remain
Australia’s 2026 law materially raised the legal stakes for online dissemination of hateful and extremist content, particularly where young people are involved, and added customs and migration levers that broaden government influence over digital ecosystems, creating clear incentives for platforms to tighten moderation and cooperate with authorities [3] [2]. The publicly available government materials and parliamentary reporting document the statutory changes and political context, but they do not catalogue how individual platforms rewrote their policies or how moderation practices changed on the ground, so assertions about the exact contours of platform policy amendments exceed what these sources support [4] [6].