What is the Online Safety Act 2023 and its impact on online hate speech?
Executive summary
The Online Safety Act 2023 (OSA) is a UK law that imposes duties on social media and search services to protect users from illegal content and certain harms online, and it gives Ofcom powers to enforce those duties with fines and notices [1] [2]. Its main impact on online hate speech is to make platforms legally responsible for removing illegal hate content and for taking preventive steps, while raising ongoing concerns about regulatory overreach, free-speech trade-offs, and technical and enforcement limits [2] [3].
1. What the Online Safety Act 2023 actually does: duties, scope and purpose
The Act, which received Royal Assent on 26 October 2023, creates a detailed regulatory regime that places statutory duties on user-to-user services and search engines to protect children and adults from a range of harms and to remove or mitigate illegal content, with Ofcom managing phased implementation and enforcement [1] [4]. The law is long and complex, running to 241 sections and 17 schedules, and explicitly targets illegal content including terrorism material, image-based abuse and categories of hate speech, while also addressing content harmful to children and disinformation [4] [2].
2. How the OSA treats hate speech and related offences
The Act requires platforms to remove illegal content, including hate speech that meets criminal thresholds, and to have systems in place to address the spread of such material; statutory duties cover “priority illegal content” and offences such as threatening or false communications, which can overlap with racially hateful or inciting messages [2] [5] [6]. Parliamentarians and the government maintain that the regime “safeguards free speech” in the sense that lawful but offensive content is not banned by the Act, but platforms must still act to prevent illegal hate and incitement [7].
3. Enforcement powers and technical controversies (including encryption)
Ofcom can impose substantial penalties for non-compliance and issue notices to platforms; critics point out that the Act retains provisions allowing Ofcom to require changes that could weaken end-to-end encryption, and even though ministers said those powers would not be used until doing so is “technically feasible”, this creates a latent technical and privacy concern [8] [7]. The phased rollout and the regulator’s roadmap mean many duties only became enforceable in 2024–25, so enforcement practice and case law are still evolving [1].
4. How platforms are likely to respond and the real‑world moderation effects
Because platforms face fines and liability exposure for failing to remove illegal hate content, academic and expert commentary predicts a combination of automated detection, human moderation and conservative takedown practices, raising the risk that borderline or context-sensitive speech may be removed pre-emptively [2] [3]. Evidence from other jurisdictions and prior regulatory pushes suggests companies may err on the side of removal to avoid sanctions, although empirical findings on over-removal vary by platform and context [2] [3].
5. Critiques, rights trade‑offs and what remains uncertain
Civil liberties groups, academics and some politicians warn the OSA risks chilling lawful expression, silencing marginalised voices and creating unclear boundaries for “harmful” speech, with critiques noting that the Act’s scope around non-criminal harmful content is not always precisely defined and could lead to inconsistent enforcement [8] [3] [9]. Legal scholarship flags the tension between the duty to protect free expression and the duty to prevent harm, and case examples, such as arrests linked to online false communications, show that the Act’s criminal and civil effects are already being tested in the courts and in policing practice [10] [6].
6. Bottom line: strengthened removal of illegal hate speech, constrained by legal, technical and political limits
In practice the OSA strengthens the legal obligation on platforms to tackle illegal hate speech and gives Ofcom teeth to demand compliance, which should increase removals of content that clearly meets criminal thresholds. Its broader impact, however, depends on regulator guidance, platform moderation strategies, and judicial and civil-society pushback; significant uncertainties remain about borderline cases, potential over-removal, and the implications for encryption and privacy [2] [7] [8]. Reporting and academic commentary show the Act shifts responsibility from state prosecution alone to platform governance, but the final balance between safety and free expression will be settled only through implementation, litigation and public scrutiny [4] [3].