
Fact check: What are the laws governing online hate speech in England?

Checked on October 3, 2025

Executive Summary

The legal framework for online hate speech in England combines criminal law offences and a new regulatory regime that places duties on platforms; the Online Safety Act 2023 created statutory obligations for tech companies to tackle illegal and harmful content, while legacy criminal offences under the Public Order Act and communications laws continue to be used in prosecutions [1] [2] [3]. The debate is active: critics warn of heavy-handed enforcement, pointing to high arrest figures for online posts, while government and police emphasise public safety and the need to curb racial and religious hatred and disorder [4] [5] [6].

1. New platform duties meet old criminal offences — why that matters now

England now operates a two‑track system: platform regulation under the Online Safety Act 2023 compels companies to identify, remove, and mitigate illegal content and provides for enforcement and fines, while longstanding criminal offences, such as stirring up racial or religious hatred and public order offences, remain available to prosecutors for online conduct [1] [2]. The Online Safety Act is framed as protecting both children and adults from harm, giving regulators clearer powers to require risk assessments and content removal, but it does not replace criminal law; instead, platforms must act faster on illegal content, and regulators may fine companies or require remediation where they fail to comply [1] [2].

2. Arrest figures and prosecutions — the scale and the controversy

Reporting indicates roughly 12,000 arrests a year for online posts described as grossly offensive, menacing, or causing anxiety and inconvenience, a figure that fuels concerns about free expression and proportionality [5] [4]. Critics argue that many of the statutes predate social media and were drafted for different contexts, so policing discretion can turn contentious speech into criminal cases; supporters counter that serious instances, including posts that incite violence or racial hatred, justify robust enforcement, as illustrated by a 2024 conviction under section 19 of the Public Order Act [7] that carried a custodial sentence [3] [4]. This tension between safeguarding and liberty drives the public debate [5] [3].

3. What counts as illegal hate speech under criminal law — the boundaries

Criminal offences applied to online speech include incitement to racial or religious hatred, harassment, and public order offences where speech is deemed likely to provoke violence or disorder; courts assess context, intent, and effect, so identical words can produce different legal outcomes depending on the circumstances [2] [3]. The Public Order Act and related statutes criminalise stirring up hatred and using threatening, abusive, or insulting words intended to cause harassment, alarm, or distress, with specific provisions applied in prosecutions during episodes of public disorder where online posts amplify harm [3] [2]. The legal thresholds remain contested and continue to be shaped by case law.

4. Platform responsibility under the Online Safety Act — what companies must do

The Online Safety Act imposes systemic duties on social media and other online services to assess risks, implement content moderation systems, and remove illegal content promptly; the regulator can issue codes of practice and impose fines for non‑compliance [1] [2]. The law aims to make platforms accountable for both illegal hate speech and harmful but lawful content by mandating proportionate mitigation and transparency, yet enforcement details and operational impacts remain under development, and campaigners note that the law’s effectiveness depends heavily on regulator capacity and platform compliance [1] [2] [6].

5. “Lawful but awful” content and algorithmic harms — the policing gap

Police and counter‑terror officials have highlighted how algorithms can amplify hateful or harmful content even when it falls short of illegality, creating public harm without clear criminal remedies; this has prompted calls for stronger regulatory action and algorithmic oversight to address amplification [8] [1]. The Online Safety Act targets platform practices, but critics warn of enforcement limits and the persistence of high‑engagement content that is lawful yet socially damaging; this tension underscores why both statutory policing and platform governance are being pushed simultaneously [8] [1].

6. Civil liberties concerns and political narratives shaping the debate

Civil liberties groups warn that expansive application of historic offences and vigorous platform duties risk chilling legitimate debate, citing high arrest numbers for posts categorised as offensive or menacing [4] [5]. Public figures and commentators amplify these concerns for varied agendas: some emphasise the erosion of free speech, drawing analogies to “Orwellian” censorship, while others stress the need to protect vulnerable communities from hate and disorder; these narratives shape media coverage and political pressure around enforcement and regulatory detail [9] [4] [6].

7. Where gaps remain and what to watch next

Implementation of the Online Safety Act, uptake of regulator codes, and judicial interpretation of existing criminal offences will determine practical outcomes; key gaps remain in regulator resourcing, transparency of moderation decisions, and consistent prosecutorial guidelines, and by some assessments the Law Commission’s further recommendations have been only partly implemented [6] [1]. Watch for published regulator guidance, high‑profile prosecutions that test legal thresholds, and data on removals and arrests; these will reveal whether the system reduces harmful speech without unduly constraining lawful expression [6] [5].

Want to dive deeper?
What is the definition of hate speech under English law?
How does the UK's Online Safety Act 2023 impact hate speech laws in England?
What are the penalties for online hate speech convictions in England?
How do English courts determine what constitutes online hate speech?
What role does the UK's Crown Prosecution Service play in prosecuting online hate speech cases in England?