What are the key UK laws governing online hate speech?

Checked on November 10, 2025

Executive Summary

The key UK laws governing online hate speech combine criminal offences, which target the stirring up of hatred and threatening or abusive communications, with a newer regulatory regime that places duties on online platforms; the Public Order Act 1986, the Racial and Religious Hatred Act 2006, the Hate Crime and Public Order (Scotland) Act 2021 and the Online Safety Act 2023 are the central pieces of the framework. Enforcement is shared between criminal prosecutors, police units and a regulator (Ofcom) empowered to compel platform action and levy substantial fines; in practice, high legal thresholds, implementation gaps and debates about free speech and proportionality complicate how these laws are applied [1] [2] [3].

1. What advocates and prosecutors say the law actually covers — clear criminal routes to punishment

UK criminal law targets expression that constitutes a criminal offence: incitement or stirring up of hatred on the basis of protected characteristics, racially aggravated offences, and communications that are threatening, abusive or grossly offensive when sent over a public electronic communications network. Prosecutors rely on the Public Order Act 1986 and the Racial and Religious Hatred Act 2006 in England and Wales, while Scotland operates under its own Hate Crime and Public Order (Scotland) Act 2021; these laws allow charges where speech meets statutory thresholds such as intent or likelihood to stir up hatred, or where messages are threatening or abusive [1] [4]. The Director of Public Prosecutions provides prosecutorial guidance, and police units and specialist hubs investigate online reports, reflecting a criminal-law-first approach to the most serious online hate [5].

2. A regulatory revolution — the Online Safety Act moves platform duty of care centre-stage

The Online Safety Act 2023 places a statutory duty of care on online platforms to identify, mitigate and remove illegal content and to protect children from material that is legal but harmful to them, with user-empowerment duties on the largest services giving adults more control over the content they encounter; Ofcom gains powers to issue enforcement notices and to impose fines of up to £18 million or 10% of a company's qualifying worldwide revenue, whichever is greater, shifting significant operational responsibility onto companies to moderate content proactively [3] [6]. Government and academic observers emphasise that success depends on enforcement choices and on how “illegal” and “harmful” categories are defined in guidance and secondary rules, creating a system in which platform practices and Ofcom's discretion heavily shape outcomes [7] [3].

3. How enforcement and reporting work in practice — police, prosecutors, platform reporting and the new hub model

Operational responses now range from user reports to platform administrators through referrals to police and specialist units such as the National Online Hate Crime Hub; platforms must provide complaint-handling, transparency and appeals mechanisms under the Online Safety Act, while the criminal law retains authority to prosecute the most severe offences [5] [8]. In practice this interplay means many incidents are dealt with administratively by platforms and only a subset reach criminal investigation; the Crown Prosecution Service then applies its evidential and public-interest thresholds when deciding whether to charge, reflecting the dual-track nature of enforcement between private moderation and public prosecution [5] [4].

4. Where law and practice clash — grey areas, human rights tensions and reform debates

Legal commentators and stakeholders note fragmentation and uncertainty: long-standing criminal statutes require intent or a high evidential threshold in many cases, while the Online Safety Act imposes broader duties that can touch on “legal but harmful” speech, producing tension between freedom of expression under the Human Rights Act and protective regulation [2] [7]. The Law Commission’s 2021 recommendations for consolidated hate crime definitions remain largely unimplemented, and critics warn that inconsistent guidance, prosecutorial caution and platform enforcement discretion create uneven outcomes and potential over- or under-enforcement depending on how Ofcom and companies operationalise duties [8] [2].

5. What this means going forward — balance, enforcement choices and pressure points

The combined framework means that the UK addresses online hate through criminal law for the most serious acts and through regulatory obligations for platform governance, making Ofcom, major platforms, police and prosecutors the decisive actors in outcomes [3] [5]. Key pressure points to watch are the secondary regulations and Codes of Practice that define illegal content and platform duties, Ofcom's enforcement approach, and whether the government accepts the Law Commission's reform proposals; each will materially affect thresholds for removal, appeal rights, and the balance between safety and free expression in the coming years [7] [8] [2].

Want to dive deeper?
What is the Online Safety Act 2023 and its impact on online hate speech?
How does the Communications Act 2003 regulate online hate speech in the UK?
What are the penalties for posting hate speech on social media in the UK?
How do UK hate speech laws differ from those in the US?
What are some recent examples of prosecutions under UK online hate speech laws?