What are the specific laws in England regarding online hate speech?

Checked on January 25, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

England does not rely on a single "hate speech" statute but on a patchwork of criminal offences and regulatory duties that make certain online expression illegal once it crosses defined thresholds, for example threatening, abusive or intentionally harassing communications, or stirring up hatred against protected groups, while preserving a wide scope for offensive but lawful speech [1] [2] [3]. Newer regulatory tools under the Online Safety Act 2023 require platforms to police illegal hate content proactively, even as courts, prosecutors and campaigners dispute where the line between criminality and free expression should fall [4] [5] [6].

1. Primary criminal statutes that cover online hate speech

The core criminal law used to prosecute unlawful online hateful expression in England comes from the Public Order Act 1986: Part 3 criminalises conduct intended or likely to stir up racial hatred, and Part 3A adds narrower offences of stirring up hatred on grounds of religion and sexual orientation; these are supplemented by communications offences, such as section 127 of the Communications Act 2003 and the threatening and false communications offences created by the Online Safety Act 2023, which criminalise threatening, grossly offensive or knowingly false messages [1]. Aggravated offences and sentencing uplifts exist under the Crime and Disorder Act 1998 and the Sentencing Act 2020, allowing greater penalties where hostility towards a protected characteristic motivated the underlying offence [7] [8].

2. How speech becomes criminal — thresholds and protected characteristics

Not every offensive or insulting online comment is illegal: prosecutors apply thresholds such as whether a message is threatening or abusive, is intended to harass, alarm or distress, or is intended or likely to stir up hatred against a group defined by race, religion or sexual orientation, while characteristics such as disability and transgender identity are protected mainly through aggravated offences and sentencing uplifts rather than the stirring-up offences; the precise definitions and thresholds vary across statutes and circumstances [1] [8]. Rights groups and charities emphasise that courts set a high threshold for “grossly offensive” communications in order to balance freedom of expression against harm [3].

3. Recording, prosecution and non-crime incidents

Police record online “hate material” as a hate crime only when a statutory offence has been committed with a hate motivation; material that is hateful but not criminal is often logged as a “non-crime hate incident”, which can still influence policing and community response even though it does not lead directly to prosecution [2]. The Crown Prosecution Service trains prosecutors on applying hate crime legislation and can seek sentence uplifts where hostility is established under existing law [7].

4. Platforms, regulation and the Online Safety Act 2023

Beyond criminal law, the Online Safety Act 2023 places duties on social media and search services to identify, remove or limit illegal content, including racist, antisemitic, homophobic or misogynistic abuse that amounts to a criminal offence, and to put accessible reporting and redress tools in place, shifting part of the enforcement responsibility onto private platforms subject to Ofcom oversight [4]. The Act requires in-scope services to take proportionate, proactive measures against priority illegal content and to remove other illegal content swiftly once they become aware of it, a change framed by regulators and some policing leaders as essential to curbing online harms that have spilled over into real-world violence [5] [4].

5. Tensions, inconsistencies and reform debates

Legal commentators, human-rights groups and the Law Commission stress that the UK’s approach reflects competing aims, protecting vulnerable groups while safeguarding free speech, and that this tension has produced inconsistent thresholds (for example, the stronger protection against stirring up racial hatred compared with religious hatred) as well as ongoing calls for reform to clarify the law and confine criminalisation to the most egregious cases [5] [6]. Critics warn that the Online Safety Act risks chilling lawful expression, since platforms face penalties for failing to remove illegal material but little sanction for over-removal, while the Law Commission has recommended reforms coupled with express protections for freedom of expression to avoid criminalising borderline speech [5] [6].

6. Bottom line and limits of available reporting

The practical rule in England is that online speech is lawful unless it meets statutory tests (threatening, abusive or intentionally harassing communications, or conduct that stirs up hatred towards protected groups), with police, prosecutors and now regulators shaping where prosecution or platform action is appropriate; the exact contours remain contested and in flux as new guidance, regulations and reform proposals are rolled out [1] [2] [4] [6]. Reporting reviewed for this analysis covers the primary statutes, prosecutorial practice and the Online Safety Act, but it does not exhaust case-law nuances or the full set of statutory provisions and pending regulations that will further define enforcement in practice [1] [4].

Want to dive deeper?
How does the Online Safety Act 2023 define 'illegal content' for social media platforms in practice?
What are recent landmark UK court cases that shaped the threshold for 'grossly offensive' communications?
How do England's hate speech laws compare with those in Scotland and other European countries?