
How do the UK's Online Safety Act and other laws define prosecutable online hate speech?

Checked on November 23, 2025
Disclaimer: Factually can make mistakes. Please verify important info or breaking news.

Executive summary

The Online Safety Act (OSA) does not itself recast the criminal law on hate speech. Instead, it creates duties on platforms to prevent illegal content, including hate crimes and content that “stirs up racial hatred”, and gives Ofcom powers to enforce those platform duties [1] [2]. Existing criminal offences (for example under the Public Order Act 1986) remain the basis for prosecuting hate speech; the OSA strengthens the regulatory levers, so platforms must remove or mitigate illegal material or face fines [3] [4].

1. How the OSA fits with criminal law: regulator, not a new criminal code

The OSA primarily regulates internet services rather than creating a new standalone criminal offence of “online hate speech.” It requires in-scope platforms to take steps to protect users from illegal content and activity, and to operate procedures for handling illegal content (duties that Ofcom enforces), while the criminal law (offences in existing statutes such as the Public Order Act) continues to define prosecutable conduct [1] [3] [4].

2. What “illegal content” includes under the OSA framework

The codes and guidance drafted around the Act explicitly identify categories platforms must tackle, including content that “stirs up racial hatred” and abuse targeting people on the basis of race; Ofcom can take enforcement action against platforms that fail to comply with these duties [2]. Government letters to Ofcom have emphasised action on antisemitic content and hate speech as priorities for enforcement under the Act [5].

3. Who takes action — police vs regulator vs platforms

Prosecution for criminal hate speech remains a matter for law enforcement and the criminal courts under existing offences; the OSA’s practical effect is that platforms have affirmative duties to remove or mitigate illegal content, and that Ofcom can sanction platforms that persistently fail to do so. Parliamentary debate and government materials stress that the Act gives regulators “stronger guidelines and powers” to compel platform compliance rather than substituting for police prosecution [6] [3] [1].

4. The Act’s free‑speech safeguards and tensions

Government and legal commentary underline that the OSA includes obligations to have “particular regard” to free speech and privacy when platforms moderate content; Category 1 services will have extra duties to protect journalistic content and content of democratic importance — creating a legal balancing act between removing illegal hate and protecting lawful expression [7] [8]. Critics, including civil liberties groups cited in the reporting, argue the Act risks chilling lawful speech; supporters emphasise child protection and giving Ofcom clearer enforcement powers [8] [9].

5. Practical enforcement limits and phased implementation

Several commentators note that the OSA regime has been implemented in phases: Ofcom had to consult on its codes of practice before most duties and enforcement powers could “bite.” Consequently, regulatory action to curb online hate depends on Ofcom’s codes and enforcement priorities as much as on the statute’s text [7] [4]. Government materials and follow‑up letters indicate active engagement, but effectiveness ultimately depends on how Ofcom translates the Act’s duties into practice [5] [1].

6. Where prosecutions actually come from — pre‑existing offences

Reporting and analysis emphasise that many forms of prosecutable online hate speech (for example, racially or religiously aggravated public order offences) are prosecuted under pre‑existing criminal statutes; the OSA amplifies pressure on platforms to prevent dissemination but does not replace those criminal provisions [3] [4]. The available sources do not mention any new standalone criminal offence in the OSA that directly criminalises “online hate speech” as a category.

7. Competing viewpoints and implicit agendas

Supporters frame the OSA as closing gaps in platform accountability and protecting children and victims; critics, including civil liberties groups and some tech platforms, warn it risks over‑removal and harm to lawful speech. Each set of stakeholders has a clear institutional agenda: platforms worry about compliance burdens, civil liberties organisations about expression and privacy, and government departments about public safety and the political pressure to act on antisemitism and other forms of hate [10] [9] [11] [5].

8. Bottom line for someone asking “what is prosecutable?”

Whether online content is prosecutable depends on whether it meets the established criminal thresholds in existing law (for example incitement, threats, or racially or religiously aggravated offences); the OSA’s role is to force platforms to prevent and remediate illegal content, and to give Ofcom enforcement powers if they fail to do so. For precise definitions of prosecutable offences, you must look to the relevant criminal statutes and case law; the current reporting emphasises the OSA’s regulatory, not criminalising, function [1] [3] [4].

Limitations: this summary is based only on the supplied reporting and government materials; available sources do not provide the text of all criminal statutes or Ofcom’s final codes in full, and they do not list any new standalone criminal offence created by the OSA itself [2] [5].

Want to dive deeper?
What specific offences and penalties does the UK's Online Safety Act create for hate speech online?
How do UK hate speech laws distinguish between criminal harassment, stirring up hatred, and protected free speech?
How have UK courts applied the Public Order Act, Communications Act and Malicious Communications Act to online hate speech cases?
How does the Online Safety Act interact with social media platforms' content moderation and notice-and-action obligations?
How do the UK's legal threshold and prosecutorial practices compare to EU member states and the US regarding online hate speech?