What are the penalties for posting hate speech on social media in the UK?

Checked on January 13, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Posting hate speech on social media in the United Kingdom can trigger criminal prosecution, with penalties ranging from fines to imprisonment, and can also expose platforms to regulatory sanctions. The precise outcome depends on which statute is engaged, whether the communication is deemed “threatening or abusive” and intended to harass, alarm or distress, and whether the offence is aggravated by hostility toward a protected characteristic [1] [2].

1. What legally counts as hate speech online

UK law criminalises communications that stir up hatred, or that are threatening or abusive and intended to harass, alarm or distress someone, on grounds such as race, religion, sexual orientation, disability, nationality or gender. That legal threshold applies to online posts just as it does to offline speech, so social-media content can be prosecuted under existing hate-speech and public-order offences [1] [3].

2. Criminal penalties: fines, imprisonment, or both

Where an online post meets the statutory test for a hate offence, courts can impose fines, prison sentences, or both; the most serious stirring-up-hatred offences under the Public Order Act 1986 carry a maximum of seven years' imprisonment. Case law and government guidance make clear that custodial sentences have been used for serious online hate offences, with financial penalties or community orders for lesser ones, reflecting the criminal framing of many hate-speech provisions [1] [4].

3. Aggravations and sentencing uplifts increase penalties

When hostility toward a protected characteristic is evidenced, two mechanisms increase penalties: the racially or religiously aggravated offences in the Crime and Disorder Act 1998, which carry higher statutory maximums than their base offences, and sentencing uplifts under section 66 of the Sentencing Act 2020, which require courts to treat hostility as an aggravating factor within existing maximums. Either route makes materially similar offending attract a more severe sentence when it is motivated by hate [2] [5].

4. Platforms face separate regulatory penalties under the Online Safety Act 2023

Beyond individual criminal liability, the Online Safety Act 2023 places duties on social-media companies to remove illegal content and to protect children from harmful content, with the regulator, Ofcom, empowered to fine firms up to £18 million or 10% of qualifying worldwide revenue (whichever is greater) and, at the extreme, to seek court orders restricting or blocking non-compliant services; policy debates and government statements have gone as far as suggesting removal of services from the UK market [2] [6] [3].

5. How enforcement happens: police, CPS and content removal

Practical enforcement of online hate speech involves police investigation, charging decisions guided by the Crown Prosecution Service, and platform moderation. The CPS publishes prosecutorial guidance on hate crime and can pursue charges where the evidence meets the tests for criminality, while platforms routinely remove content flagged as hateful or illegal to avoid regulatory or reputational consequences [7] [8] [9].

6. Real-world illustration and moderation practice

Courts have imposed varied penalties for online hate. One reported sentencing combined a suspended prison term, an ASBO and unpaid work for religiously offensive public conduct, showing that courts can pair custodial and ancillary orders for serious incidents; transparency reports, meanwhile, show platforms frequently remove content that edges toward illegal hate, often erring on the side of removal even where legality is finely balanced [1] [9].

7. The contested trade‑off: free speech, over‑removal and unequal thresholds

The law’s scope and the new regulatory regime have sparked intense debate. Human-rights advocates warn that duties on platforms may lead to over-removal of lawful but controversial speech, while victims’ advocates and officials argue that stronger rules are needed to prevent real-world harm and violence. Commentators also note inconsistencies, such as the different legal thresholds for stirring up racial hatred (threatening, abusive or insulting words) versus religious hatred (threatening words only), that complicate enforcement and public understanding [10] [3].

8. Bottom line for someone posting on social media

Posting content that crosses into criminal hate speech exposes an individual to prosecution and potential fines or imprisonment. Even where criminality is borderline, the post can be removed and the poster sanctioned by the platform under the Online Safety Act regime. Exact penalties depend on the specific offence, any aggravating factors and prosecutorial discretion, and the broader policy environment continues to evolve [1] [2] [3].

Want to dive deeper?
What offences and maximum sentences apply to racially or religiously aggravated online communications in the UK?
How does the Online Safety Act 2023 define platforms’ duties and the fines or sanctions they face for failing to remove hate speech?
What guidance does the Crown Prosecution Service use to decide whether an online post should be charged as a hate crime?