What are the UK's laws and penalties for online hate speech?
Executive summary
The UK has no single “hate speech law”; instead, online hate speech is prosecuted through a patchwork of offences (Public Order Act provisions, Scotland’s separate statute, and offences treated as hate crimes when motivated by hostility towards a protected characteristic) and regulated via the Online Safety Act, which requires platforms to remove illegal content [1] [2] [3]. Outcomes range from the recording of “non‑crime hate incidents” to fines and imprisonment where communications meet criminal thresholds; sentencing uplifts apply when an offence is motivated by hostility towards a protected characteristic, under statutes including the Crime and Disorder Act 1998 and the Sentencing Act 2020 [4] [5] [6].
1. A mosaic of laws, not a single statute
UK law treats hate speech as a collection of offences spread across statutes and jurisdictions rather than a single consolidated law. England and Wales rely on Public Order Act offences and other criminal provisions; Scotland has passed its own Hate Crime and Public Order (Scotland) Act; and the Law Commission has recommended targeted reforms and safeguards to ensure that only “the most egregious” speech is criminalised [1] [2] [7].
2. When online content becomes a crime
Police and specialist reporting sites say online material is recorded as a hate crime only when it meets the threshold of an existing criminal offence and is hate‑motivated; material that is hate‑motivated but not criminal may instead be recorded as a “non‑crime hate incident”, a distinction that determines whether someone faces prosecution [4]. The CPS trains prosecutors to consider motivation tied to protected characteristics when deciding charges [5].
3. Protected characteristics and sentencing uplifts
Legislation and prosecutions turn on protected characteristics: race, religion, sexual orientation, disability and transgender identity, among others. When an offence is motivated by hostility towards such a characteristic, prosecutors can seek a sentencing uplift under laws including the Crime and Disorder Act 1998 and section 66 of the Sentencing Act 2020 [5] [6]. The Equality and Human Rights Commission reports ongoing government reviews and commitments to extend aggravated offences to additional characteristics [8].
4. The Online Safety Act and platform liability
The Online Safety Act 2023 requires platforms to remove content that constitutes “illegal material” under UK law and gives regulators powers to penalise platforms that fail to act; advocates and officials have cited the Act as strengthening the state response to online hate, though critics warn it risks chilling lawful expression [3] [9]. News reporting ties the Act to real‑world responses to online harm while also recording broad criticism from rights groups [3].
5. Penalties: from removal and fines to imprisonment
Available sources summarise that penalties include content removal, fines, and imprisonment, depending on the offence and its severity. Wikipedia and news sources report that threatening or abusive communications intended to cause harassment, alarm or distress can carry fines or prison sentences, and that platforms breaching the Online Safety Act can also face regulatory penalties [6] [9] [3]. Exact maximum sentences depend on the specific criminal provision charged; the sources do not give a single capped penalty figure for “hate speech” [6] [4].
6. Free speech safeguards and contested lines
Reform bodies such as the Law Commission explicitly link new criminal measures to freedom‑of‑expression safeguards so that only “the most egregious hate speech” is criminalised, reflecting a policy tension: protecting targets of hate while avoiding the criminalisation of offensive but lawful debate [7]. Human rights and civil liberties groups have argued that broad regulatory powers under the Online Safety Act risk over‑removal of content and privacy trade‑offs [3].
7. Where jurisdiction and enforcement get messy
Police guidance notes that UK courts generally have jurisdiction only where the person who posts or controls the material is in the UK; content hosted abroad may fall outside UK courts even if it is accessible here, complicating enforcement [4]. Platforms’ changing internal policies (for example on gender identity) add further practical inconsistency between legal duties and company rules; some reporting notes that platforms have altered their hate‑speech protections in recent years [10].
8. What the sources don’t settle
Available sources do not offer a comprehensive list of statutory maximum sentences for every offence used to prosecute online hate speech, nor do they provide prosecution rates or a statistical breakdown of outcomes for online cases (not found in current reporting). They also do not settle the normative debate over whether the Online Safety Act’s powers are proportionate; instead they report both official aims and civil‑society criticisms [3] [7].
Policy and practice in the UK are evolving: government reviews, Law Commission recommendations and the Online Safety Act have reshaped responsibilities for platforms and prosecutors, while police guidance and CPS training maintain the central legal principle that only communications meeting criminal thresholds (and, often, carrying a hate motivation) will lead to prosecution [7] [4] [5].