What UK offences are classified as social media-related crimes?
Executive summary
UK authorities and prosecutors now routinely treat a wide range of online communications as potential criminal offences: custody and arrest data cited by multiple outlets show roughly 10,000–12,000 arrests a year for messages or posts between 2021 and 2023, and reporting in 2025 describes police making “more than 30 arrests a day” for offensive online messages [1] [2]. Available sources list the laws and categories most often used (malicious communications, harassment, racially or religiously aggravated harassment, hate crimes, threats, terrorism content and child sexual abuse material) while also showing debate about context, prosecutorial guidance and the Online Safety Act’s platform duties [3] [4] [5].
1. What counts as a “social media-related” offence: the statutory palette
UK reporting and guidance frame most prosecuted online acts as existing offences applied to social media, rather than as wholly new crimes. The categories most frequently cited are malicious communications (Malicious Communications Act 1988) and harassment (Protection from Harassment Act 1997), including their racially or religiously aggravated variants; communications that are “grossly offensive, indecent or menacing” (section 127 of the Communications Act 2003); threats and incitement; hate crimes aggravated by hostility to protected characteristics; terrorism-related communications; and illegal sexual content such as child sexual abuse material [3] [4] [5]. Crown Prosecution Service (CPS) guidance and criminal-law practice notes explicitly map traditional offences onto behaviour on platforms: false or offensive profiles, online VAWG (violence against women and girls) offences and hate-crime aggravations are all covered in those guidelines [4].
2. How often these laws are used: arrest figures and the debate over scale
Multiple outlets and parliamentary notices cite data suggesting roughly 12,000 arrests annually for online speech-related offences in recent years, and commentary in 2025 described police making “more than 30 arrests a day” for offensive online messages; the figures appear both in summaries of The Times’s reporting and in parliamentary debates [1] [2]. FOI data from one force, West Yorkshire, show tens of thousands of arrests recorded against offence codes such as harassment, racially aggravated harassment and malicious communications over the period examined, though only a subset could be explicitly linked to social-media platforms by keyword without manual review [3]. These numbers are driving political and civil-liberties disputes over policing priorities and the chilling effect on free speech [1] [2].
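Those two headline figures are arithmetically consistent. As a rough check (our calculation, not a number reported in the sources):

12,000 arrests per year ÷ 365 days ≈ 33 arrests per day

which sits just above the “more than 30 arrests a day” formulation quoted above.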
3. Two competing framings: public protection vs. free‑speech concern
Lawmakers and victims’ advocates argue that policing online communications protects people, including children, from grooming, threats, terrorism content and other harm, and regulators have pressed platforms to remove child sexual abuse material, terrorism content and content encouraging suicide or fraud under the Online Safety Act [5] [6]. Critics, among them civil-liberties groups, some parliamentarians and commentators, contend that vaguely drawn offences criminalise “speech crimes” involving no violence and no identifiable victim, producing high arrest rates and possible overreach; parliamentary debates and opinion columns invoke “free-speech emergency” rhetoric and call for review [7] [2] [1].
4. Prosecutorial and policing limits: guidance that narrows prosecutions
The CPS and the Director of Public Prosecutions have issued guidance seeking to limit prosecutions for offensive communications to “extreme circumstances”, and newer CPS guidelines set out factors that weigh against prosecution in some social-media contexts while also detailing how existing offences apply online [8] [4]. The gap between recorded arrests and the falling conviction and sentencing numbers noted in Ministry of Justice statistics shows that many investigations never proceed to conviction, and that charging decisions incorporate context and public-interest tests [8].
5. Platform regulation vs. individual liability: two enforcement tracks
Since March 2025, Ofcom has held stronger powers under the Online Safety Act to require platforms to remove illegal categories of content (child sexual abuse material, terrorism content, hate crimes, content encouraging suicide, fraud), creating parallel obligations on companies to moderate at scale while policing and prosecution continue to target individual communicators [5]. This bifurcation means a single post can trigger a platform takedown, civil or regulatory action, a criminal investigation, or several of these at once, depending on the content and the law invoked [5].
6. What reporting doesn’t settle: context, definitions and outcomes
Available sources document arrest counts and the statutory buckets used, but they also stress important gaps: publicised arrest totals often mix different offence codes, include private messages as well as public posts, and may count suspected offences that never lead to charge or conviction, with little standardisation across forces [3] [1] [9]. Detailed breakdowns showing how many arrests related to a specific platform, which precise offence was invoked, or how many resulted in conviction are not fully presented in the sources provided and are therefore “not found in current reporting” here [3] [1].
7. Bottom line for readers: concrete offences to watch for
If you post or comment in the UK, the offences most commonly used in social-media cases, per CPS guidance and policing data, are malicious communications; harassment, including its racially or religiously aggravated forms; communications that are grossly offensive, indecent or menacing; threats and incitement; hate crimes; terrorism-related messaging; and distribution of illegal sexual content. Platform enforcement under the Online Safety Act also targets many of these illegal categories [4] [5] [3]. Debate continues over scope and proportionality, with prosecutors urging restraint in borderline “offensive” cases and civil-liberties voices warning of chilling effects [8] [2].