What are the specific age‑verification requirements under the UK Online Safety Act and how have other platforms complied?

Checked on January 17, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

The Online Safety Act requires online services that host pornography or other defined “primary priority” harmful content to implement “highly effective” age assurance, using age verification, age estimation, or both, to keep under‑18s from accessing restricted material, with Ofcom setting the codes and enforcement timetable (including the July 25, 2025 roll‑out) [1] [2] [3]. Platforms that fail to meet these duties face substantial sanctions, including fines of up to £18 million or 10% of global turnover, whichever is greater, while Ofcom oversees safety compliance and the ICO oversees data‑protection compliance [4] [5] [2].

1. What the law actually demands: “highly effective” age assurance, risk assessments and categorisation

The Act does not mandate one specific technical tool but imposes statutory duties: services that allow pornography or other Primary Priority Content must ensure children cannot normally encounter it, by implementing “highly effective” age assurance measures, carrying out children’s access assessments and, where required, children’s risk assessments, and applying Ofcom’s codes of practice according to service category [1] [2] [6].

2. The toolbox Ofcom expects: verification, estimation, and “double anonymity”

Ofcom’s approach is technology‑neutral: age assurance can be achieved through age verification (ID checks such as photo ID), age estimation (for example AI facial age estimation), or combinations of the two with “step‑up” checks when an initial result is uncertain. The guidance contemplates credit‑card checks, ID matching and selfie‑based estimation, and recommends privacy safeguards including minimal data retention and “double anonymity” involving third‑party providers where appropriate [1] [3] [7] [2].
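The combination‑and‑step‑up logic can be pictured as a short decision flow. The sketch below is illustrative only: the result fields, the confidence threshold and the 25‑year buffer are assumptions made for the example, not figures taken from the Act or from Ofcom’s guidance.

```python
# Illustrative sketch of a "step-up" age assurance flow: accept age estimation
# only when it is comfortably above 18, otherwise fall back to document-based
# verification. All names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EstimationResult:
    estimated_age: float   # e.g. from a facial age-estimation provider
    confidence: float      # provider-reported confidence, 0.0-1.0

def passes_age_gate(estimation: Optional[EstimationResult],
                    verified_age: Optional[int]) -> bool:
    """Return True if the user may access 18+ content."""
    # 1. Estimation alone is accepted only with a generous buffer (25 here),
    #    so borderline cases are not waved through.
    if estimation and estimation.confidence >= 0.9 and estimation.estimated_age >= 25:
        return True
    # 2. Otherwise require a step-up check: document-based verification
    #    (photo ID, credit card) typically run by a third-party provider.
    if verified_age is not None:
        return verified_age >= 18
    # 3. No successful check: block access to restricted content.
    return False
```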

3. Scope, dates and enforcement levers

Many of the Act’s age‑assurance duties came into force in mid‑2025, with Ofcom’s codes taking effect by the July 25, 2025 implementation milestone; Ofcom can impose fines or seek court‑ordered access restrictions, and has already signalled it will use both procedural and substantive powers in enforcement actions [3] [2] [1]. Legal summaries and industry guides reiterate the phased rollout and Ofcom’s central regulatory role [6] [7].

4. How platforms actually complied — a patchwork of approaches

Large platforms adopted varied responses: some built ID verification or facial age estimation into their apps, with services such as Discord describing blurred handling of ID images and combining methods when confidence is low [8]. Other platforms implemented age gates, default safety settings for verified minors, or required UK users to complete age‑assurance flows before accessing age‑restricted material [1] [8]. Some smaller sites sidestepped the duties by blocking UK traffic or applied only lighter checks, while a mix of industry vendors (Yoti, AgeChecked, commercial KYC providers) supplied the third‑party infrastructure used by many services [9] [7] [10]. A minimal sketch of these compliance postures follows.
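As a rough illustration of the two postures described above, gating UK users behind an age‑assurance flow or withdrawing by geo‑blocking UK traffic, the following sketch uses hypothetical field and function names; it is not any platform’s actual implementation.

```python
# Minimal sketch of routing for UK users: serve, geo-block, or redirect to an
# age-assurance flow before age-restricted content. Names are hypothetical.
from dataclasses import dataclass

GEO_BLOCK_UK = False  # a smaller site might simply set this to True and withdraw

@dataclass
class Request:
    country: str            # e.g. derived from IP geolocation
    age_assured: bool       # has the user completed a "highly effective" check?
    content_restricted: bool

def route(request: Request) -> str:
    if request.country != "GB":
        return "serve"                      # the duties discussed here target UK users
    if GEO_BLOCK_UK:
        return "blocked_in_uk"              # withdraw from the UK market instead
    if request.content_restricted and not request.age_assured:
        return "redirect_to_age_assurance"  # age gate before restricted material
    return "serve"
```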

5. Evidence of friction and backlash

The roll‑out produced immediate friction: prominent services, from social networks to media apps, required users to show ID or complete biometric checks, prompting user complaints and media criticism about privacy, overbreadth and unintended access limitations. Coverage cited Wikipedia’s legal fight over its category classification, and critics argued that the law’s breadth produced a Streisand effect by naming sites and driving attention to where age checks were required [11] [12] [10]. Industry observers warned about adult users’ resistance to sharing identity data and potential shifts to VPNs or non‑compliant sites [13] [11] [4].

6. Privacy trade‑offs, third‑party reliance and enforcement incentives

Because Ofcom and data‑protection rules require privacy safeguards while the law pushes for “highly effective” checks, platforms often turned to third‑party age‑assurance vendors and “double anonymity” models that separate identity proofing from the publisher’s access decision. That approach is intended to reduce the identity‑data liability publishers carry, but it increases concentration among safety‑tech suppliers and raises questions about data reuse and auditability, which fall to the ICO to oversee [4] [2] [5].
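The “double anonymity” idea can be sketched as a token exchange: the provider sees the identity evidence but issues only an anonymous over‑18 attestation, and the publisher checks the attestation without learning who the user is. The example below is a simplified stand‑in; real deployments use certified vendors and asymmetric signatures rather than the shared‑key HMAC shown here, and every name in it is hypothetical.

```python
# Simplified sketch of a double-anonymity attestation exchange (illustrative only).
import hmac, hashlib, secrets

PROVIDER_SIGNING_KEY = secrets.token_bytes(32)  # held by the age-assurance provider

def provider_issue_attestation(identity_document: dict) -> dict:
    """Provider side: check identity privately, emit an anonymous attestation."""
    is_adult = identity_document["age"] >= 18   # identity data stays with the provider
    nonce = secrets.token_hex(16)               # single-use, unlinkable token id
    payload = f"{nonce}:over18={is_adult}".encode()
    tag = hmac.new(PROVIDER_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}     # no name, DOB or document details

def publisher_accept(attestation: dict, verify_key: bytes) -> bool:
    """Publisher side: trust the attestation, learn nothing about the person."""
    expected = hmac.new(verify_key, attestation["payload"], hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, attestation["tag"])
            and attestation["payload"].endswith(b"over18=True"))
```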

7. What remains uncertain and what to watch

Assessments of effectiveness and proportionality are ongoing: Ofcom’s future Age Assurance statutory report, expected follow‑up enforcement, and litigation (for instance around category classification) will clarify whether the mix of ID, biometric estimation and third‑party architectures actually blocks under‑18 access without unacceptable privacy harms; current reporting details the rules and early compliance models but cannot yet conclusively prove long‑term outcomes [2] [1] [12].

Want to dive deeper?
How does Ofcom’s ‘double anonymity’ model work in practice and which third‑party age‑assurance vendors are certified?
What are the main privacy and data‑protection criticisms raised by civil‑liberties groups about the Online Safety Act’s age verification rules?
How have enforcement actions and fines under the Online Safety Act been applied against platforms since July 2025?