How do different countries' standards vary for acting on automated CSAM detection by platforms?
Executive summary
Countries vary sharply on whether platforms must actively scan user communications for child sexual abuse material (CSAM). The European Union moved from proposals that could have mandated universal scanning and client-side interception toward a Council position that drops compulsory message scanning and rules out any obligation to break end-to-end encryption, leaving risk assessments, voluntary detection and judicial orders as the main levers [1] [2] [3].
1. Policy fault-lines: mandatory scanning vs. voluntary detection
Since 2022 the European Commission has pushed a framework that could require automated detection by services, but sustained political and civil-society pushback forced a retreat: the Council position reached by November 2025 removes the blanket obligation to scan communications and instead permits, but does not compel, providers to carry out scanning and mitigation measures [4] [1] [2]. Industry and some EU bodies still argue that voluntary detection is inadequate; that argument underpins earlier Commission text and the 2021 interim derogation that allowed voluntary scanning under strict safeguards [5] [6] [4].
2. Encryption is the political and technical flashpoint
The most controversial element of the proposals has been any measure that would bypass or undermine end-to-end encryption. Early drafts sought client-side detection that critics dubbed “chat control” because it could require scanning inside encrypted messengers; later Council language explicitly excludes forcing providers to break encryption while leaving other obligations and orders on the table [4] [1] [3].
3. Legal architecture: risk assessments, orders, and a European Centre
The EU’s draft CSAM Regulation envisaged a horizontal regime: mandatory risk assessments for hosting and interpersonal communications services, mitigation obligations, a European Centre to coordinate indicator databases and accept reports, and judicially issued detection/removal/blocking orders that could be tailored to providers [3] [4]. The final balance between voluntary industry practice and state-enforceable orders remains to be negotiated between Parliament and Council; the Council’s removal of mandatory scanning does not remove the broader powers to require mitigation measures or issue orders [2] [3].
4. Tools platforms actually use — and their limits
In practice platforms mostly rely on hash-based techniques to match known CSAM (exact-match cryptographic hashes such as MD5, and perceptual hashes such as PhotoDNA and PDQ for images or CSAI Match for video) together with layered detection workflows that include human review; these tools detect previously identified content very effectively but cannot find novel material or subtle grooming without additional behavioural or automated analysis [7] [8] [9]. Reports to hotlines and law enforcement remain the operational end-point for detected matches [7] [3].
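To make the hash-matching step concrete, the sketch below shows the simplest variant: computing an exact-match cryptographic digest of an uploaded file and checking it against a set of known indicator hashes. The hash list, file paths and threshold-free exact matching are illustrative assumptions, not any platform's actual pipeline; production systems typically add perceptual hashes (PhotoDNA, PDQ), which compare images by similarity distance rather than byte equality, and route matches to human review before any report is filed.

```python
import hashlib
from pathlib import Path

# Hypothetical indicator list: hex digests of previously identified files.
# In production this would come from a vetted database maintained by a
# hotline or coordinating body, not a hard-coded set.
KNOWN_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder entry
}


def md5_digest(path: Path) -> str:
    """Return the MD5 hex digest of a file, read in chunks to bound memory use."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def check_upload(path: Path) -> bool:
    """Exact-match check: True means the file matches a known indicator
    and should be queued for human review, not automatically reported."""
    return md5_digest(path) in KNOWN_HASHES
```

The limitation is visible in the code itself: an exact cryptographic hash only catches byte-identical copies, so a single re-encode or crop defeats it. That is why deployed systems layer perceptual hashing and human review on top, and why the sources stress that novel material and grooming require additional analysis beyond hash matching.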
5. Cross-national differences beyond the EU: reporting duties and legislation
Global reviews show large variation in domestic law: some states require ISPs to report suspected CSAM to authorities, while others leave much of the work to voluntary industry practice; international surveys and model‑law efforts document these divergences and show that many countries have enacted anti‑CSAM criminal law while the operational requirements for platforms differ widely [10] [11]. The available sources do not provide a complete country-by-country breakdown, and national approaches outside the EU are not enumerated here (“not found in current reporting”).
6. Public attitudes and institutional trust shape national choices
Scholarly work on Europeans’ privacy calculus finds major cross‑country variation driven by institutional trust, historical experiences with surveillance, and demographic differences: younger people tend to prioritise privacy more, and Central and Eastern European countries show higher privacy concerns that complicate uniform policy design [12]. Those findings help explain why a one-size-fits-all legal mandate is politically fraught even inside the EU.
7. Competing narratives and stakeholders’ agendas
Proponents — including parts of the Commission and child‑protection groups — frame stronger rules as necessary because voluntary measures allegedly leave gaps. Opponents — privacy advocates, civil‑society groups and some governments — warn that mandatory scanning or client‑side measures risk mass surveillance and weaken encryption [5] [2]. Industry voices emphasise existing voluntary tools and technical limits, while hotlines and enforcement bodies push for better coordination and legal clarity [7] [3].
8. What to watch next
Negotiations between the European Parliament and the Council will determine the final scope: whether risk‑mitigation language is interpreted as a de facto obligation to scan; how the European Centre will operate; and whether judicial detection orders are narrowly targeted or broadly applicable [3] [2]. Press coverage indicates the Council has struck a compromise removing mandatory scanning for now, but the final law could still impose strong mitigation duties and create powerful supervisory mechanisms [2] [1].
Limitations: this analysis is based solely on the supplied sources and focuses on EU developments and broad international reporting; a detailed, country‑by‑country map outside the EU is not present in these documents (“not found in current reporting”).