What exactly does the EU Chat Control proposal require platforms to scan and for how long must data be retained?
Executive summary
The current EU “Chat Control” debate centers on a proposed regulation to prevent and combat child sexual abuse that would require providers to detect and act on CSAM and grooming-related material across many services; earlier drafts included mandatory scanning of private and encrypted messages, but the Council compromise removed the explicit requirement to scan end-to-end encrypted content [1] [2]. The proposal would still require providers to detect illegal images, URLs and text, to report and remove CSAM, and to adopt stronger mitigation measures; critics warn that this effectively incentivises broad scanning and record-keeping despite the Council text softening the language on mandatory interception [1] [2] [3].
1. What the proposal tells platforms to scan — the explicit lists
The presidency compromise and commentaries describe a duty for a wide set of online actors (hosting services, interpersonal communications services, app stores, ISPs and search engines) to detect illegal content, including images, URLs and text linked to child sexual abuse, and to report and take down such content when found [1]. The Commission’s original framing let law enforcement request detection under so-called detection orders, a mechanism aimed at detecting CSAM and grooming across private communications and other services [2] [1]. Parliamentary questions and civil-society sites summarise the functional aim as automated detection of images, text and links indicative of CSAM and grooming [4] [5].
2. Encryption and “mandatory scanning”: what changed and what remains contested
Early, highly contested versions sought mandatory scanning that would reach encrypted messages, including proposals for client-side scanning before encryption, an approach critics called encryption-breaking [1] [6]. Multiple sources report that the Council position has since dropped the explicit requirement to scan end-to-end encrypted communications, shifting instead toward obligations for providers to adopt “appropriate mitigation measures”, which critics say will create powerful incentives to scan anyway [2] [7] [3]. Civil-liberties groups and tech actors argue that removing explicit mandatory scanning does not eliminate pressure on providers to implement scanning if regulators demand risk mitigation [2] [3].
3. How the proposal would operate technically — automated tools, client‑side scanning debate
The proposal foresees the use of automated content-analysis tools to detect CSAM in images, text and URLs; early drafts contemplated client-side scanning (analysis on a user’s device before encryption) as a way to reach encrypted traffic, a point that alarmed privacy advocates who warned of false positives and structural surveillance risks (see the illustrative sketch below) [1] [8]. The European Parliament and NGOs have repeatedly highlighted the high false-positive rates of automated tools and raised proportionality and fundamental-rights concerns [8] [4].
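To make the client-side scanning concept concrete, the sketch below shows the simplest possible version: an outgoing attachment is fingerprinted on the user's device and checked against a list of known-illegal hashes before end-to-end encryption is applied. It is a minimal illustration, not a description of what the proposal mandates; every name in it (the hash set, report_to_authority, encrypt_and_send) is hypothetical, and real deployments would rely on perceptual hashing and machine-learning classifiers rather than exact hashes, which a single re-encoded pixel defeats.

```python
import hashlib

# Illustrative only: the draft texts do not prescribe this design.
# A designated authority would supply the device with fingerprints of
# known illegal material; here the set is simply left empty.
KNOWN_CSAM_HASHES: set[str] = set()


def report_to_authority(fingerprint: str) -> None:
    """Stand-in for the reporting step the draft regulation envisages."""
    print(f"match reported: {fingerprint}")


def send_attachment(data: bytes, encrypt_and_send) -> bool:
    """Scan on the device *before* end-to-end encryption is applied."""
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint in KNOWN_CSAM_HASHES:
        # In the contested drafts a match would be reported rather than delivered.
        report_to_authority(fingerprint)
        return False
    encrypt_and_send(data)  # only now does E2E encryption take place
    return True

# Hypothetical usage: send_attachment(photo_bytes, messenger.encrypt_and_send)
```

The false-positive worry is largely a base-rate effect: even a detector that wrongly flags only 0.1% of benign content, applied to billions of messages a day, would generate on the order of millions of erroneous flags daily (illustrative figures chosen for arithmetic clarity, not drawn from the cited sources).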
4. Data retention and record‑keeping: what sources say — gaps and warnings
Sources emphasise that the regulation would increase reporting and removal obligations and create stronger record-keeping incentives, but none of the public summaries and commentaries cited here specifies a precise, uniform retention period for scanned data. Critics say the measure promotes “record-keeping” and could produce a large increase in scanning reports (a roughly 3.5-fold rise is cited), yet the available sources do not give a single definitive retention duration across the text [9] [10]. The Council text reportedly focuses on mitigation duties rather than prescribing a fixed retention length [2] [3]. Therefore: not found in current reporting; there is no clear, universally cited retention period in these sources.
5. Stakes, numbers and real‑world effects claimed by proponents and critics
Proponents frame the regulation as closing a gap that has allowed online child sexual abuse to persist and seek broader detection, reporting and takedown powers [11]. Critics, including digital-rights NGOs, parliamentarians tabling questions and some technical commentators, warn of mass surveillance, erosion of end-to-end encryption, a surge of false positives, and legal ambiguity that could force companies into invasive scanning or out of EU markets [6] [4] [1]. One critic claims the Commission expected a 354% increase in scanning reports under a mandatory regime [10].
6. Where the uncertainty matters: legal text vs political compromise
Multiple sources show the proposal has evolved through successive presidencies and political compromise: Parliament in 2023 narrowed detection orders and protected encryption, and the Council later removed explicit mandatory scanning while strengthening mitigation duties, a shift that creates practical ambiguity about whether scanning will become de facto required [12] [2]. The points to watch are the final co-legislators’ compromise, any delegated acts that define “mitigation measures”, and whether future guidance or enforcement will effectively require client-side or server-side scanning [2] [3].
Conclusion: the explicit obligations in the text focus on detecting and acting on CSAM (images, URLs, text) across many online services [1]. Whether that will legally or practically translate into mandatory, system-wide scanning of encrypted private messages, and what exact data retention periods would apply, remains contested in public reporting; the most authoritative summaries here show the Council backed away from an explicit encryption-scanning mandate, while critics say the compromise still incentivises wide-scale scanning and record-keeping [2] [3].