Reject chat control
Executive summary
EU member states moved this month toward a revised Child Sexual Abuse Regulation, widely nicknamed “Chat Control”, that removes a previous draft requirement for mandatory scanning of end-to-end encrypted messages but keeps a framework allowing voluntary automated scanning, age verification and new “risk mitigation” duties that privacy experts call dangerous, as reflected in the November 26 Council position and accompanying expert warnings [1] [2]. Advocates and technologists warn the compromise still enables client-side or AI-driven scanning and could become effectively mandatory over time; defenders and some child-safety groups say the deal strikes a balance, tackling online child abuse without forcing providers to break encryption universally [3] [4].
1. What changed: mandatory scanning removed, but the problem isn’t solved
On November 26, EU ambassadors approved a Council negotiating mandate that, according to multiple reports, dropped the explicit requirement to force apps like Signal and WhatsApp to scan encrypted messages, a concession credited with unblocking talks and allowing trilogue negotiations to begin [5] [3]. Many privacy advocates and technical experts call this a partial victory: the clause making device‑side or cloud scanning “mandatory” was removed, yet the Council text still allows voluntary scanning programs and introduces obligations framed as “risk mitigation” and age verification that critics say can be turned into de facto requirements [1] [6].
2. How the technology debate frames the stakes
Opponents point to client-side scanning (CSS) and automated AI detection as the technical mechanisms at issue: tools that would inspect users’ communications on the device, before a message is encrypted or after it is decrypted. Leading security researchers and commentators warn that CSS undermines end-to-end encryption, creates large attack surfaces and produces many false positives that then require human review, with serious privacy and societal costs [4] [2]. Supporters of the Council compromise argue the text avoids forcing providers to break encryption while giving law enforcement new tools to tackle child sexual abuse online [3].
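To illustrate why false positives dominate the critics’ argument, the following back-of-the-envelope sketch shows the base-rate problem with scanning at EU scale. The message volume, prevalence and error rates below are illustrative assumptions chosen for the example, not figures drawn from the Council text or the cited experts:

```python
# Illustrative base-rate arithmetic for automated message scanning.
# All numbers are hypothetical assumptions for this example, not
# estimates from the Council position or the cited researchers.

messages_per_day = 10_000_000_000   # assumed daily messages scanned EU-wide
prevalence = 1e-6                   # assumed fraction of messages that are actually abusive
true_positive_rate = 0.99           # assumed detector sensitivity
false_positive_rate = 0.001         # assumed 0.1% of innocent messages wrongly flagged

abusive = messages_per_day * prevalence
innocent = messages_per_day - abusive

caught = abusive * true_positive_rate
false_alarms = innocent * false_positive_rate

print(f"Correctly flagged abusive messages per day: {caught:,.0f}")
print(f"Innocent messages wrongly flagged per day:  {false_alarms:,.0f}")
print(f"Share of flags that are false alarms:       {false_alarms / (caught + false_alarms):.1%}")
```

Under these assumed numbers, roughly ten million innocent messages would be flagged every day and about 99.9% of all flags would be false alarms, each in principle needing human review; this is the mechanism critics say turns automated detection into mass monitoring in practice.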
3. Expert alarm: “high risks without clear benefits”
A coalition of cybersecurity and privacy academics has publicly warned that the revised proposal still “brings high risks to society without clear benefits for children,” highlighting the expanded scope for automated text analysis of ambiguous “grooming” behaviours and the potential for false positives to cascade into broader surveillance and harm [2]. Independent commentators at research institutes such as the Max Planck Institute have explained why voluntary CSS and age-verification measures still threaten confidentiality for journalists, lawyers, activists and vulnerable communities [4] [7].
4. Political dynamics and the path ahead
The Council’s November decision follows earlier political defeats for mandatory scanning (Germany and others previously blocked a Council majority) and a sustained public campaign involving industry, digital-rights groups and some Member States [8] [9]. The Council position now moves to trilogue talks with the European Parliament and Commission; Parliament’s earlier position was to exclude encrypted communications from scanning entirely, setting up a likely clash in negotiations [9] [10].
5. Claims of outright rejection of “Chat Control” are premature
Several outlets and advocacy blogs have framed recent steps as a wholesale rejection of Chat Control; others call it a victory for privacy because mandatory scanning was dropped [11] [1]. Available sources show the reality is mixed: the explicit mandatory scanning clause is gone from the Council text, but the regulation retains provisions that critics say permit voluntary scanning, AI detection and recurring reassessment that could reintroduce scanning pressure later [1] [6].
6. Two competing perspectives to report to readers
Digital-rights groups and many technologists say the Council text still institutionalises surveillance techniques and sets a precedent other governments could copy; they frame the Council’s language on “risk mitigation” and voluntary scanning as a backdoor to mass monitoring [7] [6]. Child-protection advocates and some Council negotiators present the compromise as pragmatic: it removes forced decryption while creating legal tools to detect child sexual abuse material (CSAM), signalling a willingness to protect children online without dismantling encryption [3] [9].
7. What to watch next
Trilogue talks between the Council, Parliament and Commission will determine the final shape of the regulation; watch for whether Parliament insists on excluding encrypted communications from scanning and whether Council language on voluntary scanning, age verification and three‑year reassessments survives or hardens into stronger obligations [9] [1]. Also monitor expert letters and civil society coordination: academics and privacy groups have already mobilised warnings that could influence lawmakers before any final vote [2] [4].
Limitations and sources: this analysis relies on recent reporting, Council summaries and expert commentary provided in the supplied documents; it does not include material beyond the listed search results [6] [2] [1] [3] [4] [5] [9] [7].