Do you think chat control 2.0 will have retroactive scanning or is it too risky?

Checked on December 17, 2025

Executive summary

Available reporting shows the EU's latest "Chat Control 2.0" effort dropped an explicit duty to break end-to-end encryption and mandatory on-device scanning, but it preserves mechanisms that critics say could encourage voluntary or indirect scanning and broad age-verification measures [4] [2]. Whether retroactive scanning will be permitted is not directly stated in these sources; reporting focuses on a political compromise reached in late November 2025 and on concerns that the new text merely repackages earlier measures [1] [2].

1. A comeback, repackaged: how negotiators rewrote the script

Multiple outlets describe Chat Control 2.0 as a revised attempt to achieve earlier goals after prior defeats: the Danish Council presidency reworked the text in late 2025, removing the most overt encryption-breaking mandate while keeping provisions that critics argue revive the substance in different language [3] [2]. Observers note the Council reached a political agreement on a "softer compromise" on 26 November 2025, shifting the debate from an explicit scanning obligation toward other duties and incentives for providers [1].

2. What the new text removed — and what remains

Privacy advocates and some civil-liberties groups hail the removal of a forced requirement to scan encrypted messages as a win; the Electronic Frontier Foundation reports the most controversial forced-scanning element is "out" of the Council's position [4]. But critics, including campaigners and watchdogs cited by multiple outlets, warn the draft still encourages voluntary mass scanning, embeds risk-mitigation duties that can be read as de facto pressure to scan, and adds age-verification and moderation obligations that widen state or corporate scrutiny of private content [2] [5] [1].

3. Retroactive scanning: not spelled out in current reporting

None of the provided sources explicitly describes a retroactive-scanning clause, i.e., a provision requiring providers to scan previously stored messages or backups after the law's adoption. The coverage concentrates on whether scanning will be mandatory going forward, whether encryption must be bypassed, and whether "voluntary" scanning becomes normalized [4] [2] [1].

4. Why retroactive scanning would be legally and politically fraught

Sources emphasise legal, technical and political pushback against generalised scanning. The European Parliament and data-protection bodies have questioned whether mass scanning complies with fundamental rights, and civil society has warned of chilling effects and misuse; these debates explain why retroactive scanning would be highly controversial even if it were formally proposed [6] [1] [7]. The EDPB/EDPS argument is that broad scanning risks creating "de facto generalized and indiscriminate scanning" [8].

5. The practical pathway for retroactive measures — how they could appear

While outright retroactive scanning is not described in the reporting, several sources show how softer mechanisms could produce similar results. Obligations to take "all appropriate risk-mitigation measures", norms of "voluntary" scanning, or mandatory age-verification and upload-moderation regimes could push providers to extend scanning tools to stored content for operational or liability reasons, effectively making retrospective checks more likely [8] [2] [1].

6. Competing narratives: safety vs. security of systems

Supporters argue that stronger detection reduces the circulation of child sexual abuse material and increases reporting, and some industry and Member State actors pushed for tougher measures during Council work [9]. Privacy advocates, technologists and civil-society groups counter that client-side scanning or retroactive checks undermine encryption, weaken digital security, and set precedents for misuse, a debate reflected across the sources [7] [10] [11].
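For context on the technical objection, client-side scanning generally means content is checked on the sender's device before end-to-end encryption is applied. The minimal Python sketch below is illustrative only and is not drawn from the draft text or any cited source: the digest list, function names, and exact-hash matching are assumptions (real proposals typically involve perceptual hashing or classifiers), but it shows why such checks operate on plaintext and therefore sit outside the confidentiality that encryption normally provides.

```python
import hashlib

# Hypothetical digest list of known prohibited content (placeholder entry).
# Real systems would match perceptual hashes or classifier outputs, not
# exact cryptographic digests; this is a conceptual simplification.
KNOWN_DIGESTS = {
    hashlib.sha256(b"example of known prohibited content").hexdigest(),
}

def client_side_check(plaintext: bytes) -> bool:
    """Return True if the plaintext matches the hypothetical digest list."""
    return hashlib.sha256(plaintext).hexdigest() in KNOWN_DIGESTS

def encrypt_end_to_end(plaintext: bytes) -> bytes:
    """Stand-in for real end-to-end encryption (NOT real cryptography)."""
    return bytes(b ^ 0x5A for b in plaintext)

def send_message(plaintext: bytes) -> str:
    # The check runs on the sender's device, on the plaintext, before any
    # encryption is applied; the encrypted channel never covers the content
    # that was inspected.
    if client_side_check(plaintext):
        return "flagged on-device before encryption"
    ciphertext = encrypt_end_to_end(plaintext)
    return f"sent {len(ciphertext)} encrypted bytes"

print(send_message(b"an ordinary private message"))
print(send_message(b"example of known prohibited content"))
```

The ordering is the point critics highlight: because inspection happens before encryption, the scheme does not "break" the cipher itself, yet the content is still examined while unencrypted.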

7. What to watch next (and why it matters)

The next stages (Council adoption, trilogue negotiations with the Parliament and the Commission, and further amendments) will determine whether language that permits pressure on providers survives or is tightened [7] [4]. If the final text preserves vague "risk mitigation" duties or normalises voluntary scanning, providers may choose technical or policy paths, including scanning stored content, to limit liability or comply with age checks; if Parliament presses for targeted, judicially supervised measures, the risk of retroactive or indiscriminate scanning may be reduced [9] [6].

Limitations: reporting in these sources describes political positions, objections and draft language, not final legal clauses about retroactivity; available sources do not specify an actual retroactive-scanning mandate.

Want to dive deeper?
What is Chat Control 2.0 and which governments are proposing it?
Would retroactive scanning of private messages violate existing privacy laws like the GDPR?
What technical methods could enable retroactive scanning and their false positive rates?
What legal and political risks could force policymakers to avoid retroactive scanning?
How have courts and civil society groups responded to previous chat-control proposals?