If Facebook doesn't find anything in non-encrypted chats for 15 years, should I worry about Chat Control 2.0?

Checked on December 17, 2025


Executive summary

If Facebook (Meta) reported far fewer suspected child-exploitation cases after encrypting chats (a drop of 6.9 million reports cited in press coverage), the EU's proposed “Chat Control 2.0” aims to expand detection beyond voluntary scans and could materially change how platforms handle private communications [1] [2]. EU member states reached a compromise in late November 2025 that keeps the policy debate live and could lead to mandatory or differently scoped scanning if the Council vote and subsequent trilogue negotiations proceed [3] [4].

1. Why you’re worried: real decreases in platform reports after encryption

Public reporting shows that Meta's shift to stronger encryption correlated with a big decline in flagged incidents: after Meta began encrypting some messages in 2023, it flagged 6.9 million fewer suspected child-exploitation cases to U.S. authorities than the year before, though it remained a top reporter [1]. That concrete change (fewer automated, server-side matches in unencrypted traffic) is the technical and political starting point for arguments that encryption can limit the detection of abuse.
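To make the mechanism concrete: the server-side detection that produced those reports typically compares message content against fingerprints of known material, which only works while the server can read plaintext. Below is a minimal, purely illustrative sketch; the fingerprint list, payloads, and toy XOR “cipher” are invented for the example and stand in for real hash-matching and real end-to-end encryption.

```python
import hashlib

# Hypothetical fingerprint list of known abusive content (illustrative only).
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-abusive-payload").hexdigest()}

def server_side_scan(payload: bytes) -> bool:
    """Flag a payload whose hash matches a known fingerprint.

    This only works when the server can read the plaintext bytes.
    """
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# Unencrypted delivery: the server sees plaintext and can match it.
plaintext = b"known-abusive-payload"
print(server_side_scan(plaintext))   # True -> a report can be filed

# End-to-end encrypted delivery: the server sees only ciphertext, so the
# same content matches nothing (a toy XOR stands in for real E2EE here).
ciphertext = bytes(b ^ 0x5A for b in plaintext)
print(server_side_scan(ciphertext))  # False -> nothing to report
```

Once delivery is end-to-end encrypted, the matching step silently returns nothing, which is one plausible mechanical explanation for the drop in report volume.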

2. What “Chat Control 2.0” would change: voluntary scanning → broader mandate (or pressure to scan)

The term “Chat Control” covers the EU's proposed Child Sexual Abuse Regulation, which would extend earlier voluntary scanning practices into a new regime; historically, some providers (Gmail, Facebook/Instagram Messenger, Skype, Snapchat) voluntarily scanned unencrypted communications under existing rules [2] [4]. The political compromise reported on 26 November 2025 changed the direction of the proposal but did not end the process: trilogues would follow a Council vote, and negotiators still aim to draft a final text [3] [4].

3. The scale and assumptions behind the proposal

Advocates in the Commission projected a large operational impact: the Commission expected a 3.5-fold increase in scanning reports (a 354% rise) under a mandatory model, which signals regulatory intent to detect far more material than current voluntary systems uncover [2]. Critics, including a group of cybersecurity experts and digital-rights advocates, argue that the detection algorithms (for grooming and CSAM) are inaccurate and pose high societal risks with unproven benefits for minors [3].
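Note that the two figures line up only under one reading of the percentage. A quick arithmetic check, with the baseline volume assumed purely for illustration (the sources give no absolute numbers):

```python
baseline = 1_000_000                 # assumed baseline report volume

# Reading 1: reports rise TO 354% of baseline -> roughly the "3.5-fold" figure.
to_354_percent = baseline * 354 // 100
print(to_354_percent / baseline)     # 3.54

# Reading 2: reports rise BY 354% -> a 4.54-fold level, which would not
# match "3.5-fold"; the first reading is the internally consistent one.
by_354_percent = baseline + baseline * 354 // 100
print(by_354_percent / baseline)     # 4.54
```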

4. Technical limits and industry pushback

Technologists and many European tech groups warn that client-side scanning, or any mechanism that inspects content at encrypted endpoints, weakens encryption and “undermines the security and integrity” of services; that is the central technical objection to mandated scanning inside end-to-end encrypted flows [2]. Past EU attempts failed or were delayed over privacy concerns; in October 2025 one version was shelved citing fundamental-rights risks [1]. The political record shows persistent disagreement among member states and institutions [5].
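Mechanically, “client-side scanning” means the match happens on the user's device before encryption, so end-to-end encryption remains formally intact while its confidentiality guarantee is bypassed. A minimal sketch under that description; the on-device hash list, reporting hook, and toy XOR “cipher” are invented for illustration and are not any vendor's actual design:

```python
import hashlib

# Hypothetical fingerprint list pushed to the device by a provider (illustrative).
ON_DEVICE_HASHES = {hashlib.sha256(b"known-abusive-payload").hexdigest()}

def report_match(digest: str) -> None:
    """Stand-in for a reporting channel back to the provider or authorities."""
    print(f"matched before encryption: {digest[:12]}...")

def send_with_client_side_scan(plaintext: bytes) -> bytes:
    # The device inspects the plaintext it is about to encrypt.
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in ON_DEVICE_HASHES:
        report_match(digest)
    # Only afterwards is the message encrypted (a toy XOR stands in for E2EE).
    return bytes(b ^ 0x5A for b in plaintext)

ciphertext = send_with_client_side_scan(b"known-abusive-payload")
# In transit the message is still unreadable ciphertext, yet its content was
# already inspected and reported on the device: the technical objection is
# that this hollows out what end-to-end encryption is supposed to guarantee.
```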

5. Meta’s separate moves on AI chats and data use that complicate the picture

Separately, Meta announced policy and product changes in 2025 that expand how AI interactions are used: starting in mid-December 2025, Meta planned to use interactions with its AI features to personalize content and ads, and some reporting notes that AI chats will be collected and analysed for AI systems, though Meta states that private messages with friends and family are not used to train its AIs unless users opt in [6] [7]. Those changes are distinct from the EU Chat Control debates but feed public anxiety about what “scanning” means in practice [6] [7].

6. Two competing outlooks: safety advocates vs. privacy defenders

Safety proponents argue that platform scanning, voluntary or mandated, is essential to catch predators and reduce CSAM distribution; the Commission's projections show significant expected increases in detections [2]. Privacy advocates and technologists counter that broad or client-side scanning risks mass surveillance, undermines encryption, and could create many false positives with social harms; a group of 18 cybersecurity experts explicitly warned of “high risks” and low demonstrated benefits [3] [2].
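The false-positive worry is at bottom a base-rate problem: when genuinely abusive messages are a tiny fraction of traffic, even a very specific scanner produces far more false alarms than true hits. A rough worked example in which every number is assumed for illustration (the cited sources give no error rates or volumes):

```python
# All figures below are assumptions for illustration, not sourced estimates.
messages_per_day = 10_000_000_000    # messages scanned daily on a large platform
abusive_rate = 1e-6                  # assumed fraction of messages that are abusive
false_positive_rate = 1e-4           # assumed rate of flagging innocent messages

abusive = messages_per_day * abusive_rate
false_alarms = (messages_per_day - abusive) * false_positive_rate

print(f"true positives (at best) per day: {abusive:,.0f}")       # 10,000
print(f"false alarms per day:             {false_alarms:,.0f}")  # 999,999
# At these assumed rates, roughly 100 innocent messages are flagged for
# every genuinely abusive one, which is the critics' core concern.
```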

7. What individuals should realistically do now

Available sources do not offer a simple answer to whether this applies to your private chats today, because the legal text remains under negotiation and implementation would follow votes and trilogue drafting [3] [4]. In the interim, users should track the Council vote dates and Parliament debates, use end-to-end encrypted “Private Mode” features when available, and watch product privacy controls; Meta's AI/data settings were explicitly updated in late 2025 and offer some user controls [8] [6].

8. Bottom line — should you worry?

Yes, the risk is substantive: encryption reduced platform-reported incidents, and the EU is actively pursuing new rules to close that gap; but the final shape of Chat Control 2.0 was not settled at the time of these reports, and it faces strong institutional opposition and technical limits [1] [3] [2]. Follow the Council votes and trilogue texts closely; those will determine whether the political compromise becomes binding law or remains a contested policy space [3] [4].

Want to dive deeper?
What is chat control 2.0 and how does it change data access for tech platforms?
How do end-to-end encryption and server-side scanning differ under new chat-control proposals?
If platforms delete non-encrypted chat logs after 15 years, what legal or technical retention rules still apply?
Which countries have implemented or rejected chat-control-style laws and what were the outcomes?
What practical steps can users take now to protect private conversations from future surveillance policies?