Facebook Messenger, Meta and Chat Control 2.0: Will they use retroactive scanning?
Executive summary
Meta will start using interactions with its “Meta AI” features for personalization and ads from December 16, 2025, but the company and multiple press outlets say this applies to AI interactions, not to ordinary private DMs, unless a user opts in or shares them with Meta AI [1] [2]. Separately, the EU’s “Chat Control 2.0” debate has revived proposals that would expand message-scanning powers for platforms and governments; campaigners and privacy groups warn this could enable wide scanning or client-side measures even where encryption exists [3] [4] [5].
1. What Meta announced — a narrow change to AI-chat data use
Meta’s October announcements and reporting show the company will begin using interactions with its Meta AI (text and voice chats with the assistant) as signals to personalize content and ads across Facebook, Instagram, Messenger and WhatsApp starting December 16, 2025 [1] [6]. Coverage from privacy-focused outlets and vendors (Proton, Social Media Today) underscores Meta’s framing: this update expands use of AI-chat interactions specifically and does not, Meta says, mean wholesale scanning of regular private DMs for AI training without user choice [7] [2].
2. What people fear — conflation between AI-chat signals and general message scanning
Public confusion is widespread because “using AI-chat interactions” sounds similar to “reading private messages.” Several outlets and civil-society pieces caution that while Meta’s stated policy targets Meta AI interactions, people often assume all private messages are now scanned or repurposed. Social Media Today explicitly counters viral claims that Meta will scan private DMs for AI training, reiterating Meta’s line that private messages are not used unless shared with the AI [2]. Independent reporting and blogs note the policy shift increases the volume of conversational data Meta can use for personalization, which raises privacy concerns even if limited to AI-chat content [1] [7].
3. EU Chat Control 2.0 — a parallel policy fight that could change platform obligations
Separately, the EU’s renewed Chat Control push — sometimes called “Chat Control 2.0” — is described in multiple sources as a reboot of earlier proposals to mandate message scanning and age verification across services. Near Future and other watchdogs say the Danish EU presidency reworked the text and that COREPER approved a revised draft on November 26, 2025, raising alarms that the substance may amount to mandatory or de facto scanning, even where encryption had previously been protected [3] [5]. The Electronic Frontier Foundation documents that earlier iterations sought widespread scanning and that the debate remains politically contentious [4] [8].
4. How Chat Control 2.0 might interact with platforms like Meta
Available sources indicate two relevant dynamics. First, EU legislative pressure can compel platforms to adopt scanning or client-side measures to meet legal obligations [5]. Second, civil-society actors argue the revised proposals are a repackaging: for example, replacing “forced” scanning language with “voluntary” scans backed by new requirements (age checks, voluntary scanning regimes) that could still produce large-scale surveillance outcomes [3] [5]. Sources do not provide a finalized legal text or an explicit statement that Meta will perform retroactive scanning of stored messages; that claim is not found in current reporting.
5. Retroactive scanning — what reporting does and does not show
None of the provided sources assert that Meta will retroactively scan all past private messages across its apps as part of the December AI update. Sources focus on future use of AI-chat interactions and on EU legislative proposals that might force or encourage additional scanning capabilities [1] [2] [3]. Reporting from privacy advocates and watchdogs warns that if laws like Chat Control 2.0 pass in a form that requires or incentivizes client-side or provider-side scanning, platforms could be obliged to deploy scanning tools — but those sources do not document a present, company-declared program of retroactive scans by Meta [4] [5].
6. Competing perspectives and hidden incentives
Meta and allied coverage emphasize the limited scope: AI-chat data only, not blanket DM access, framed as product personalization and ad relevance [1] [2]. Privacy groups and tech-privacy outlets frame the EU process and Meta’s policy change as part of a troubling trend: more data collection justified by “safety” or “personalization,” while legislative drafts could quietly expand surveillance powers [3] [4] [5]. Note the actors’ incentives: Meta benefits commercially from richer personalization signals; EU states and some member-state officials have political incentives to show action on child protection and crime, which can expand surveillance authorizations; privacy NGOs have advocacy incentives to highlight broad harms [3] [4] [5].
7. What to watch next
Watch for (a) the full text of any final EU Chat Control law or amendments emerging from trilogue negotiations after COREPER, since the precise obligations will determine whether platforms must scan content [3] [5]; (b) any formal Meta disclosures or policy language about retroactive processing of stored messages, since current reporting lists only forward-looking use of Meta AI chat interactions [1] [2]; and (c) regulatory or litigation challenges from privacy groups that will shape implementation [4] [8].
Limitations: the reporting cited here is fragmented across advocacy outlets, tech press and watchdogs; available sources mention neither a finalized EU law text forcing retroactive scans nor a Meta admission that it will retroactively scan stored private DMs [3] [1] [4] [5] [2].