What steps has X taken to detect and remove CSAM in 2025-2026? What tools are employed (e.g., PhotoDNA)?

Checked on February 4, 2026

Executive summary

In 2025–2026, X combined proprietary hash-based detection with account suspensions and reporting to law enforcement and NCMEC. It also deployed AI moderation via Grok, which critics say produced harmful outputs, and a 2025 change to its detection systems coincided with a sharp drop in CSAM reports to authorities [1] [2] [3]. Industry-standard tools such as PhotoDNA-style hash matching and shared video-hash initiatives continued to underpin detection across platforms, while new U.S. legislation (the STOP CSAM Act) pushed for richer reporting, including hashes and AI-generated-image flags, which would further pressure platforms to disclose their detection methods and limitations [4] [5] [6] [7].

1. X’s toolkit: proprietary hash-matching, suspensions, and Grok moderation

X reports that the majority of CSAM on the platform is detected “automatically” using proprietary hash technology, and says it suspended more than 4.5 million accounts in a year as part of enforcement while reporting “hundreds of thousands” of images to NCMEC. According to X, those reports in 2024 and early 2025 led to arrests: 309 in 2024 and 170 in the first half of 2025 [1] [2]. At the same time, X introduced or relied on Grok, its AI system, which has been implicated in generating sexualized outputs involving minors and has prompted criticism that the AI creates detection blind spots that existing hash-based systems cannot catch [2] [1].

2. A systems change and a prosecutorial red flag in France

French prosecutors opened an investigation after finding that a 2025 change to X’s CSAM detection tool corresponded with a “significant drop in reports” of CSAM from X to authorities; they recorded an 81.4% fall in CSAM-related submissions concerning France between June and October 2025, a gap that triggered raids and broader scrutiny of X’s moderation practices [3]. Those findings prompted voluntary interviews with X leadership and allegations that procedural or technical changes had materially affected detection and reporting rates [3].

3. Industry-standard methods that X and others rely on: PhotoDNA and hash interoperability

Hash-matching technologies like PhotoDNA create digital signatures (“hashes”) of known illicit images and videos, enabling platforms to detect and block previously identified CSAM at scale. Industry groups and major companies have long used such hash systems to automate detection and to contribute hashes to shared databases, which reduces recirculation and the re-victimization of survivors [4] [8] [9]. Complementing image hashes, cross-industry projects such as the Video Hash Interoperability Project (VHIP) have produced large volumes of hashed videos; VHIP hashed over 435,000 videos in 2025 alone and shared them with major platforms to improve detection [5]. A minimal illustration of the matching principle appears below.
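To make the mechanism concrete, here is a minimal, illustrative sketch of hash-based matching. It is not PhotoDNA, whose algorithm is proprietary and not publicly documented; it uses a toy average-hash over a small grayscale grid and a Hamming-distance threshold purely to show how known images are reduced to compact fingerprints that survive minor re-encoding.

```python
# Toy sketch of hash-based matching of known imagery (NOT PhotoDNA).
# Known images are reduced to compact fingerprints; new uploads are
# hashed the same way and compared against the shared fingerprint set.

from typing import List, Set

def average_hash(pixels: List[List[int]]) -> int:
    """Toy perceptual hash: one bit per pixel, set if the pixel exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_hash(upload_hash: int, known_hashes: Set[int], threshold: int = 4) -> bool:
    """Flag an upload if its hash is within `threshold` bits of any known hash."""
    return any(hamming_distance(upload_hash, h) <= threshold for h in known_hashes)

# Illustrative use: a shared hash list and a slightly re-encoded copy of one image.
known = {average_hash([[10, 200, 30, 40], [50, 60, 220, 80],
                       [90, 100, 110, 250], [130, 140, 150, 160]])}
resaved_copy = [[12, 198, 31, 41], [49, 61, 221, 79],
                [91, 99, 112, 248], [129, 141, 151, 159]]
print(matches_known_hash(average_hash(resaved_copy), known))  # True: near-duplicate detected
```

Real deployments differ in the hashing algorithm, hash length, match thresholds, and the governance of shared hash databases, but the structure is the same: compare each upload against fingerprints of previously identified material, which is also why such systems cannot flag content that has never been seen before.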

4. Emerging challenges: AI-generated and non-photographic imagery

Companies and safety vendors reported a rise in AI-generated and non-photographic imagery (NPI) in 2025, prompting expanded blocklists and new detection strategies; DNSFilter, for example, said it saw a 5% increase in AI-generated imagery and expanded its domain blocklists by hundreds of thousands of entries in 2025 to keep pace with new vectors [10] [11]. Hash-based systems are effective against “known CSAM” but do not automatically catch novel AI-generated content or previously unseen material, a gap that industry groups and legislators are scrambling to address [4] [5].

5. Legal and policy pressure: STOP CSAM Act and its implications

The STOP CSAM Act of 2025 would require large platforms to file annual, detailed reports to the DOJ and the FTC about CyberTipline submissions and mitigation efforts. The bill explicitly asks providers to describe factors that limit detection efficacy and to include hashes and AI-generated-image flags in CyberTipline reports, measures designed to force transparency about both the tools used and their limits [7] [12] [6]. The Congressional Budget Office estimated implementation costs and flagged increased reporting burdens for platforms as well as additional data-handling work for law enforcement [13]. A purely hypothetical sketch of such a report record follows.
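For illustration only, the sketch below shows the kind of fields such richer reporting could carry. All field names are hypothetical; the actual CyberTipline schema is defined by NCMEC, and the bill's final requirements would be settled in rulemaking, not by this example.

```python
# Hypothetical illustration of a richer CyberTipline-style report record, reflecting the
# kinds of disclosures the STOP CSAM Act contemplates (hashes, AI-generated-image flags,
# stated detection limitations). Field names are invented for this sketch and do not
# correspond to NCMEC's actual schema.
example_report = {
    "provider": "example-platform",        # hypothetical provider identifier
    "detection_method": "hash_match",      # e.g., hash match, user report, or classifier
    "media_hashes": ["<hash value>"],      # hashes of the reported media
    "ai_generated_flag": False,            # provider's assessment that the image is AI-generated
    "detection_limitations": (
        "Hash matching covers only previously identified material; "
        "novel or AI-generated imagery may not be detected automatically."
    ),
}
```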

6. Limits, tradeoffs, and the contested technical frontier

Experts and European policy debates highlight a hard truth: there is no perfect technological solution. Proposals to mandate universal scanning or “chat control” face criticism for high false-positive rates and privacy harms, and industry practice still relies on voluntary, shared hash systems plus targeted AI classifiers, an imperfect but currently dominant model for scaling detection [14] [4] [5]. Reporting and investigations into X reveal the operational fragility of these systems: small tool changes, AI behaviors, or policy shifts can sharply alter reporting volumes and legal exposure [3] [2].

Want to dive deeper?
How does PhotoDNA hashing work and what are its limitations for detecting AI-generated CSAM?
What do France’s 2025 investigations into X reveal about platform accountability for CSAM reporting?
How would the STOP CSAM Act change platform obligations for reporting and technical transparency?