How does the National Center for Missing & Exploited Children handle reports of AI‑generated CSAM and which companies filed reports in 2024–2025?

Checked on January 2, 2026

Executive summary

The National Center for Missing & Exploited Children (NCMEC) treats AI‑generated child sexual abuse material (AIG‑CSAM) as CSAM: reports are routed through its CyberTipline, where analysts triage them, work to identify locations and victims, and refer matters to law enforcement, relying on industry partners for detection and metadata [1] [2] [3]. Reporting from platforms has surged: NCMEC recorded 4,700 generative‑AI‑related reports in 2023 and about 67,000 in 2024, multiple outlets put first‑half 2025 AIG‑CSAM reports between roughly 440,000 and 485,000, and at least one major company, OpenAI, has publicly disclosed the volume of its CyberTipline submissions in 2024–2025 [4] [5] [6] [7] [8].

1. How NCMEC legally and operationally treats AI‑generated CSAM

NCMEC’s public stance is categorical: generative AI CSAM qualifies as CSAM and should be processed as such. Material enters the CyberTipline workflow regardless of whether it depicts a real child, is analyzed by NCMEC units such as the Child Victim Identification Program (CVIP), and can prompt location and victim‑identification efforts as well as referrals to U.S. and international law enforcement [1] [3] [2]. That workflow relies on file hashing, metadata, and triage; NCMEC encourages platforms to supply complete report metadata through its reporting APIs so analysts can prioritize leads that may point to real‑world abuse [9] [10].
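As a rough illustration of the kind of structured metadata such reports can carry, the sketch below uses hypothetical field names (this is not the actual CyberTipline API schema) to show how a platform might bundle a file hash, account details, and an AI‑origin flag so analysts have enough context to triage a lead.

```python
# Illustrative only: hypothetical field names sketching the kind of metadata
# NCMEC asks platforms to include so analysts can triage and locate victims.
# This is NOT the actual CyberTipline API schema.
report = {
    "incident_time_utc": "2025-06-01T14:32:00Z",
    "uploader_ip": "203.0.113.7",            # documentation-range example IP
    "uploader_account_id": "acct-12345",     # platform-side account identifier
    "file_sha256": "e3b0c44298fc1c149afbf4c8996fb924...",
    "suspected_ai_generated": True,          # flag distinguishing AIG-CSAM
    "reporting_platform": "ExamplePlatform",
}
```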

2. The technical chain: detection, hashing, and third‑party tools

Because platforms and investigators cannot always tell from the content alone whether an image is AI‑generated, NCMEC and its partners use a mix of content hashing (to block known CSAM), specialized detectors for AI origin, and vendor tools, some developed with child‑safety nonprofits, to automate filtering and reporting; the government and NGOs have contracted or partnered with private detection firms (such as Hive) and nonprofits (such as Thorn) to scale identification efforts [9] [3]. Independent researchers and Stanford’s Internet Observatory caution that the CyberTipline’s ingestion and triage systems, particularly the platform reporting API fields, must be modernized because the volume and nature of AIG‑CSAM can overwhelm human analysts and impede effective follow‑up [10] [2].
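To make the hash‑matching step concrete, here is a minimal sketch, assuming a hypothetical set of known hash values supplied by a clearinghouse. Production systems rely on perceptual hashes such as PhotoDNA so that re‑encoded or lightly edited copies still match, which the plain cryptographic hashes shown here cannot do.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_match(path: str, known_hashes: set[str]) -> bool:
    """Return True if the file's hash appears in the known-hash list."""
    return sha256_of_file(path) in known_hashes
```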

3. The scale: year‑over‑year surges and conflicting mid‑2025 counts

NCMEC and allied organizations report rapid escalation: the CyberTipline’s recorded generative‑AI reports rose from roughly 4,700 in 2023 to about 67,000 in 2024, a jump of roughly 1,325 percent by some tallies, and multiple mid‑2025 updates put first‑half AIG‑CSAM reports in the hundreds of thousands, with sources citing figures such as ~440,419 and 485,000 for H1 2025 [4] [8] [7] [6]. Different outlets and advocacy groups report slightly different mid‑year totals depending on cutoffs and definitions (still images vs. videos), which illustrates both the scale of the problem and the data‑clarity issues facing researchers and policymakers [6] [7].
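For readers checking the arithmetic, the cited percentage follows directly from the two yearly totals:

```python
# Year-over-year increase in generative-AI CyberTipline reports, per the figures above.
reports_2023 = 4_700
reports_2024 = 67_000

pct_increase = (reports_2024 - reports_2023) / reports_2023 * 100
print(f"{pct_increase:.1f}%")  # prints 1325.5%, consistent with the ~1,325 percent jump cited
```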

4. Which companies filed reports in 2024–2025 (what is known and what is not)

OpenAI has publicly disclosed a dramatic increase in CyberTipline reports: 947 reports covering 3,252 pieces of content in the first half of 2024, versus 75,027 reports covering 74,559 pieces of content in the first half of 2025, an approximately 80‑fold increase by the company’s own update [8] [11]. Beyond OpenAI, large AI labs and online platforms (for example Google) publish transparency statistics about NCMEC reports but often stop short of specifying how many are AI‑related, so public records do not provide a comprehensive, named list of all companies that filed AI‑specific CyberTipline reports in 2024–2025 [8]. Federal law requires U.S. platforms to report CSAM to NCMEC, and industry‑level reporting and vendor disclosures suggest multiple platforms and detection vendors are involved, but the sources supplied do not enumerate other corporate filers by name [2] [9].
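The fold increase follows from the disclosed report counts:

```python
# OpenAI's first-half CyberTipline report counts, per the company's disclosure.
h1_2024_reports = 947
h1_2025_reports = 75_027

print(f"{h1_2025_reports / h1_2024_reports:.1f}x")  # prints 79.2x, i.e. roughly an 80-fold increase
```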

5. Tensions, limitations and the path forward

The system is straining: nonprofits, academics, and industry warn that the sheer volume of AIG‑CSAM risks drowning investigators and that reporting standards, APIs, and automation must improve to preserve the CyberTipline’s investigative value. At the same time, vendors and platforms are racing to produce detection tools and transparency reports, and lawmakers have introduced or backed legislation (e.g., the ENFORCE Act) to tighten legal clarity around AI‑generated CSAM [10] [7] [9]. Public data confirm the trend, and OpenAI’s disclosures confirm one company’s sharply increased reporting, but the sources do not provide a full, verified roster of every corporate filer in 2024–2025, so conclusions about the complete set of reporting companies must remain qualified by the limits of available disclosures [8] [2].

Want to dive deeper?
What technical methods do platforms use to detect AI‑generated images and how accurate are they?
How has legislation like the ENFORCE Act changed prosecutorial tools for AI‑generated CSAM?
Which private vendors (e.g., Hive, Thorn) supply detection and hashing services to NCMEC and how are those contracts structured?