Did X (the company) file NCMEC reports in 2025 over Grok AI-generated CSAM?

Checked on January 2, 2026

Executive summary

NCMEC’s CyberTipline recorded an unprecedented surge in reports classified as involving generative AI and child sexual abuse material (CSAM) in 2025. Multiple advocacy and platform sources cite first-half figures in the hundreds of thousands; the counts vary across reporting but all point to an enormous spike [1] [2] [3]. Public company disclosures show that at least one major AI provider, OpenAI, sharply increased its filings to NCMEC in the first half of 2025. None of the supplied reporting, however, confirms that a company named “X” or a model called “Grok” specifically filed NCMEC reports in 2025; that absence should be treated as a limit of the available records rather than proof of no reporting [4] [5].

1. The scale: mid‑2025 numbers and why they diverge

NCMEC-related reporting in 2025 was described as a “wake-up call,” with midyear statistics released to communicate a fast-moving threat. Multiple outlets and NGOs cite first-half 2025 AI-CSAM report totals of roughly 440,000 to 485,000, up from tens of thousands in 2024 and only a few thousand in 2023 by some counts, illustrating both massive growth and real inconsistency across organizations [1] [2] [3] [6]. Thorn, for instance, summarizes the NCMEC figures as jumping from 6,835 in 2024 to 440,419 in the first half of 2025 [1]; WTOC and other outlets report similar midyear totals and trace earlier year-over-year increases [2]; advocacy groups cite slightly different totals [7] [3]. The differences reflect timing, how “AI-generated” is defined, and whether platforms flagged items as AI-produced when submitting CyberTipline reports.

2. Platform disclosures and what they reveal — OpenAI as a documented example

At least one large AI platform publicly acknowledged sharply increased reporting to NCMEC: OpenAI said it sent roughly 80 times as many child-exploitation incident reports in the first half of 2025 as in the same period of 2024, that it reports confirmed CSAM to NCMEC, and that it bans accounts tied to such content [4] [5]. That disclosure shows how provider transparency can establish a company’s reporting role, but it is the exception in the record supplied: many platforms may detect and report CSAM yet not consistently label whether content is AI-generated when they submit to NCMEC, which complicates efforts to attribute the source or measure the generative-AI subset [7] [8].

3. Why attribution and labeling matter — and why they’re inconsistent

Researchers and platform staff interviewed in recent studies find that mainstream platforms report CSAM to the CyberTipline but do not systematically mark items as AI-generated, leaving identification to NCMEC and law enforcement and producing wide variation in public tallies [7] [8]. Tech policy commentators and civil-society groups warn that reporting rules and technical limits, combined with differing definitions of “AI-generated,” produce reporting artifacts rather than a single canonical count [7] [8].

4. Law, policy, and the push for clearer accountability

Congressional and advocacy efforts in 2024–2025 sought to tighten obligations on platforms. Proposals such as the STOP CSAM Act and the ENFORCE Act aim to create clearer reporting requirements, mandate better retention and labeling of CyberTipline submissions, and close legal gaps around AI-produced CSAM; Thorn and other NGOs explicitly supported updating federal law to address the AI angle [1] [9] [8]. These policy moves respond directly to NCMEC’s testimony that the CyberTipline saw extraordinary volumes, and to research showing that short retention windows and inconsistent labeling hampered investigations [10] [11].

5. What the available reporting does and does not say about “X” company or Grok

The sources confirm a dramatic increase in AI-related CSAM reports to NCMEC in 2025 and show that at least one major AI firm, OpenAI, substantially increased its CyberTipline filings [1] [4] [5]. None of the provided materials, however, directly states that a company named “X” or a product called “Grok” submitted NCMEC reports in 2025. The absence of that explicit attribution is a limitation of the supplied reporting, not evidence of non-action. To determine whether a specific firm or model filed reports, consult that company’s transparency disclosures and NCMEC’s public CyberTipline summaries, or request confirmation from the company and NCMEC directly [4] [7].

Want to dive deeper?
How does NCMEC classify and publish CyberTipline data on AI‑generated CSAM?
Which AI companies published 2025 transparency reports detailing NCMEC filings and what did they disclose?
How do proposed laws like the STOP CSAM Act and ENFORCE Act change platform reporting and labeling requirements for AI‑generated CSAM?