
Has any federal prosecution charged creators of AI-generated child sexual images since 2020 or 2023?

Checked on November 9, 2025

Executive Summary

Federal prosecutors have charged at least some creators of AI‑generated child sexual images in recent years: multiple sources document federal indictments and prosecutions after 2023, and several analyses identify specific federal cases treating AI‑generated child sexual abuse material as prosecutable under existing statutes. The clearest, best‑documented federal prosecutions cited in the materials are indictments from 2024 and cases described in 2024–2025 reporting; the evidence for prosecutions stretching back to 2020 is weaker and not consistently documented across the sources [1] [2] [3] [4].

1. What claimants assert and what the documents say about prosecutions

The collected analyses make two central factual claims: first, that the Department of Justice has unsealed federal indictments charging individuals for creating AI‑generated child sexual abuse material; second, that at least one such prosecution may represent the first federal case focused on wholly AI‑generated imagery. The reporting identifies named defendants and federal filings—for example, a 2024 indictment against Steven Anderegg charging production, distribution, and possession of AI‑generated CSAM created with Stable Diffusion, and other federal matters discussed in 2024–2025 coverage [1] [2]. One source frames these as among the earliest federal actions targeting generative‑AI CSAM, signaling a prosecutorial shift toward treating synthetically produced material equivalently to traditional CSAM under federal law [5] [1].

2. Concrete cases: who was charged and when prosecutors moved

The sources converge on several concrete prosecutions. Reporting cites a DOJ indictment in May 2024 against Steven Anderegg for creating and distributing AI‑generated child sexual images; coverage characterizes this as possibly the first federal case premised on purely AI‑generated imagery [1] [2]. Additional reporting and legal summaries point to other federal prosecutions—such as a Western District of North Carolina case involving David Tatum where generative AI was used to alter images of minors into child pornography, and a 2025 case involving a school teacher using AI tools to create explicit videos of students—which indicate that federal prosecutions have continued into 2024–2025 [4] [3]. These cases show prosecutors applying existing child‑pornography statutes to synthetic images.

3. Did prosecutions occur as far back as 2020? The evidence gap

None of the supplied analyses provide firm documentation of federal prosecutions specifically charging creators of AI‑generated child sexual images in 2020. Sources note a marked increase in CSAM cases since 2020 generally, but they do not identify federal indictments from that year targeting AI‑generated material. The documented federal actions appear to cluster in 2024–2025, reflecting the timeframe when generative models like Stable Diffusion became widely accessible and law enforcement publicly flagged AI‑generated CSAM as a prosecutorial priority [5] [3]. Therefore, the claim that federal prosecutions occurred in 2020 is not supported by the materials provided.

4. How courts and commentators frame liability for synthetic CSAM

The supplied analyses show two legal frames: prosecutors and federal filings treating synthetic CSAM as actionable under existing federal child‑pornography statutes, and some court rulings and commentary exploring First Amendment and other defenses in cases involving AI‑generated imagery. One report notes that possession prosecutions have raised constitutional questions in certain contexts, while DOJ filings and indictments emphasize the equivalence of AI‑generated child sexual material to real‑victim CSAM for criminal liability [2] [6]. This tension—criminal enforcement versus constitutional limits—appears in coverage and underscores that prosecution strategies and judicial outcomes may vary by charge and jurisdiction.

5. Where sources converge and where they diverge in emphasis

All sources agree that federal prosecutors have pursued cases involving AI‑generated CSAM in 2024–2025 and that DOJ has publicly signaled enforcement against creators of synthetic child sexual imagery. Sources diverge on the historical sweep: some analyses suggest prosecutions have occurred “since 2020” by pointing to a broader rise in CSAM prosecutions since then, while others document specific federal indictments only from 2024 onward [5] [3]. Another divergence concerns characterization of novelty: some outlets label the 2024 cases as possibly first‑of‑their‑kind federal prosecutions for purely AI‑generated images, while other pieces treat them as logical extensions of longstanding CSAM enforcement [1] [4].

6. Bottom line and open questions prosecutors must answer

The provable bottom line in the supplied materials is that the Department of Justice and federal prosecutors have charged individuals for creating AI‑generated child sexual images in 2024–2025, including high‑profile indictments and district‑court prosecutions; the claim that such federal prosecutions occurred as early as 2020 is not substantiated by these sources. Key open questions remain: how outcomes will vary across jurisdictions, how appellate courts will resolve constitutional defenses, and whether earlier, unpublicized federal matters existed before 2024. Answering the last would require primary court documents or DOJ press releases beyond the supplied analyses [1] [4].

Want to dive deeper?
What US federal laws apply to AI-generated child sexual abuse material?
Have there been any convictions for creating AI deepfake CSAM since 2023?
How has the DOJ responded to rising AI child exploitation cases post-2020?
Are AI-generated images considered child pornography under the PROTECT Act?
What international precedents exist for prosecuting AI-generated child sexual images?