List of all generative-AI CSAM cases in the U.S.
Executive summary
There is no single publicly available, authoritative list of every U.S. criminal case involving AI‑generated child sexual abuse material (CSAM). Reporting and court decisions show a small number of prosecuted matters and a larger set of investigative findings and alerts, but the public record is incomplete and fragmented [1][2]. Known prosecutions are rare but consequential, and researchers, nonprofits, and federal agencies warn that many incidents circulate online without corresponding public criminal filings [3][1].
1. Known federal prosecutions: Anderegg and related DOJ filings
The clearest, most widely reported federal prosecution named in public sources is the DOJ indictment of software engineer Steven Anderegg, charged with producing, distributing, and possessing AI‑generated CSAM among other counts. The court rejected some of his motions but dismissed a private‑possession count under the child‑obscenity statute as unconstitutional as applied in that context [4][2]. Reporting on Anderegg emphasizes the government’s claim that his alleged professional AI skills and use of a version of Stable Diffusion enabled the creation of hyper‑realistic illegal images, and that prosecutors framed the case as posing a serious danger and sought pretrial detention [4].
2. FBI‑publicized case in Charlotte: AI images tied to real‑child material
The FBI published a case summary from Charlotte in which agents charged an individual with possession of child pornography involving AI‑generated images, because the images were based on real minors and met federal sexual‑explicitness thresholds. According to the Bureau, the investigation also surfaced additional images and videos that gave prosecutors multiple avenues for charges [3]. The FBI framed the outcome as an example of how hyper‑realistic AI renderings can be prosecuted when they are traceable to, or derived from, real victims [3].
3. State and local prosecutions: scattered examples and a Nebraska transport case
State‑level activity is uneven. Advocacy tracking by Enough Abuse reports that many states have enacted statutes criminalizing AI‑generated or computer‑edited CSAM, and it cites the prosecution of an Omaha, Nebraska man for transportation of CSAM that included AI‑generated material, though it supplies no comprehensive case listing [5]. This patchwork of state laws and isolated prosecutions underscores that charges can proceed under different theories, including transportation, possession, and distribution, but public documentation of every instance is lacking [5].
4. Investigations and prevalence: dozens to hundreds of online instances, few prosecutions
Journalistic and nonprofit investigations have documented many instances of AI‑generated CSAM circulating online; one multi‑outlet investigation reports uncovering more than a hundred instances across platforms ranging from dark‑web forums to mainstream social media [1]. These findings do not translate into a neat roster of criminal cases, because many instances never result in public indictments or court records [1]. Nonprofits and the FBI have warned that AI‑generated CSAM floods reporting systems and strains investigators, making it hard to separate prosecutable offenses from nonprosecuted circulation and to compile a complete caseload inventory [6][3][7].
5. Legal ambiguity and reporting limits: constitutional rulings and statutory gaps affect counts
Court decisions and statutory nuance further complicate any attempt at a complete list: federal CSAM statutes generally criminalize material involving real minors, while statutes addressing “virtual” or obscene depictions have faced constitutional scrutiny. One court, for example, ruled that Section 1466A could not be applied to private possession of certain “virtual” obscene CSAM, so whether particular AI images lead to charges depends on the legal theory, evidentiary proof of real‑child involvement, and prosecutorial choices [2][8]. Sources repeatedly note that federal law treats AI‑generated CSAM as illegal when it depicts or is derived from real children, but gaps remain in how statutes, enforcement, and safe‑harbor approaches for researchers intersect, limiting any authoritative public roster [9][6].
6. Bottom line and evidence constraints
A verified, exhaustive list of “all” U.S. generative‑AI CSAM cases does not exist in the public record assembled by these sources. The clearest named federal matters are the Anderegg prosecution and the FBI‑publicized Charlotte case; state trackers cite other prosecutions, such as the Omaha transportation case; and journalists and nonprofits document scores or hundreds of circulating items that have not been catalogued as completed criminal cases [4][3][5][1]. A researcher seeking a fuller roster would need to compile court dockets, DOJ press releases, state prosecutor records, and investigative reporting, and should expect gaps where investigations are ongoing, sealed, or never brought to public court filings [1][6].