How have U.S. courts ruled on possession charges for AI‑generated CSAM and what precedents exist for First Amendment defenses?

Checked on January 26, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

A federal district court recently dismissed a possession charge tied to wholly AI‑generated obscene images, finding 18 U.S.C. § 1466A unconstitutional as applied to private possession of virtual CSAM and invoking Stanley, Osborne, and Ashcroft to map the limits on prosecution [1]. At the same time, prosecutors, Congress, and many states have moved to treat AI‑generated CSAM as criminal, and courts have split on whether First Amendment precedent leaves room to punish synthetic material that either appears indistinguishable from abuse of real children or is obscene under Miller [2] [3] [4].

1. Uneasy victory for speech protections: the Anderegg dismissal

In the Anderegg case, a federal district court dismissed a possession charge under 18 U.S.C. § 1466A brought against a defendant alleged to possess obscene images created entirely by AI, holding that in light of First Amendment doctrine the statute cannot constitutionally reach private possession of virtual CSAM. The court relied explicitly on Stanley v. Georgia, Osborne v. Ohio, and Ashcroft v. Free Speech Coalition to delimit the categories of speech that remain unprotected [1].

2. What Supreme Court precedent says—and does not—about virtual material

The Supreme Court has drawn three lines that frame the question. Stanley v. Georgia held that private possession of obscene material in the home is protected; Osborne v. Ohio held that possession of child pornography made with real children may nonetheless be criminalized; and Ashcroft v. Free Speech Coalition struck down a broad ban on “virtual” depictions because no real children were harmed in their production. Those decisions form the doctrinal scaffolding courts use when deciding whether AI‑made images are punishable speech [1] [5] [6].

3. The obscenity route and the Miller test as a prosecutorial path

When prosecutors target synthetic CSAM, one pathway is to show the images are legally obscene under the three‑pronged Miller test. That requires jury factfinding under community standards and is harder to prove than possession of photographic CSAM, which needs no obscenity showing, but it remains available and has been cited as a way to criminalize some AI material while staying within First Amendment limits [4].

4. Prosecutors and legislators pushing criminal liability for AI‑made CSAM

Despite the Anderegg dismissal, federal enforcement agencies and many prosecutors treat AI‑generated CSAM as criminal and have pursued charges for production, distribution, and possession. Congress and a growing number of states, often urged on by advocacy groups, have enacted or proposed statutes that specifically criminalize AI‑generated or deepfake CSAM to close perceived gaps [2] [3] [7] [8].

5. Lines that matter in litigation: “virtually indistinguishable,” training data, and mens rea

Courts and scholars emphasize several factual lines that drive outcomes: whether a synthetic image is “virtually indistinguishable” from a depiction of real abuse (which can push it outside First Amendment protection), whether the generating model was trained on photographic CSAM, and what mens rea the statute requires. Those questions largely determine whether the material is treated as criminal conduct or protected expression [9] [5] [4].

6. Residual uncertainty and the practical consequences for defendants

The litigation landscape remains unsettled. Courts have not reached a uniform rule for images created entirely by generative models, commentators warn that prosecutorial approaches and state statutes vary widely, and legal scholars stress evidentiary and mens rea challenges. The boundaries of First Amendment defenses will therefore keep shifting as new cases and statutes test Ashcroft, Miller, and Osborne in the AI era [5] [4] [8].

7. Bottom line: precedent narrows but does not foreclose defenses

Existing Supreme Court precedent gives defendants a viable First Amendment defense when material is wholly virtual and not obscene, as recent district rulings recognize, but those defenses are fragile: prosecutors can still pursue convictions by proving obscenity, resemblance to identifiable real victims, or other statutory elements, and legislatures are actively tightening laws to shrink the space for such defenses [1] [4] [2].

Want to dive deeper?
How have courts applied Ashcroft v. Free Speech Coalition to AI‑generated sexual images in recent decisions?
What evidence and expert testimony do prosecutors use to show AI‑generated images are "virtually indistinguishable" from real CSAM?
How do state laws differ in criminalizing AI‑generated CSAM and which states have the broadest prohibitions?