Which U.S. states currently lack explicit statutes covering AI‑generated child sexual abuse material (CSAM)?

Checked on January 9, 2026

Executive summary

Reporting shows a patchwork: at least 20 states have enacted laws that expressly prohibit AI‑generated sexual images of minors, but many state codes remain unamended and do not explicitly mention AI‑generated CSAM. Publicly available trackers and analyses do not publish a definitive, authoritative state‑by‑state list of which states lack explicit statutes [1] [2] [3].

1. The landscape: partial coverage, federal backstop

Multiple outlets and legal trackers document a surge of state activity addressing sexual deepfakes and AI‑created imagery. GovTech reports that 20 states have specifically outlawed sexual deepfakes of minors by updating CSAM statutes or adding targeted prohibitions [1], and Washington University's AI policy resource notes that "many states have amended their statutes to clarify that CSAM laws apply regardless of whether real children were used to create the images," while acknowledging variability across state codes [2]. At the same time, federal law already criminalizes computer‑generated material that is "virtually indistinguishable" from real child pornography, creating a federal enforcement floor even where state statutes are silent [4].

2. What the sources can — and cannot — say about states that lack explicit AI‑CSAM laws

The available reporting and legal trackers collected by trade groups and law firms (IAPP, Orrick, BCLP, King & Spalding) document enacted state AI laws and ongoing legislative activity, but the materials provided here do not publish a complete roster of which states have not added explicit AI‑CSAM language to their penal codes. GovTech's count of 20 states with explicit prohibitions is cited, but the complementary list of the roughly 30 remaining states without explicit statutes is not furnished in these sources [1] [3] [5]. Therefore, while it is accurate to say that many states have not yet enacted specific AI‑CSAM provisions, the reporting supplied does not allow naming each state that presently lacks explicit statutory language without further state‑by‑state statute checks [1] [2].

3. Why the ambiguity matters: enforcement, prosecution, and deterrence

That ambiguity creates practical gaps. Prosecutors in states without explicit AI‑CSAM amendments may still rely on federal statutes that outlaw synthetic CSAM indistinguishable from real images, or on preexisting state CSAM statutes interpreted broadly. But reliance on interpretation rather than explicit statutory language can affect charging decisions and sentencing, and it undercuts the legislative clarity favored by advocates and platforms seeking bright‑line rules [4] [2]. Legal commentators and civil‑society advocates argue that expressly updating state codes closes loopholes and provides clearer notice for enforcement and safe‑harbor debates, a point made repeatedly in policy trackers and law‑firm briefings [2] [6].

4. Politics and incentives shaping whether states act

The policymaking pattern is shaped by competing incentives: states actively pursuing tech regulation have moved to plug AI‑CSAM gaps, while others have prioritized studies, task forces, or broader AI governance frameworks first, as tracked by IAPP and state AI law trackers [3] [5]. Federal executive actions have signaled a preference for minimizing state regulatory friction while carving out child‑safety exceptions, which Stanford Cyberlaw's analysis of an administration executive order reads as an implicit green light for state AI‑CSAM laws [7]. Those signals, plus pressure from victim‑advocacy groups, explain why a cluster of states acted quickly while many others remain in legislative motion [7] [1].

5. Bottom line and reporting limitation

The clearest, supportable answer from these sources: at least 20 states have enacted explicit prohibitions on sexual deepfakes of minors [1], many other states have amended CSAM statutes to cover synthetic material [2], and federal law criminalizes "virtually indistinguishable" synthetic CSAM [4]. The available trackers and articles here do not provide a definitive, sourced list identifying each state that currently lacks explicit AI‑CSAM statutory language; a state‑by‑state statutory review would be required to produce that complete list [5] [3].

Want to dive deeper?
Which 20 states have passed explicit laws banning sexual deepfakes of minors, and what do those laws say?
What are the key differences between federal statutes on synthetic CSAM and state statutes that specifically mention AI?
How have prosecutors used existing federal law to pursue cases involving AI‑generated CSAM in the absence of state statutes?