Which U.S. states specifically criminalize creation (not just distribution) of AI‑generated sexual images — state statutes and code citations?
Executive summary
A growing but uneven patchwork of state laws now criminalizes the production (not merely the distribution) of AI‑generated sexual imagery, especially where the victims are minors. Notable explicit statutory prohibitions include California's AB 1831, and several states have amended their CSAM or "deepfake" statutes to cover creation, but no single public source in the reporting provides a definitive, citation‑by‑citation nationwide list (Enough Abuse; Crowell & Moring) [1] [2] [3].
1. California: an explicit statute criminalizing creation of AI‑generated CSAM
California enacted AB 1831, which criminalizes the creation, distribution, and possession of AI‑generated child sexual abuse material (CSAM); the reporting identifies AB 1831 as expressly covering AI‑generated CSAM and as making production itself a crime, not only distribution [1] [4].
2. Texas and statutory complexity: multiple provisions targeting production and intent
Texas enforces a trio of criminal provisions that reach AI‑generated sexually explicit media. Penal Code §21.165 is cited for unlawful production/distribution of certain sexually explicit media, and other statutes (including the §43.26/§43.261 variants discussed in the reporting) can reach non‑consensual deepfakes and obscene material. Texas's approach layers obscenity tests and intent elements that affect prosecutions for creation as well as distribution [5].
3. Washington, New Jersey and other states adding adult‑deepfake offenses
Washington recently created a criminal offense titled "Disclosing Fabricated Intimate Images," which criminalizes disclosure of sexually explicit AI‑generated videos and can carry gross misdemeanor or felony penalties for repeat offenders; note that the offense targets the disclosure of fabricated imagery rather than its creation per se [6]. New Jersey's 2025 law (P.L.2025, c.40) goes further on the creation side: it establishes criminal penalties for producing or disseminating deceptive audio/visual media and makes the creation or use of deepfakes to victimize minors or adults a crime [7].
4. Broad pattern: many states amend CSAM or revenge‑porn laws to criminalize creation, but statutory language varies
Advocacy and legal trackers report that roughly 20 states have amended CSAM laws to explicitly prohibit sexual deepfakes of minors, and that about 30 states address nonconsensual deepfake intimate imagery in some form. Statutory wording differs, however: some statutes explicitly criminalize "producing" or "creating" AI/computer‑generated images, while others use terms like "digitally reproduced," "morphed," or "produced by electronic means," leaving ambiguity about whether pure AI generation (not based on a real photograph) is covered; a rough triage of that wording is sketched below [8] [2] [9].
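To make the wording variance concrete, here is a minimal, hypothetical triage helper; the function name and term lists are assumptions drawn only from the example terms above, and any real review still requires reading the statute in full.

```python
# Hypothetical first-pass triage of statutory wording; the term lists
# come from the examples discussed above and are not exhaustive.
ALTERATION_TERMS = ["digitally reproduced", "morphed",
                    "produced by electronic means"]
CREATION_TERMS = ["produc", "creat"]  # matches "produces", "creating", ...

def triage_statute_text(text: str) -> str:
    """Label a statute excerpt by the verb family it uses.

    'alteration-language' statutes may not reach purely AI-generated
    images; 'explicit-creation' statutes name production directly.
    """
    lowered = text.lower()
    # Check the more specific alteration phrases first, since
    # "produced by electronic means" also contains "produc".
    if any(term in lowered for term in ALTERATION_TERMS):
        return "alteration-language"
    if any(term in lowered for term in CREATION_TERMS):
        return "explicit-creation"
    return "unclear"

print(triage_statute_text(
    "knowingly produces or creates a visual depiction ..."))      # explicit-creation
print(triage_statute_text(
    "an image digitally reproduced or morphed from an image ..."))  # alteration-language
```

A classifier like this only surfaces candidates for closer reading; an "unclear" or "alteration-language" result says nothing definitive about whether a court would find pure AI generation covered.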
5. Federal overlay and enforcement priorities complicate the map
The Take It Down Act makes it a federal crime to knowingly publish sexually explicit images (real or digitally manipulated) without consent, and it imposes platform takedown duties. It interacts with state laws by criminalizing publication and by pressuring platforms to remove material even where state statutes differ; the reporting notes that the federal criminal prohibitions took effect immediately while the platform obligations phase in through 2026 [3] [10] [9].
6. Limits of available reporting and how to get definitive citations
Public reporting and legal trackers (Enough Abuse, Crowell & Moring, Orrick, GovTech, Morgan Lewis) document many state moves but do not, in the supplied sources, deliver a complete set of statute citations for every state that criminalizes creation rather than only distribution; Enough Abuse provides an interactive state‑by‑state database, and legal trackers can be used to extract exact code sections [2] [11] [12]. Researchers seeking a definitive, citation‑level inventory must consult each state's revised penal code or the cited legal databases, because the reporting shows wide variance in statutory language and separate enforcement mechanisms; one way to organize such an inventory is sketched below [2] [11].
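Where the reporting leaves gaps, a verification‑aware inventory can track which entries are confirmed. The sketch below is a hypothetical structure (the field names and the `covers_creation` flag are assumptions), populated only with the statute references named in the sources above; every record would still need to be checked against the state's current code.

```python
from dataclasses import dataclass, field

@dataclass
class StateStatuteEntry:
    """One state's entry in a creation-offense inventory.

    Field names are illustrative; the citations below are only those
    named in the reporting and still require statute-level checks.
    """
    state: str
    citations: list[str] = field(default_factory=list)
    covers_creation: bool | None = None  # None = unverified
    notes: str = ""

# Entries drawn from the sources discussed above.
inventory = [
    StateStatuteEntry("California", ["AB 1831"], True,
                      "Creation, distribution, and possession of AI-generated CSAM"),
    StateStatuteEntry("Texas", ["Penal Code §21.165", "§43.26", "§43.261"], None,
                      "Layered obscenity and intent elements affect creation charges"),
    StateStatuteEntry("Washington", [], None,
                      "'Disclosing Fabricated Intimate Images' targets disclosure, not creation"),
    StateStatuteEntry("New Jersey", ["P.L.2025, c.40"], True,
                      "Criminal penalties for producing or disseminating deceptive media"),
]

# Example check: which states' creation coverage is still unverified.
unverified = [e.state for e in inventory if e.covers_creation is None]
print(unverified)  # ['Texas', 'Washington']
```

Keeping the verification flag separate from the citation list makes the patchwork explicit: a state can appear in the inventory without its creation coverage being confirmed.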
7. Competing legal lenses and prosecutorial practicalities
Even when statutes name creation, prosecutions can hinge on mens rea (intent to harm), obscenity standards (the Miller test invoked in Texas), and First Amendment limits. These factors shape whether creation of adult deepfakes is charged, or survives constitutional challenge, so the presence of a statute does not guarantee straightforward criminal liability in every case [5].
Exactly which states criminalize creation (not just distribution) can therefore be confirmed only by reviewing the individual statutes cited in legal trackers and state codes. Clear examples in the reporting include California (AB 1831), Texas (the multiple Penal Code provisions discussed), Washington's fabricated intimate‑images law (a disclosure offense), and New Jersey's P.L.2025, c.40; numerous other states are reported to have amended CSAM/deepfake laws to reach creation but require statute‑level citation checks to be definitive [1] [5] [6] [7] [8] [2].