How do U.S. state laws differ on criminalizing creation or possession of AI-generated explicit images of fictional characters?
Executive summary
U.S. law is a patchwork: federal law already treats AI-generated sexual depictions of minors as child sexual abuse material and bars their creation and distribution, while states vary widely in whether and how they criminalize AI-generated explicit images of fictional characters, especially when those characters depict adults or resemble real people [1] [2]. Some states have extended existing revenge‑porn or child‑exploitation statutes to cover synthetic imagery explicitly and to impose criminal and civil liability, but approaches differ in scope, mens rea, remedies, and enforcement timelines [3] [4] [5].
1. Federal baseline: AI‑generated images of minors are already illegal and platforms must act
Congress’s recent TAKE IT DOWN Act makes it a federal crime to knowingly publish intimate images of minors or nonconsenting adults, defines “digital forgery” to cover AI‑created intimate depictions, and requires covered platforms to implement notice‑and‑removal procedures by May 19, 2026 [2] [6]. Separately, the federal child pornography definition in 18 U.S.C. § 2256 already reaches computer‑generated depictions of minors, meaning creation, possession, or distribution can trigger federal prosecution [1] [2].
2. States close loopholes: many expand CSAM laws to cover synthetic material
A number of states have amended child‑exploitation statutes to ensure AI‑generated CSAM is prosecutable even when no real child was used, closing what advocates and prosecutors saw as a loophole in earlier wording [5] [7]. Reporting cites Arizona’s HB 2678 as one example, and several other states have likewise explicitly criminalized producing or sharing AI‑generated CSAM, reflecting a common legislative priority: treating synthetic depictions of minors the same as traditional CSAM [5] [7].
3. Divergence on fictional adults and revenge‑porn analogues
States diverge most sharply over depictions of adults and fictional characters: many analyses treat pornographic depictions of fictional adults as speech protected by the First Amendment, but several states have extended revenge‑porn statutes to cover AI‑generated sexually explicit images when the content is presented as real, is created with intent to harass or cause emotional distress, or imitates a real person [8] [4]. California’s AB 621, for instance, broadens the definition of “digitized sexually explicit material,” provides that minors cannot consent to its creation or distribution, raises statutory damages to as much as $250,000 for malicious violations, and strengthens civil enforcement tools while limiting certain defenses [3].
4. Intent, mens rea, and criminal design vary by state
State statutes differ in whether they require proof of harmful intent: many criminalize creation or distribution only when done to harass, intimidate, cause financial loss, or inflict emotional distress, while others adopt strict‑liability approaches for CSAM categories [4] [7]. That split matters in practice: a user who generates an explicit image of a fictional adult character may avoid criminal liability in intent‑based jurisdictions, while the same act could be prosecuted in states that treat certain classes of synthetic imagery as per se unlawful [4] [5].
5. Enforcement architecture, remedies, and platform duties are inconsistent
Beyond criminal penalties, states take different routes: some add civil remedies, statutory damages, and affirmative prosecutorial authority, while others impose reporting or content‑removal duties on platforms [3] [9]. The federal TAKE IT DOWN Act mandates platform notice‑and‑removal mechanisms but does not preempt state law, so platforms face overlapping federal deadlines and a mosaic of state reporting, takedown, and liability regimes [6] [5].
6. The practical picture: patchwork law, fast‑moving policy, and open questions
Taken together, the sources show a fast‑evolving, nonuniform legal landscape: the federal government and most states aggressively target AI‑generated sexual imagery of minors, many states have updated revenge‑porn or harassment statutes to capture malicious synthetic deepfakes of adults, and others pursue hybrid civil‑criminal regimes with varied intent and remedy thresholds. Yet no single, uniform standard governs fictional adult characters across jurisdictions [2] [7] [3]. Reporting and trackers (e.g., Orrick’s state AI law tracker) indicate ongoing legislative activity; gaps remain in cross‑state comparability, and First Amendment challenges may ultimately be resolved by the courts [10] [11].