What statutes could be used to bring criminal charges over AI‑generated sexual deepfakes in California?

Checked on January 29, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

California now has a patchwork of criminal statutes that prosecutors can deploy against creators and distributors of AI‑generated sexually explicit deepfakes. The marquee state criminal statute is SB 926; AB 1831 and expanded Penal Code sections cover AI‑generated child sexual abuse material; existing Penal Code fraud and impersonation provisions can be applied in some cases; and federal law plus platform‑reporting mandates supplement state enforcement, though key constitutional and implementation questions remain [1] [2] [3] [4].

1. SB 926 — the primary state criminal weapon against nonconsensual sexual deepfakes

Senate Bill 926 creates a crime aimed specifically at AI‑generated sexually explicit deepfakes. It expands traditional nonconsensual‑pornography offenses to cover “photo realistic,” computer‑generated images and other pictorial representations that a reasonable person would believe are authentic, where distribution occurs with knowledge, or reckless disregard, that it will cause serious emotional distress [5] [1] [6].

2. AB 1831 and expanded child‑pornography Penal Code sections for AI‑generated CSAM

For deepfakes depicting minors, Assembly Bill 1831 expressly criminalizes the creation, distribution, and possession of AI‑generated child sexual abuse material. It does so by amending and extending the Penal Code sections governing CSAM (Pen. Code §§ 311, 311.2, 311.3, 311.4, 311.11, 311.12, and related provisions), placing AI‑made depictions of minors squarely within existing child‑pornography offenses [2] [3] [7].

3. Existing Penal Code provisions prosecutors can repurpose: revenge porn, disorderly conduct, impersonation, fraud

California’s existing revenge‑porn framework and related offenses remain in play: Penal Code § 647(j) has been invoked in revenge‑porn and related prosecutions to criminalize sharing explicit images without consent, and SB 926 built onto these structures [8] [5]. The Attorney General’s legal advisory further flags the use of Penal Code fraud and false‑impersonation statutes (e.g., §§ 529, 530) where AI is used to impersonate a person to obtain money, property, or other benefits [3].

4. Civil remedies, private causes of action, and statutory damages that buttress criminal enforcement

California’s AB 602 provides a private right of action against persons who create and intentionally disclose sexually explicit material when they knew or reasonably should have known the depicted individual did not consent. It authorizes statutory damages, punitive damages, injunctive relief, and attorneys’ fees: a civil complement that shapes prosecutorial strategy and victim remedies, though it is not itself a criminal statute [9] [10].

5. Federal overlay and platform obligations that affect criminal enforcement and removal

Federal action complements state statutes: the TAKE IT DOWN Act (S. 146) and other federal measures make the distribution of nonconsensual intimate images (authentic or AI‑generated) a federal offense, while California laws such as SB 981 and AB 2655 impose platform reporting and content‑control obligations that facilitate complaints, takedowns, and evidence collection for criminal cases [4] [11] [3].

6. Practical limits, constitutional questions, and enforcement realities

Even with these tools, enforcement faces real limits. Courts have enjoined parts of California’s deepfake election restriction on First Amendment grounds, signaling constitutional risk for overbroad restrictions; definitional ambiguities (what counts as “photo realistic,” or when a defendant “reasonably should have known”) will drive litigation and may narrow prosecutorial reach; and the Attorney General and district attorneys’ offices must coordinate technical evidence gathering and platform cooperation to make charges stick [4] [9] [3]. Advocacy groups and performers’ rights coalitions pushed for these laws, while platform compliance requirements reflect both consumer‑protection aims and industry pressure to avoid heavy regulatory burdens [6] [12].

California’s legal landscape therefore gives prosecutors several criminal statutes to bring charges over AI‑generated sexual deepfakes — SB 926 for adult nonconsensual deepfakes, AB 1831 and expanded Penal Code CSAM provisions for AI child sexual content, traditional Penal Code fraud/impersonation provisions for deceptive uses, and federal statutes as a backstop — but the precise reach of those statutes will be shaped by forthcoming prosecutions, platform cooperation, and constitutional challenges [1] [2] [3] [4].

Want to dive deeper?
How do prosecutors prove knowledge or recklessness under SB 926 in AI deepfake cases?
What technical forensic methods are used to attribute and authenticate AI‑generated deepfakes for court evidence?
How have courts ruled on First Amendment challenges to state deepfake and AI content laws?