How have courts ruled on First Amendment challenges to state laws criminalizing creation of deepfake sexual images of adults?

Checked on January 31, 2026

Executive summary

Federal and state legislatures have moved aggressively to criminalize nonconsensual sexual deepfakes of adults, but courts have only begun to sort out the constitutional challenges; scholars and practitioners predict sustained First Amendment litigation because these laws impose content-based restrictions on speech and may therefore trigger strict scrutiny [1] [2]. Existing case law is sparse and mixed: courts have upheld narrow prohibitions tied to traditional unprotected categories (obscenity, child sexual abuse material, true threats), yet many scholars argue that deepfakes depicting adults that are neither obscene nor used to threaten or coerce still enjoy robust First Amendment protection [3] [4] [5].

1. Laws on the books, courts in waiting

States including Virginia, California, and New York have amended revenge-porn and related statutes to cover AI-generated sexual images, and Congress passed the TAKE IT DOWN Act addressing nonconsensual intimate images and deepfakes at the federal level; these measures criminalize publication and certain threats and impose platform takedown duties [6] [7] [8]. The statutes have prompted immediate predictions of constitutional challenges because they regulate sexually explicit visual depictions through content-based restrictions, which legal analysts say will likely face strict scrutiny, the most rigorous level of First Amendment review, if litigated [1] [9].

2. What courts have actually ruled so far: limited and contextual

Few published opinions directly resolve First Amendment challenges to state bans on creating adult deepfakes; early litigation has instead tested related questions such as platform liability, takedown obligations, and political deepfake bans. X's challenge to Minnesota's election-focused deepfake law, which alleged censorship of political speech, illustrates how courts confront these content- and context-specific disputes [10]. Prosecutors and plaintiffs have had more success when statutes fit established unprotected categories: child sexual abuse material and obscenity have long been held outside First Amendment protection, and courts have treated statutes targeting those harms more favorably than broad bans on adult sexual expression [3] [11].

3. Scholarly consensus and doctrinal fault lines

Academic commentary, from Cardozo and Fordham law review articles to NYU proceedings, stresses that pornographic deepfakes of adults often remain constitutionally protected unless they fall within an established exception (obscenity, CSAM, true threats) or are framed as nonconsensual disclosure of intimate images tied to privacy or harassment harms; some scholars argue that careful statutory drafting could survive strict scrutiny by targeting the nonconsensual nature of the images and their demonstrable harms rather than their content per se [4] [5] [12]. Conversely, civil liberties groups and some First Amendment scholars warn that broadly worded takedown and criminal statutes risk overbreadth and chilling effects on lawful expression, and they predict that courts will strike down or narrow laws that sweep in journalistic, artistic, or political speech [9] [2].

4. Enforcement practice vs. constitutional doctrine

In practice, prosecutors and victims are already using existing criminal and civil tools (revenge-porn statutes, harassment laws, obscenity statutes, and federal civil remedies) to pursue creators and distributors, and platform regulation under the TAKE IT DOWN Act adds a parallel compliance regime that may reduce online harms without directly resolving the constitutional questions in court [7] [13] [6]. Yet early litigation and reporting show that platforms and creators retain significant defenses: platforms argue that Section 230 and the First Amendment shield them absent evidence of intent to harm, and plaintiffs face evidentiary hurdles in proving the mens rea (often a purpose to inflict specific harms) that many statutes require [13] [1].

5. Bottom line and what to watch for

Courts have not issued a clear, nationwide rule upholding or striking down state criminal bans on the creation of adult sexual deepfakes; decisions will likely turn on statutory text, mens rea requirements, the harms targeted, and whether the material fits an established unprotected category. Expect a patchwork of rulings, iterative judicial narrowing of overbroad laws, and possible Supreme Court review if lower courts split on the strict-scrutiny question [1] [12] [9]. Tracking cases that test narrow, consent-focused statutes against broad content bans, along with litigation over platform obligations under the TAKE IT DOWN Act, will reveal whether courts prioritize harm-reduction frameworks or robust speech protections [7] [8].

Want to dive deeper?
How have courts treated First Amendment challenges to laws banning political deepfakes?
What statutory drafts have courts or scholars identified as likely to survive strict scrutiny when regulating nonconsensual deepfake pornography?
How have platforms fared in litigation over takedown duties and Section 230 defenses under the TAKE IT DOWN Act?