How have courts ruled on liability for creators of AI sexual content (cases 2020-2025)?
Executive summary
Between 2020 and 2025, courts and legislators split liability into two tracks: civil copyright and tort claims against AI developers and platform operators, and criminal or regulatory actions targeting AI-generated sexual content, especially child sexual abuse material (CSAM). Judges have issued mixed rulings on when AI training or outputs create liability, while legislatures have criminalized specific harms [1] [2]. The dominant pattern is legal uncertainty: some federal rulings have allowed copyright and related claims to proceed against developers, while new statutes and prosecutions have made the creation, distribution, or hosting of AI-generated CSAM explicitly punishable [1] [2] [3].
1. How courts treated copyright and “training” liability — civil claims that can underlie sexual-content suits
Federal courts in 2025 divided sharply on whether using copyrighted material to train models is protected fair use, and those disagreements shape liability exposure for creators of AI sexual content that relies on scraped images or identity-based prompts. In Thomson Reuters v. ROSS Intelligence, the Delaware court rejected the fair-use defense raised for a commercial legal-research product and granted summary judgment against it, signaling that commercial, non-transformative uses of copyrighted databases may not be sheltered [4] [5]. By contrast, June 2025 rulings in Kadrey v. Meta and Bartz v. Anthropic treated training as likely fair use while emphasizing downstream exposure for businesses that exploit model outputs; those courts distinguished training from generation but left open liability for outputs used in commerce [6]. These mixed outcomes mean plaintiffs alleging harm from AI sexual imagery can sometimes survive the motion-to-dismiss stage when they plausibly allege that specific copyrighted works were used or that outputs closely mirror protected expression [1].
2. Claims based on likeness, nonconsensual intimate imagery, and “promoting infringement”
Courts have permitted some copyright-based and related claims to move forward when plaintiffs plausibly allege that a model reproduces or imitates identifiable artists’ works or that a system was designed to facilitate infringement. In Andersen v. Stability AI, the court held that allegations that prompts produced images “similar to plaintiffs’ artistic works” were sufficient to proceed, and that a model could plausibly be alleged to “promote infringement” [1]. Other courts have dismissed overbroad theories: Kadrey rejected the argument that a model’s weights are themselves derivative works, showing that judges push back on novel, sweeping liability theories while allowing narrower, fact-specific claims to continue [1] [6].
3. Criminal and regulatory focus on AI-generated sexual content, especially CSAM
Separate from copyright law, legislatures and prosecutors have moved aggressively against AI-generated sexual imagery involving minors. Congress and the states enacted laws in 2024–2025 criminalizing AI-generated or computer-edited CSAM, and federal statutes such as the TAKE IT DOWN Act criminalize knowingly publishing intimate depictions of minors or non-consenting adults, as well as certain “digital forgeries” intended to cause harm, with additional platform removal duties slated to begin in 2026 [2] [3]. Advocates and enforcement bodies report a surge in AI-generated CSAM reports to NCMEC, and numerous state criminal statutes now target such content, making criminal liability for creators and distributors a clear and growing legal risk even where civil copyright strategies fail [3] [2].
4. Practical effect: layered, unsettled liability that depends on cause of action
The current legal landscape is layered and context-dependent. Copyright suits can succeed when plaintiffs tie outputs back to specific training uses or close copies, as courts have permitted in some cases, while other courts have favored defendants on fair-use or authorship grounds [1] [4]. At the same time, legislatures have created bright-line criminal prohibitions on AI-generated CSAM and enacted statutes requiring platforms to act quickly on nonconsensual intimate imagery, shifting immediate enforcement pressure away from purely civil copyright remedies toward criminal and regulatory frameworks [2] [3]. Reporting and case trackers show that many active litigations remain unresolved, and appellate review could realign doctrines on fair use, authorship, and downstream liability [7] [8].
5. What this means for victims, creators, and platforms
Victims seeking redress should expect multiple legal avenues, including copyright, privacy and tort claims, and statutory remedies for nonconsensual images, but outcomes hinge on narrow factual showings about training data, identifiability, and harm; criminal statutes now provide more definitive enforcement tools for AI-generated CSAM [1] [2] [3]. The competing agendas are clear: content owners push courts to expand remedies and impose licensing duties on AI firms [9] [10], while tech defenders stress innovation and warn against overbroad liability. Lawmakers have begun to cut through that debate by criminalizing particularly harmful AI sexual imagery even as civil law continues to evolve [9] [2].