How have U.S. state attorneys general historically handled online non‑consensual sexual imagery cases and what remedies did they obtain?

Checked on February 3, 2026

Executive summary

State attorneys general across the United States have historically combined criminal prosecution, civil enforcement, and regulatory pressure to combat non‑consensual sexual imagery, obtaining injunctions, damages, settlements, and policy changes while also pressing legislatures to update laws for new technologies [1][2]. In recent years those offices have escalated coordinated investigations and public enforcement actions, especially in response to AI‑generated images and child sexual abuse material, while legal limits such as the First Amendment and Section 230 shape which remedies are available and against whom they can be pursued [3][4].

1. Legal frameworks AGs rely on: state statutes and private causes of action

State attorneys general typically act within a patchwork of state criminal statutes that penalize dissemination of intimate images and state civil laws that create private rights of action and statutory damages for victims. By 2025, most states had enacted revenge‑porn or related statutes that allow victims to seek injunctive relief and monetary recovery, and some states authorize the AG to bring civil suits on the public’s behalf [1][5]. Court rulings have tested those laws’ constitutionality, with several state high courts applying intermediate or strict scrutiny and upholding the statutes as narrowly tailored to the state’s interest in preventing sexual exploitation [2][4].

2. Investigations, public letters and administrative pressure as enforcement tools

Beyond criminal prosecutions, AGs have used public investigations, demands for remedial steps, and coordinated letters to force platform or developer action. Recent examples include a bipartisan coalition of more than 35 AGs asking xAI to tighten controls after its chatbot generated sexualized images, and a formal probe by California’s AG into Grok’s production of nonconsensual intimate imagery, tactics designed to prod companies to change policies before formal charges are filed [6][7][3]. These public interventions serve both enforcement and signaling functions: they demand tangible safeguards such as logging, auditability, and vendor oversight, and they put reputational pressure on platforms and app distributors to act quickly [3].

3. Remedies actually obtained: injunctions, settlements, criminal convictions and statutory damages

Historically, remedies secured by AGs and victims have ranged from criminal penalties (fines and imprisonment under state statutes) to civil monetary awards, statutory or liquidated damages, attorneys’ fees, and court orders to remove or block images. Many state statutes permit injunctive relief plus either a fixed statutory sum or actual damages, with some specifying liquidated amounts or caps, and federal law since 2022 has added a civil cause of action allowing compensatory and punitive damages as well as attorney fees [1][8][9][10]. AGs have also obtained voluntary commitments and platform policy changes through investigations and coordinated demands rather than litigated judgments, a common outcome when regulators prioritize swift removal and systemic fixes [3].

4. Limits and legal challenges: First Amendment and intermediary liability

State enforcement faces constitutional and statutory constraints. Several courts have scrutinized revenge‑porn laws under First Amendment doctrines, applying strict or intermediate scrutiny with varying results, though many have ultimately rejected broad First Amendment challenges and upheld these statutes [4]. Additionally, federal doctrines like Section 230 limit civil claims against platforms for third‑party content, so AG strategies often focus on platform conduct (moderation policies, design choices), target the actors who create content, or push for statutory updates to reach AI‑enabled harms [4][3].

5. The AI inflection point and shifting remedies

The rapid rise of generative AI has shifted AG priorities. Offices now coordinate investigations into model developers and app distributors, demand durable technical safeguards, and press for law updates to encompass deepfakes and AI‑generated CSAM, seeking assurances, audits, and, when warranted, civil enforcement or referrals for criminal violations. Observers note that AG enforcement increasingly targets whether companies acted responsibly once risks were known, not merely whether policies existed on paper [3][7]. The trend reflects an implicit regulatory agenda: shaping industry standards through high‑visibility probes and a mix of enforcement carrots and sticks.

Conclusion: Over the last decade, state attorneys general have blended criminal prosecution, civil suits, regulatory threats, and public pressure to secure removals, injunctions, financial remedies, and policy changes in non‑consensual imagery cases, and they are now extending those tools to AI‑created harms while continuing to navigate the First Amendment limits and Section 230 barriers that constrain whom they can hold responsible and how [1][3][4].

Want to dive deeper?
How have courts ruled on First Amendment challenges to state revenge‑porn statutes since 2018?
What remedies have victims of AI‑generated nonconsensual intimate images successfully obtained in civil suits?
How do state attorney general investigations into tech companies typically translate into enforceable changes or settlements?