What legal reforms have been proposed to explicitly cover AI-generated sexual content involving minors?
Executive summary
Lawmakers at the federal and state levels have advanced several targeted reforms to cover AI-generated sexual content involving minors, ranging from criminalizing publication and closing evidentiary loopholes to imposing platform takedown duties and enacting new state bans on the creation and distribution of synthetic child sexual abuse material (CSAM) [1] [2] [3]. These proposals reflect coordinated pressure from law enforcement and attorneys general, but they vary in scope, enforceability, and constitutional risk across jurisdictions [4] [5] [6].
1. Federal criminalization and platform duties: the Take It Down Act and removal timelines
A major federal response, commonly referred to as the Take It Down Act, makes it a federal crime to “knowingly publish” intimate visual depictions of minors or non-consenting adults, explicitly covers “digital forgery,” and requires covered platforms to establish, by a statutory deadline (May 19, 2026), notice-and-removal processes that take down offending imagery within 48 hours of a victim’s request [1] [7]. Advocates say this creates immediate operational obligations for platforms and a clear criminal backstop for malicious publishers, while industry and civil liberties observers have warned about rushed takedown mechanics and definitional overreach that could implicate lawful speech [7] [1].
2. Closing prosecutorial gaps: Protecting Our Children in an AI World Act
Congressional text for the Protecting Our Children in an AI World Act of 2025 directly amends federal child-pornography provisions to eliminate certain affirmative defenses and to broaden the statutory definition of sexually explicit conduct to capture imagery produced via AI, aiming to prevent bad actors from escaping prosecution by claiming a depiction is synthetic [2]. That legislative strategy of retooling existing federal statutes rather than creating wholly new offenses responds to law-enforcement testimony that prosecutions have been hampered by statutes written for “real” photographs [4].
3. New bills targeting creation and distribution: ENFORCE and state statutes
Separate federal proposals, such as the ENFORCE Act introduced by Rep. Ann Wagner, explicitly target CSAM “generated by or modified with Artificial Intelligence,” signaling a parallel strategy of new, AI-specific prohibitions and enforcement tools [8]. Meanwhile, states continue to propose or pass laws banning the creation, possession, or distribution of AI-generated child sexual abuse images: Maine’s proposal would align the state with dozens of other states that have enacted such bans, and Wisconsin has recently adopted legislation aimed at AI-generated CSAM [3] [6] [9].
4. Mandated reporting, schools, and prosecutors: plugging procedural holes
Several state bills add procedural mechanisms (such as requiring mandated reporters to report AI-generated CSAM) to ensure that school incidents and local cases trigger investigations and referrals, a response to a spate of school-related controversies in which synthetic imagery was used against students [10] [4]. Prosecutors’ offices and victim-advocacy groups have helped draft state language to try to avoid constitutional pitfalls, but lawmakers in some states have delayed or revised bills over free-speech and overbreadth concerns [6] [3].
5. Enforcement, coordination, and the emerging regulatory posture
State attorneys general are increasingly coordinating investigations and pushing platforms to show that moderation and abuse-prevention tools work in practice, adding enforcement pressure to the legislative push; AG offices expect companies to demonstrate operational safeguards against AI risks to children [5]. Law enforcement agencies have publicly urged Congress to act, describing the difficulty of prosecuting AI-manipulated imagery under statutes designed around photographs, advocacy that has driven many of the federal and state reform proposals [4].
6. International models and technical safeguards in proposals
Policy reviews and advocacy groups point to international frameworks, such as Online Safety Acts abroad, that layer risk assessments, developer-level controls, and deployment limits for high-risk generative AI systems as models for preventing AI misuse against children, suggesting that legislative fixes are being considered alongside technical and regulatory measures rather than as standalone criminal penalties [11]. Reporting and legal analysis indicate the overall trend: the primary reforms proposed to explicitly cover AI-generated sexual content involving minors are a mix of criminal-law amendments, platform takedown mandates, state bans on creation and distribution, mandated-reporting requirements, and calls for technical safeguards [1] [2] [8] [9] [3].