What legal standards and statutes apply to creating AI-generated non-consensual images without distribution in the US and EU?
Executive summary
The legal landscape for creating AI-generated non-consensual images without distributing them is fragmented. In the United States, federal law and many state statutes now criminalize the distribution of non-consensual intimate imagery, or threats to distribute it, and extend child sexual abuse material (CSAM) rules to AI-generated content, but whether mere creation or solicitation is itself a crime varies by statute and jurisdiction [1] [2] [3]. In the European Union, the new AI Act and existing privacy and data-protection regimes create overlapping restraints by mandating transparency, banning certain biometric manipulations, and providing data-rights remedies, yet Member State criminal laws differ on whether producing a non-distributed deepfake is independently punishable [4] [5] [6].
1. U.S. federal baseline: TAKE IT DOWN criminalizes publication and threats, not necessarily private creation
The federal TAKE IT DOWN Act, signed into law in May 2025, makes it a crime to knowingly publish, or threaten to publish, non-consensual intimate imagery (NCII), explicitly including AI-generated deepfakes. The Act carries penalties of up to three years in prison and imposes notice-and-takedown obligations on platforms, but it targets publication and threats rather than every act of private creation without distribution [1] [7] [2]. Legal reporting and advocacy resources emphasize that the Act establishes a “reasonable person” test for when AI-generated imagery is indistinguishable from authentic NCII and creates civil remedies and platform duties, but it does not universally criminalize the mere act of prompting an AI to produce an intimate image that is kept private [1] [8].
2. State laws and child‑protection expansions: creation can be criminal when minors are implicated
Many U.S. states have amended their CSAM statutes to encompass AI-generated or computer-edited sexual imagery of minors, and a large number explicitly criminalize the possession, creation, or distribution of synthetic child sexual abuse material. Some state laws go further, outlawing the creation of images that merely “appear to be” of minors even when no real child was used, so creation alone can be felonious when the output depicts minors [2] [3]. States have also adopted revenge-porn and non-consensual intimate image laws that criminalize sharing or threatening to share adult deepfakes, and several jurisdictions have explicitly updated their definitions to capture AI-assisted production, but states still diverge on whether private, never-shared generation is punishable [3] [9].
3. The EU framework: AI Act, data protection and rights to erasure constrain tools and producers
The EU’s AI Act bans certain AI practices outright, such as biometric categorisation that infers intimate attributes, and requires transparency and fundamental-rights assessments for high-risk identification systems. This regulatory regime can indirectly curb the creation of identity-manipulating deepfakes and imposes obligations on providers, though it does not uniformly criminalize private generation absent use or dissemination [4]. Complementary EU regimes such as the GDPR and national privacy laws give individuals data-protection claims, including rights to erasure, objection, and compensation, where their biometric or personal data are used to create likenesses. Member States are also pursuing criminal rules against NCII, so enforcement can proceed through administrative data-law remedies, criminal prosecution, or both, depending on local law [6] [5].
4. Tensions, gaps and contrasting interpretations: is “asking” illegal?
Journalistic reporting and legal analysis reveal a real tension. Some commentators and outlets note that, practically speaking, it remains lawful in certain jurisdictions to prompt an AI to create an intimate deepfake and keep it private, because many statutes target only publication; other recent national laws expressly criminalize the creation or solicitation of non-consensual intimate images, so liability can attach to the act of generation itself in those places [10] [11]. That divergence creates enforcement uncertainty: federal U.S. law and many state laws focus on distribution and platform obligations [1] [2], whereas a growing number of criminal statutes and national measures, especially where child imagery is involved, treat creation as a standalone offense [3] [11].
5. Practical takeaway and unresolved questions
Creating AI-generated non-consensual imagery without distributing it can escape prosecution in some jurisdictions but be criminal in others, particularly where CSAM rules or explicit creation-or-solicitation offenses apply. Across the U.S. and EU, creators also face civil claims under privacy, data-protection, and tort law, and platform and AI providers face regulatory obligations that can lead to takedowns and damages even absent criminal charges [2] [6] [1]. The reporting and statutes cited here reveal rapidly shifting law and real gaps, most notably the inconsistent treatment of private generation, ongoing constitutional and Section 230 challenges to state measures, and the interplay among criminal, civil, and administrative enforcement. Legal risk therefore turns on the specific jurisdiction, whether minors are implicated, and whether the content is shared or threatened to be shared [7] [10] [1].