Which countries criminalize creation (not just distribution) of nonconsensual deepfake sexual imagery and what penalties apply?

Checked on February 2, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

A growing set of national and subnational laws now expressly targets the creation — not merely the sharing — of non‑consensual sexually explicit deepfakes, though scope and penalties vary widely. Some countries and U.S. states criminalize making such images with prison terms and fines, the EU has adopted a directive that covers production, and other jurisdictions rely on existing image‑based abuse or data‑protection rules to reach creators [1] [2] [3]. Reporting identifies the United States (at federal and state levels), Australia, France, Japan, Singapore and the EU bloc as notable examples, with penalties ranging from fines to multi‑year prison sentences depending on jurisdiction and aggravating factors [4] [5] [3] [6] [2].

1. United States — patchwork of states plus new federal statute

The U.S. presents a fragmented picture: dozens of states have enacted sexual‑deepfake statutes, and several explicitly criminalize the creation of non‑consensual sexual deepfakes (Virginia and California among the earliest examples), while recent federal legislation such as the TAKE IT DOWN Act and other bills expands criminal prohibitions and removal duties at the national level [7] [8] [4] [1]. Reporting notes that many state laws were initially focused on civil remedies, but by 2024–2025 several states — including Texas, New York and Minnesota — had added criminal penalties for violations, and by 2025 federal legislation created new prohibitions and platform takedown obligations. Federal criminal exposure can include custodial sentences (up to three years in some formulations) and statutory fines where offenses are aggravated or involve minors [1] [4] [9] [3].

2. European Union and member states — directive plus national criminal laws

The EU’s directive on violence against women expressly criminalises the non‑consensual production, manipulation or alteration of material that makes it appear someone is engaged in sexual activity, bringing the creation of sexual deepfakes within criminal reach for member states that implement the directive [2]. France is reported to have amended its Penal Code (Article 226‑8‑1) to criminalise non‑consensual sexual deepfakes, with penalties cited at up to two years’ imprisonment and fines of around €60,000; the U.K. likewise has criminal measures and proposals, with possible two‑year terms and unlimited fines under recent legislative changes referenced in reporting [3] [2].

3. Australia and Oceania — explicit criminalisation with stiff sentences

Australia has moved to criminalise the creation and malicious use of sexual deepfakes: its Criminal Code Amendment (Deepfake Sexual Material) Act and related reforms make producing, possessing or distributing non‑consensual intimate deepfakes a criminal offence, with reporting citing penalties of up to six years’ imprisonment in contexts involving coercion, blackmail or harassment [5]. Australia’s approach is notable for tying deepfake production to established offences such as coercion and blackmail, widening prosecutorial tools [5].

4. East and Southeast Asia — Japan, Singapore and beyond

Japan is reported to have criminalised non‑consensual intimate images, whether real or AI‑generated, and to protect personality rights under laws governing private sexual content, with criminal penalties for violators; Singapore’s amended penal provisions likewise target non‑consensual intimate deepfakes and link biometric/data rules to enforcement [6] [5]. These jurisdictions combine image‑based abuse statutes with data/biometric protections to reach creators as well as distributors [5] [6].

5. Caveats, enforcement gaps and the limit of reporting

The landscape remains uneven: many sources emphasize state or statutory changes without uniform language on whether a law targets creation specifically, as opposed to distribution or facilitation, and some reports conflate civil remedies, platform takedowns and criminal penalties [1] [8]. Where a source did not explicitly state that a law criminalises creation (rather than distribution), this analysis does not assert that it does. Available reporting documents clear criminalisation of creation at the state level in the U.S. and in Australia, France, Japan and Singapore, plus EU‑level criminal coverage via the directive, with penalties as cited above [1] [5] [3] [6] [2].

Want to dive deeper?
Which U.S. states specifically criminalize creating sexual deepfakes and what are their statutory penalties?
How does the EU directive on violence against women define and enforce criminalisation of AI‑generated intimate imagery across member states?
What legal remedies and evidentiary standards do victims need to prove when prosecuting creators of non‑consensual sexual deepfakes?