Which countries explicitly include computer‑generated or AI‑created sexual images of minors in their criminal statutes?

Checked on January 19, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

A small but growing number of jurisdictions have moved to spell out that synthetic child sexual images, including those made by AI, fall within criminal law: notably the United Kingdom's recent Crime and Policing measures and several steps in the United States at both the federal and state levels (including Texas) that treat "digital forgeries" or AI systems intended to produce CSAM as criminal [1] [2] [3] [4]. Many other countries and regional frameworks are actively debating or creating regulatory duties (watermarking, takedowns, platform obligations) but, based on the reporting provided, do not yet have clear, explicit statutory language criminalising AI‑created child sexual images in the same way the UK and certain U.S. laws do [5] [6].

1. United Kingdom — explicit statutory change to target AI image‑generators

The UK government moved to make it a crime to possess, create or distribute AI tools designed to generate sexualised images of children, and to close a gap in existing law by criminalising "digital models or files designed to be used to create CSAM"; penalties were announced for possession and distribution, and a proposed Crime and Policing Bill would create an offence of making, adapting, possessing or supplying a child sexual abuse image‑generator [1] [3] [7]. Domestic agencies such as the Internet Watch Foundation treat synthetic images that appear real as equivalent to real CSAM under UK law, reinforcing the government's intent to prosecute AI‑created depictions [8].

2. United States — a patchwork: new federal language plus state steps (Texas)

At the federal level, recent legislation dubbed the TAKE IT DOWN Act criminalises knowingly publishing intimate depictions of minors as well as "digital forgery", defined to include intimate visual depictions created through AI, and imposes platform takedown duties, marking explicit federal recognition that AI‑made forgeries of minors fall within criminal scope [4]. Separately, at the state level, Texas adopted an AI statute effective January 1, 2026, that specifically prohibits developing or distributing AI systems with the "sole intent" of producing, or aiding the production of, child pornography and sexually explicit deepfakes involving minors, an explicit statutory bar on certain AI tools [5] [4].

3. Other democracies and regional frameworks — law, policy and interpretive gaps

Several jurisdictions and multinational instruments are addressing AI‑generated sexual imagery through platform duties, transparency rules and proposed regulations (for example, the EU AI Act's transparency measures and national platform regulation), but the scholarly and reporting record in the materials provided shows nuance rather than blanket criminalisation: the EU's package and academic reviews demonstrate concern and evolving criminal‑law responses, yet they do not, in the supplied sources, identify a clear list of EU member states that have expressly rewritten criminal statutes to mention "AI‑generated" CSAM by name [5] [6]. Australia's laws criminalise the creation, sharing and possession of sexual images of children and give platforms takedown obligations, yet reporting notes ambiguity about whether every state or territory explicitly names AI‑generated imagery in statutory text rather than relying on interpretation of existing CSAM definitions [9].

4. Why statutes differ: enforcement, technical framing, and political incentives

Differences in statutory language reflect practical and political choices: some governments (the UK, parts of the U.S.) opted to draft explicit offences to avoid the interpretive gaps that prosecutors and law enforcement face when confronting images with no real victim, while others prefer expanding platform duties, reporting rules and civil penalties, which can move faster than amending the criminal code [3] [4] [5]. Advocacy groups and watchdogs press for explicit prohibitions on the grounds that AI images normalise abuse and can be used for grooming or sextortion, an argument that underpinned the UK move and IWF reporting on rising incidents [10] [8]. Industry and civil liberties actors often urge precision to avoid overbroad bans that could chill legitimate research, artistic expression or technical work, a tension visible in debates over the reach of criminal statutes versus regulatory measures [6].

5. Bottom line

Based on the reporting provided, the clearest explicit statutory inclusions of AI‑generated sexual images of minors appear in the UK's recent measures and in specific U.S. laws: the federal TAKE IT DOWN Act's "digital forgery" language and Texas's state AI statute. Other jurisdictions and regional regimes are actively addressing the problem but, in these sources, rely more on regulatory duties or prosecutorial interpretation, or remain at the proposal stage, rather than adopting explicit criminal‑code language naming AI‑generated CSAM [1] [3] [4] [5] [6]. If a definitive, global roster is required, further statutory text searches beyond the provided reporting would be necessary.

Want to dive deeper?
How does the UK Crime and Policing Bill define and penalize AI image‑generators for CSAM?
What is the text of the TAKE IT DOWN Act’s definition of “digital forgery” and how have U.S. courts interpreted it?
Which EU member states have proposed or enacted criminal statutes explicitly mentioning AI‑generated child sexual abuse material?