How have other jurisdictions responded to AI tools producing sexualised deepfakes and what documents did they require platforms to preserve?
Executive summary
Jurisdictions worldwide have moved from aspirational guidance to enforceable duties in response to sexually explicit deepfakes: the EU has written labeling, traceability and takedown duties into binding frameworks; China has created a traceability and labeling regime; and the U.S. has produced both state-level criminal laws and a new federal civil remedy aimed at non‑consensual explicit deepfakes, with contested state measures and litigation following [1] [2] [3] [4]. Across these regimes, regulators have not only ordered removal but have increasingly required platforms to retain metadata, labeling records and notice‑and‑takedown records, although public reporting is uneven about the exact “documents” each jurisdiction requires platforms to preserve [5] [3] [2].
1. EU: mandatory labeling, machine‑readable markers and preservation for transparency and enforcement
European policy has shifted from voluntary codes to statutory obligations: the AI Act and the Digital Services Act together require that AI‑generated and manipulated media be disclosed and identifiable as such, with a Code of Practice pushing machine‑readable, detectable and interoperable labels to help platforms, researchers and authorities detect deepfakes [1] [5]. The regime also imposes duties to cooperate with fact‑checkers and researchers and to share data to improve detection tools, which implies that platforms must keep records and metadata about content provenance, labeling actions and takedown logs, though public summaries describe these obligations in functional terms rather than as a specific checklist of “documents” [2] [6]. The EU’s approach therefore combines removal duties with a traceability and data‑sharing architecture intended to preserve the evidentiary trail needed for enforcement and for independent verification [5] [7].
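To make the “machine‑readable label plus provenance metadata” idea concrete, here is a minimal sketch of the kind of record a platform might retain per synthetic asset. The structure and every field name are illustrative assumptions, not drawn from the AI Act, DSA or Code of Practice, which state the duties functionally rather than as a schema.

```python
# Illustrative sketch only: field names are assumptions, not taken from the
# AI Act, DSA or Code of Practice, which describe obligations functionally.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    content_id: str                # platform-internal identifier for the asset
    generator_tool: str            # the AI system that produced the media
    machine_readable_label: str    # label embedded in the file (e.g. a manifest ID)
    label_applied_at: datetime     # when the disclosure label was attached
    takedown_notices: list[str] = field(default_factory=list)  # notice IDs received
    removed_at: datetime | None = None  # removal timestamp, if taken down

# Example of the administrative trail a regulator could audit.
record = ProvenanceRecord(
    content_id="asset-0001",
    generator_tool="example-image-model",
    machine_readable_label="manifest-abc123",
    label_applied_at=datetime.now(timezone.utc),
)
```

A record along these lines would cover the functional categories the reporting mentions (provenance, labeling actions, takedown history) without presuming any particular statutory format.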
2. China: traceability system and mandatory labeling for synthetic content
Chinese authorities have moved rapidly toward enforceable traceability and labeling standards: the March 2025 Measures for Labeling of AI‑Generated Synthetic Content create a traceability system for synthetic media, with labeling requirements taking effect in September 2025, signaling that platforms must retain generation metadata and provenance records to comply with audits and enforcement [3]. Reporting frames this as a formal, state‑driven requirement that every synthetic asset be traceable to its origin, which by implication forces platforms and service providers to preserve the data needed to demonstrate compliance [3].
3. United States: patchwork of state laws, federal civil remedies and preservation pressure
The U.S. response has been fragmented: several states have enacted criminal laws targeting non‑consensual sexual deepfakes and some state measures require takedown mechanisms, but courts have already struck down at least one California law on Section 230 and constitutional grounds, highlighting legal pushback [8] [4] [9]. At the federal level, the DEFIANCE Act, passed by the Senate in January 2026, creates a federal civil right of action enabling victims to sue creators, distributors and knowingly complicit hosts, a statutory regime that will generate litigation and so press platforms to preserve takedown notices, hosting records and distribution logs as evidence in civil suits [4]. Public reporting indicates platforms are being pressed to retain notice‑and‑takedown records and user content logs, but detailed federal preservation rules mirroring EU traceability mandates are not described in the available sources [10] [4].
4. Common preservation requirements across jurisdictions — metadata, takedown logs and provenance records
Across these different legal architectures a consistent pattern emerges: regulators want the provenance chain preserved, meaning labeling metadata, traceability logs created by content generation tools, notice‑and‑takedown reports, content hosting and removal timestamps, and records of cooperation with researchers or fact‑checkers, so that enforcement, victim remedies and research remain possible [5] [2] [3]. Sources explicitly mention machine‑readable labels, traceability systems and required notice‑and‑takedown infrastructures (including a 48‑hour removal rule cited in multiple reports), which together mean platforms must retain both the technical metadata attached to synthetic media and the administrative documents showing how they responded [10] [3] [5].
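As a concrete illustration of the administrative records described above, the sketch below models a notice‑and‑takedown log entry and checks it against the 48‑hour removal window cited in the reporting. The field names and the class itself are assumptions for illustration, not a schema mandated by any of these laws.

```python
# Illustrative sketch: models the takedown records and the 48-hour removal
# window cited in reporting; field names are assumptions, not statutory text.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # removal deadline cited in multiple reports

@dataclass
class TakedownLogEntry:
    content_id: str
    notice_received_at: datetime   # when the takedown notice arrived
    removed_at: datetime | None    # when the content actually came down

    def deadline(self) -> datetime:
        return self.notice_received_at + REMOVAL_WINDOW

    def met_deadline(self) -> bool:
        """True if the content was removed within the 48-hour window."""
        return self.removed_at is not None and self.removed_at <= self.deadline()

entry = TakedownLogEntry(
    content_id="asset-0001",
    notice_received_at=datetime(2025, 9, 1, 12, 0, tzinfo=timezone.utc),
    removed_at=datetime(2025, 9, 2, 9, 30, tzinfo=timezone.utc),
)
print(entry.met_deadline())  # True: removed within 48 hours of the notice
```

Retaining entries like this, alongside the provenance records sketched earlier, would give a platform both the technical metadata and the administrative paper trail the reporting describes.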
5. Pushback, gaps and the limits of current reporting
Industry resistance and legal challenges temper aggressive regulation: investors and tech executives are funding opposition to stringent state rules, and courts have already voided some measures, illustrating the contested policy terrain in the U.S. [9]. Reporting shows clear obligations (labels, traceability and takedown timings) but does not consistently publish the exact inventory of documents each law requires platforms to preserve; statutory citations and model retention schedules, for example, are absent from these summaries, so precise compliance checklists must be drawn from the primary legislation and implementing guidance of each jurisdiction rather than from the secondary reporting reviewed here [2] [3] [4].