What legal or regulatory actions have been taken against platforms hosting AI impersonations of public figures?

Checked on December 4, 2025
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Federal and state authorities have moved quickly: the bipartisan TAKE IT DOWN Act, signed into law on May 19, 2025, criminalizes non‑consensual intimate-image “deepfakes” and forces platforms to remove them on notice [1] [2]. The FTC has proposed broad rules to ban AI-enabled impersonation and to hold platforms and AI‑service providers liable when they “know or have reason to know” their tools will be used for impersonation [3] [4]. States have enacted a patchwork of criminal and civil laws, Pennsylvania and Washington among them, and dozens of others have targeted deepfakes, voice replicas and disclosure rules [5] [6] [7].

1. Federal criminal law: the TAKE IT DOWN Act created the first large federal takedown duty

Congress enacted the TAKE IT DOWN Act, signed into law in May 2025, which makes it a federal crime to knowingly publish or threaten to publish non‑consensual intimate visual depictions, including AI deepfakes. The statute also requires covered platforms to remove such content when notified, within 48 hours of a valid request; violations can trigger civil fines and criminal penalties, and the law is being framed as the first major federal intervention against deepfake NCII (non‑consensual intimate imagery) [1] [2] [8].

2. Federal agency rulemaking: the FTC is moving from proposals to expanded enforcement tools

The Federal Trade Commission has repeatedly signaled that it will treat AI‑enabled impersonation as an unfair or deceptive practice. Its proposed supplemental rulemaking would (a) prohibit impersonation of individuals and (b) extend liability to firms that provide goods or services, including AI platforms, when they know or have reason to know those tools will be used to impersonate people. That proposal would expand the FTC’s 2024 impersonation rule, which covers government and business impersonation [3] [9] [10].

3. Federal criminal proposals aimed at public officials: Congress has continued drafting targeted statutes

Members of Congress introduced the AI Impersonation Prevention Act of 2025 to amend 18 U.S.C. § 912 to criminalize AI‑based impersonation of federal officials. The bill defines “impersonates” to include false representations “reasonably likely to cause another person to believe the content is authentic” and carves out an exception for labeled satire and parody, signaling lawmakers’ willingness to criminalize certain political or official impersonations [11] [12].

4. State laws and a dense patchwork: criminal penalties, publicity rights, and disclosure requirements

States have been active: dozens adopted AI or deepfake laws in 2025 addressing disclosure, consent, and criminal penalties. Pennsylvania’s 2025 Act 35 (SB 649) made it a crime to create or disseminate deepfakes with fraudulent or injurious intent and gave prosecutors new enforcement tools, while Washington’s law criminalizes the intentional use of “forged digital likenesses” to defraud or harass [5] [7] [6]. Other states have statutes protecting performers, making certain contracts for voice or likeness replicas unenforceable, or requiring AI disclosures in commercial contexts [7] [13].

5. Platforms and civil litigation: private suits over publicity, defamation and service provider liability

Litigation has targeted platforms and AI‑tool providers. Right‑of‑publicity suits and false‑advertising claims have been filed (for example, by voice actors suing text‑to‑speech providers), and AI‑generated defamatory statements have spawned novel defamation suits that test how the “actual malice” standard for public figures applies and where liability should land; outcomes so far are mixed, with few clear precedents [14] [15] [16]. Courts and plaintiffs are still testing whether platforms, model creators, or end users should bear the legal burden [17] [15].

6. Enforcement gaps and competing priorities: free speech, election speech, and preemption fights

Legal interventions are uneven and contested. Scholars warn that criminalizing broad categories of AI speech invites First Amendment challenges, especially for satire and political speech; California’s election‑period deepfake restrictions faced a federal court challenge, and portions were struck down, underscoring the tension between state‑level protections and constitutional limits [18] [8]. Meanwhile, proposals to preempt state laws with a single federal regime have surfaced, setting up a federal‑versus‑state fight over who governs AI speech and platform duties [6] [18].

7. What enforcement looks like in practice: takedowns, civil suits, and prosecutions

In practice, regulators blend remedies: the TAKE IT DOWN Act imposes platform takedown duties for NCII [1]; the FTC is preparing trade‑regulation‑rule prohibitions and vendor liability under consumer‑protection law [3]; and states pursue criminal prosecutions for fraud or harassment when deepfakes cause harm [5]. Civil plaintiffs, meanwhile, bring publicity, defamation and contract claims against creators and platforms when identity or commercial harms occur [14] [16].

8. Limits of current reporting and open questions

Available sources document statutes, proposals and lawsuits, but they do not provide a comprehensive catalog of every enforcement action against specific platforms for individual impersonations. Nor do they report detailed outcomes of prosecutions under the TAKE IT DOWN Act or final judicial rulings resolving whether providers with “reason to know” must be held civilly liable in all contexts; those questions remain under litigation and rulemaking (not found in current reporting) [1] [3].

Bottom line: lawmakers and regulators have shifted from warning to action—passing a federal takedown statute, expanding FTC rulemaking, and empowering states—while courts and civil suits are still defining how existing First Amendment, defamation and publicity doctrines apply to AI impersonation [1] [3] [5].

Want to dive deeper?
Which countries have passed laws specifically banning AI deepfakes of public figures?
What major lawsuits have been filed by public figures over AI-generated impersonations?
How are social media platforms updating terms of service to address AI-created likenesses?
What regulatory guidance have agencies like the FTC, EU, and Ofcom issued on AI impersonation?
How do defamation and publicity-rights laws apply to AI-generated content of politicians and celebrities?