Which states require intent to harm for deepfake convictions versus those that impose strict liability, and how does that affect civil recovery?

Checked on January 3, 2026

Executive summary

The U.S. state-by-state approach to deepfakes is fragmented: some states and pending bills condition criminal liability on proof that the creator or distributor intended to cause harm, while a smaller set of statutes criminalize publication of certain deepfake categories regardless of specific malicious intent, a functionally strict-liability approach [1] [2]. That split matters: intent-based rules raise the bar for criminal convictions and often complicate civil claims, whereas strict-liability schemes make civil recovery more straightforward but raise First Amendment and enforcement questions [3] [4].

1. How the patchwork divides: intent-based statutes versus strict criminalization

Most state laws target harmful uses of synthetic media (election interference, nonconsensual sexual imagery, fraud) and anchor liability in the actor’s intent to deceive, injure, defraud, or influence an outcome, which aligns criminal and civil standards in many jurisdictions [3] [4]. By contrast, a minority of laws criminalize publication of intimate deepfakes in every instance or treat certain categories as per se illegal, effectively imposing liability without proof of individualized intent. TechPolicy.Press identifies Washington (HB 1999) and Iowa (HF 2440) among the states that have criminalized publication “in every instance,” though statutes and later amendments sometimes layer intent elements back in [1].

2. Representative intent-based laws and what “intent” looks like in practice

Several states and federal measures explicitly fold intent into the offense. Texas’s SB 751 criminalizes fabricated deceptive political videos made “with intent to injure a candidate or influence an election,” and federal measures such as the DEEPFAKES Accountability Act and the TAKE IT DOWN Act, along with state laws such as Pennsylvania’s 2025 Act 35 and Washington’s HB 1205 (as described by Crowell & Moring), criminalize making or disseminating forged likenesses only when done with fraudulent, injurious, or harassing intent [5] [6] [2] [7]. California’s AB 602 and similar statutes require that the creator or discloser knew, or reasonably should have known, that the subject did not consent, tying civil remedies to a culpable mental state [8].

3. Where strict liability or broad prohibitions appear and the tensions they raise

Some laws, particularly earlier or narrowly targeted statutes, treat certain deepfakes (especially those involving minors) as categorically illegal, and TechPolicy.Press notes that a minority of states have criminalized publication of intimate images “in every instance” [1]. Those regimes simplify enforcement and civil recovery because plaintiffs and prosecutors need not prove a subjective intent element, but they also raise constitutional and prosecutorial concerns and often prompt carve-outs for satire, newsworthiness, or platform intermediaries [1] [8].

4. How the intent standard changes civil recovery prospects

When statutes and common-law causes of action require intent to harm, victims face higher evidentiary burdens in both criminal prosecutions and parallel civil suits: proving that a defendant intended reputational, financial, or emotional damage often requires digital forensics, communications records, or admissions, and anonymity and cross-border hosting complicate that proof [3] [9]. Where states provide strict or statutory liability, plaintiffs can pursue damages more directly. California’s AB 602, for instance, creates a private cause of action with tiered statutory damages, punitive damages, and fee shifting, making civil recovery more predictable, and sometimes lucrative, even absent a criminal conviction [8] [4].

5. Practical consequences and enforcement limits

Fragmentation means outcomes turn on the jurisdiction, how the offense is framed (election interference, nonconsensual pornography, fraud), and the remedies available: criminal convictions under intent-based statutes are harder to obtain but can carry prison terms and bolster parallel civil damages claims, while strict-liability statutes speed civil relief but invite challenges over scope and speech protections [4] [2]. Across the board, detection, attribution, and online anonymity remain practical barriers to both criminal enforcement and civil recovery, as victims may be unable to identify or serve remote or anonymous perpetrators even where the law is favorable [9].

6. Bottom line: law in motion, remedies uneven

The landscape is unsettled and evolving. Many states favor intent-based prohibitions that preserve defenses for satire and error but slow remedies, while a smaller group imposes categorical bans that ease civil recovery at the cost of constitutional and enforcement tradeoffs; federal bills and recent enactments add another overlay but have not produced a uniform national standard [1] [6] [7]. Reporting and legal analyses underscore that a victim’s ability to secure damages depends less on any single national rule than on which state’s statute applies, the nature of the deepfake, and whether the plaintiff can prove intent or rely on a statutory strict-liability scheme [3] [8] [4].

Want to dive deeper?
Which states provide statutory damages and attorney’s fees for victims of nonconsensual deepfakes, and how large are those awards?
How do courts balance First Amendment defenses (newsworthiness, satire) against strict deepfake prohibitions in different states?
What technical and investigative methods do plaintiffs use to establish authorship and intent in deepfake litigation?