Which U.S. state laws most directly criminalize or civilly penalize deceptive synthetic media used in consumer advertising?

Checked on January 3, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

New York is the clearest example of a U.S. state law that directly regulates synthetic media in consumer advertising by requiring disclosure when AI-generated “synthetic performers” appear in ads, a rule set to take effect in June 2026 [1] [2] [3] [4]. Beyond that explicit statute, most states rely on general consumer‑protection and false‑advertising laws enforced by state attorneys general to civilly (and in some jurisdictions criminally) penalize deceptive ads made with synthetic media [5] [6] [7].

1. New York: the first state to mandate disclosure of synthetic performers

New York’s legislative package (S.8420-A/A.8887-B) amends the General Business Law to require advertisers to disclose when a commercial contains a synthetic performer, defined as a digital asset created or modified by AI to give the impression of a human performance. Governor Hochul signed the measure, which takes effect in June 2026 [3] [4] [2]. Multiple legal commentators and law firms characterize it as a first-in-the-nation transparency mandate aimed at preventing the deceptive substitution of real people’s likenesses and at aligning state practice with industry and union transparency goals [1] [8] [9].

2. The broad state toolbox: UDAP/CPA statutes and state AG enforcement

Outside New York’s narrow disclosure rule, the primary mechanism for policing deceptive synthetic ads is the long‑standing patchwork of state unfair and deceptive acts and practices (UDAP) statutes and consumer protection acts (CPAs). These laws give state attorneys general broad civil enforcement authority and, in some states, impose criminal penalties for knowingly false or misleading advertising [5] [6] [7]. Legal practitioners routinely point to these general statutes as the likeliest vehicle for civil suits and administrative actions against advertisers who use AI to mislead consumers about products, endorsements, or human involvement in ads [7] [5].

3. A separate track: state laws aimed at political “deepfakes” versus consumer advertising

Several states have adopted or considered statutes specifically targeting AI‑generated deceptive content in political campaigns—laws that commonly permit candidates to seek injunctive relief or damages when deceptively depicted by synthetic media—but those statutes are focused on electoral speech and do not necessarily cover ordinary consumer advertising [10]. Thus, while the “deepfake” statutory trend informs policy, it does not substitute for explicit consumer‑ad rules except where state law language is broader [10].

4. Federal pressure and the risk of preemption or a patchwork freeze

The surge of state‑level activity prompted an immediate federal response: reporting shows that a White House Executive Order was issued in December 2025 seeking to pause or harmonize conflicting state AI laws pending a federal standard, and it directed federal agencies to review state measures that might be preempted or “onerous” [1] [8] [9]. That federal posture creates a political and legal tension—New York’s disclosure requirement may be the leading state test case but could face challenge or coordination pressure if the administration pursues uniform federal regulations [1] [8].

5. What this means in practice: enforcement pathways and gray areas

Practically, advertisers should anticipate two parallel enforcement pathways. The first is explicit state statutes like New York’s, which impose disclosure duties for synthetic performers [2]. The second is state UDAP/CPA claims, alongside FTC intervention, against materially misleading ads made with synthetic media; state attorneys general are positioned as the primary civil enforcers and can occasionally bring criminal actions where state statutes allow [5] [6] [7]. Reporting and legal analyses indicate that until more states enact tailored synthetic‑media rules, this decentralized enforcement landscape will rely heavily on general deceptive‑practice doctrines and the priorities of individual state AGs [5] [11].

Limitations of the record: the reporting assembled here identifies New York’s statute as the explicit consumer‑ad synthetic‑performer law and documents the role of UDAP statutes and state AGs. It does not provide a comprehensive list of every state bill or ordinance on synthetic media, nor an exhaustive catalogue of criminal penalties across all states; those specifics were not contained in the cited sources [3] [5] [6].

Want to dive deeper?
Which other states have introduced or passed laws specifically about synthetic performers or deepfakes in non‑political advertising since 2024?
How have state attorneys general used UDAP statutes to bring cases involving AI‑driven deceptive advertising in the past three years?
What federal proposals or FTC actions are most likely to preempt or complement state synthetic‑media disclosure laws?