Was George Will recently a victim of AI slop?

Checked on January 5, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Yes. From late December 2025 into early January 2026, a string of videos and posts used AI-generated scripts and imagery to put fabricated commentary in George F. Will’s mouth, and independent fact-checking found the clips to be inauthentic and untraceable to Will’s published columns or social posts [1]. The case fits a wider pattern of “AI slop” (careless, misleading, or malicious AI-generated content proliferating across social platforms) rather than any journalistic lapse by Will himself [1] [2] [3].

1. What happened: fake Will videos and misattributed commentary

Multiple YouTube channels and social accounts circulated short videos purporting to show George Will offering trenchant commentary on Donald Trump and the Supreme Court. Snopes’ investigation found that the scripts bore hallmarks of AI generation, including overdramatic phrasing, and that Will had posted or broadcast no such remarks between late December 2025 and January 2, 2026, based on his social accounts and archive records [1].

2. How researchers and platforms identified the fakery

The signals of inauthenticity included distorted or implausible vocal and visual synthesis, as well as metadata clues and stylistic markers such as thumbnails built from Google AI-generated images and scripts that matched common AI tropes; together these prompted fact-checkers to label the clips deepfakes or AI-generated fabrications [1].
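For illustration only, the sketch below shows the general kind of automated metadata check a researcher might run on a suspect thumbnail before turning to stylistic and archival evidence. The generator strings and fields it inspects are assumptions made for demonstration, not the specific markers cited in the reporting.

```python
# Illustrative sketch: scan an image file's metadata for traces left by
# generative-image tools. The AI_GENERATOR_HINTS list is an assumed,
# non-exhaustive set of strings, not a definitive detection method.
from PIL import Image, ExifTags

AI_GENERATOR_HINTS = ["midjourney", "stable diffusion", "dall-e", "imagen", "gemini"]

def metadata_hints(path: str) -> list[str]:
    """Return metadata fields that mention a known AI image generator."""
    hits = []
    with Image.open(path) as img:
        # EXIF tags (JPEG/TIFF): the Software field sometimes names the tool.
        exif = img.getexif()
        for tag_id, value in exif.items():
            name = ExifTags.TAGS.get(tag_id, str(tag_id))
            if any(h in str(value).lower() for h in AI_GENERATOR_HINTS):
                hits.append(f"EXIF {name}: {value}")
        # PNG text chunks: some generators embed prompts or parameters here.
        for key, value in img.info.items():
            if any(h in str(value).lower() for h in AI_GENERATOR_HINTS):
                hits.append(f"{key}: {str(value)[:80]}")
    return hits

if __name__ == "__main__":
    import sys
    for line in metadata_hints(sys.argv[1]):
        print(line)
```

In practice such checks are weak signals at best, since platforms routinely strip metadata on upload, which is why fact-checkers combine them with the stylistic and archival evidence described above.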

3. Why this is part of a larger problem, not a one-off prank

The Will deepfakes arrived amid a wave of platform-scale AI misuse: recent incidents with image- and text-generating AIs — notably controversies around X’s Grok producing sexualized images and other harmful outputs — have shown how generative tools can be repurposed at scale to create nonconsensual or deceptive content [2] [3] [4]. These episodes illustrate systemic gaps in safeguards and moderation rather than isolated misjudgments by individual creators.

4. The actors and incentives behind circulation

Snopes identified several channels that repeatedly posted inauthentic Will content, with names like Capitol Transparency Watch, George Will Analysis, TwoNation News, Mind to George, and Voices of Freedom, suggesting a networked distribution pattern that rewards virality and political engagement even when accuracy is absent [1]. Platforms and creators can gain views, ad revenue, or political influence from sensationalized AI content, creating an implicit incentive to push “AI slop.”

5. Alternative viewpoints and legitimate uses of AI in commentary

Not all AI-driven visualizations or productions are inherently deceptive; outlets and creators use generative tools to illustrate commentary or distill complex arguments for audiences, and some commentators explicitly experiment with AI-assisted formats [5]. Moreover, conversations in industry and academia acknowledge legitimate uses while calling for clearer accountability and technical safeguards [6].

6. Legal, ethical, and practical fallout

The Will case echoes prior legal and ethical confrontations over AI impersonation, from the George Carlin estate’s suit over an AI-generated comedy special to regulatory scrutiny of platforms that host harmful AI outputs, and it has intensified calls for platform liability, user accountability, and better detection tools [7] [6] [4]. Fact-checkers and archives remain crucial stopgaps, but they respond after content circulates rather than prevent the initial damage [1].

7. Bottom line and limits of reporting

The available reporting supports a firm conclusion that George Will was targeted by AI-generated, misattributed videos and did not produce the disputed commentary; Snopes’ compilation, cross-referenced against Will’s own postings and archival records, underpins that finding [1]. This analysis cannot establish the identities or motives of every account that created or amplified the clips, nor can it quantify their full reach without platform data, which goes beyond the cited sources [1] [3].

Want to dive deeper?
How do fact-checkers detect AI-generated deepfakes and what technical markers do they rely on?
What legal remedies have public figures used successfully against AI-generated impersonations?
How are platforms changing policies or tools to prevent large-scale misuse of image- and video-generating AI?