How have satire, deepfakes, and fabricated screenshots affected public health rumors about political figures in recent U.S. elections?

Checked on February 5, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Satire, AI deepfakes, and fabricated screenshots have reshaped how public-health rumors about political figures circulate in recent U.S. elections: they make false or misleading health claims more vivid, faster to spread, and harder to debunk, while also creating a “liar’s dividend” that lets actors dismiss authentic evidence as fake [1] [2]. Yet the empirical footprint is mixed: AI-enabled fakes drove high-profile scares and regulatory responses, even as much political misinformation remained “cheapfake” editing or satire rather than sophisticated generative AI [3] [4].

1. How fakes make health rumors feel real — and why that matters

Deepfakes and fabricated images convert abstract allegations about a politician’s health into vivid audiovisual “evidence,” and that perceptual realism exploits cognitive biases, leading viewers to accept fabricated medical or disability claims more readily than text-based rumors [1] [5]. Researchers and commentators warned that AI tools capable of producing convincing fakes in seconds would magnify this dynamic in 2024, raising the risk that claims about a candidate’s fitness, medication, or cognitive state would be believed simply because the media “looked” authentic [1] [6].

2. Satire’s gray zone: harmless parody or plausible misinformation?

Satirical deepfakes and images are often intended as lampoons and enjoy legal protection as parody, but platforms and courts struggle to distinguish satire from deception at scale. That legal and cultural gray zone lets satirical content be weaponized or misread as factual, amplifying health rumors about public figures even when creators claim parody [7] [2]. Platform policy shifts and moderation rollbacks have further blurred how satire, parody, and deceptive deepfakes are policed online [1] [8].

3. Fabricated screenshots and cheapfakes remain the more frequent vector

Systematic tracking found that many election-era health-related smears still relied on “cheapfakes” (deceptively edited clips and fabricated screenshots) rather than fully AI-generated deepfakes. The technical label “deepfake” therefore overstates the prevalence of generative-AI creation; the real harm comes from stripped context and invented text and images that spread rapidly [4] [9]. Detection and debunking resources have caught many such items, but speed and virality often outpace corrections [2].

4. Real incidents: public-health themes that surfaced in 2024-era misinformation

Notable episodes tied to health narratives included AI-generated imagery in political attack ads that placed candidates alongside public-health figures (for example, AI images showing Donald Trump with Anthony Fauci) and an AI-voiced robocall that impersonated President Biden to suppress voting. Both episodes show how health-adjacent figures and themes can be folded into election disinformation campaigns [7] [9] [3]. Such incidents spurred enforcement actions and investigations, illustrating both immediate reputational damage and legal consequences [3] [7].

5. The broader political effect: eroding trust and enabling denials

Beyond individual lies, experts describe a structural harm: the proliferation of believable fakes increases public cynicism about media and institutions and empowers bad actors to invoke the “liar’s dividend” — dismissing genuine evidence of health problems as fabricated — thereby complicating voters’ ability to assess a candidate’s fitness for office [2] [6]. This systemic erosion of trust is the chief threat many analysts flag, arguably more consequential than any single viral fake [6] [2].

6. Responses, limits and open questions

States and campaigns responded with voter-education drives, new laws criminalizing malicious deepfakes, and platform policy changes, while researchers stress that the problem is as much social and institutional as technological [1]. Post-2024 reviews found that, despite high anxiety, AI deepfakes did not on their own overwhelm the cycle; much misinformation still used low-tech techniques, leaving open questions about scale, legislative efficacy, and the balance with free speech [4] [3]. Reporting and academic studies provide snapshots, but gaps remain about long-term effects on voters’ health perceptions and whether legal remedies will deter malicious creators [5] [7].

Want to dive deeper?
How have state laws limiting political deepfakes changed since 2023 and how enforceable are they?
What methods do researchers use to distinguish AI-generated deepfakes from cheapfakes and fabricated screenshots?
How did public health institutions and election officials collaborate to counter health-related election misinformation in 2024?