How have AI‑generated political deepfakes been used in U.S. elections since 2020, and how have platforms responded?
Executive summary
Since 2020, AI-generated political deepfakes have appeared in U.S. elections as a mix of novelty, targeted deception and trolling, ranging from phony robocalls and cloned voices to AI-created images used in attack ads. Platforms and policymakers have scrambled to detect, label and restrict synthetic media, and have sometimes retreated from earlier moderation standards as the technology and political pressures evolved [1] [2] [3] [4].
1. The playbook: how deepfakes have been deployed
Bad actors have used synthetic audio, images and video to impersonate candidates, suppress turnout and manufacture spectacle: AI-generated robocalls impersonating Joe Biden sought to deter primary voters in New Hampshire, AI-cloned voices were used to make a local candidate appear to condone violence, and AI-created images have been inserted into campaign ads to imply compromising relationships or events [1] [2].
2. Notable examples that shaped the narrative
The most widely reported incidents include the 2024 robocall cloning Biden’s voice ahead of the New Hampshire primary and a 2023 case in which a Chicago mayoral candidate’s voice was cloned on a fake news outlet. Campaigns have also used AI imagery (for example, a DeSantis campaign ad featuring AI-generated images of Donald Trump hugging Anthony Fauci), and official accounts have at times posted tongue-in-cheek or provocative AI images, what some outlets call “slopaganda,” that blurred the line between parody and deception [1] [2] [5].
3. Scale, platforms and actors: not just state actors
Deepfakes in U.S. elections have come from a mix of domestic partisans, scammers and foreign adversaries. Research and reporting documented spikes in AI-enabled disinformation campaigns during critical windows, such as the weeks before Election Day, and identified marketplaces and messaging platforms used to procure synthetic-media tools; state actors remain a persistent concern [3] [6].
4. Measured impact: real harm, uneven effects
Experts and post-election studies paint a mixed picture. Deepfakes have the potential to erode trust and suppress votes, risks underscored by democracy-focused groups after January 6 and by legal scholars warning of manufactured fraud narratives. Yet empirical work after the 2024 cycle suggests many fakes were either obvious or failed to change outcomes, and some researchers conclude that political misinformation predates AI and that AI has amplified rather than created the problem [7] [8] [9] [10].
5. Platform responses: detection, labeling, rollback and limits
Platform responses have been layered. Some invested in detection tools, information panels and takedowns; others relaxed moderation thresholds, notably YouTube’s 2023 decision to stop policing claims about past elections in some contexts. Layoffs of moderation staff and the sheer volume of synthetic content strained enforcement, even as new startups and in-house AI tools emerged to identify manipulated media [11] [4] [3].
6. Legal and regulatory reaction: patchwork rules and proposals
Legislatures and advocates have pursued multiple tracks: model state laws and statutes in California, Minnesota and Washington address deceptive election-related synthetic media, federal bills have been proposed to criminalize certain pre-election deepfakes, and attorneys general and regulators have scrutinized platforms (for example, inquiries into X’s AI chatbot producing sexualized deepfakes). Observers nonetheless note the lack of comprehensive federal regulation, and many bills have stalled in Congress [2] [12] [1].
7. Assessment and what comes next
The record since 2020 shows deepfakes are already part of the electoral toolkit but have not yet produced decisive, election-changing outcomes. The primary harms so far are localized deception, erosion of trust and new avenues for scams, while defenses remain a blend of platform policy, detection technology, state laws and voter education, all of which will need to scale as generative AI becomes faster and harder to distinguish from reality [3] [9] [11] [10].