What evidence have fact-checkers found about AI-generated videos of protests since 2024?

Checked on January 16, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Since 2024, fact‑checkers have repeatedly identified wholly synthetic, AI‑generated videos circulating around protests in Iran, Los Angeles, Venezuela and elsewhere, using a mix of forensic clues, platform metadata and admissions by creators to label content as fake or misleading [1] [2] [3]. They have also documented a contested dynamic: the same AI tools that create deepfakes are being used by citizens and platforms to debunk them, while platform affordances and political actors sometimes amplify the false clips [4] [5].

1. Catalog of concrete examples fact‑checked by journalists and NGOs

Major outlets and verification bodies have published multiple case studies. NewsGuard flagged seven distinct AI‑generated videos about the 2026 Iranian protests that together drew millions of views [2]. AFP and BBC teams traced viral Venezuelan and Iranian clips to accounts that regularly post synthetic content and to older, unrelated footage [3] [6]. Full Fact determined that a viral Portsmouth protest clip was a deepfake built on a September 2024 image [7], and outlets covering U.S. unrest documented at least one AI‑generated clip and many repurposed old videos around the ICE and LA protests [8] [4].

2. How fact‑checkers identify AI‑generated protest footage

Verification teams rely on a mix of techniques: visual forensics (implausible camera angles, patterned lighting, and artefacts such as changing text or ghosting), cross‑referencing of timestamps and geolocation, platform metadata, and reverse‑image searches. Full Fact pointed to an inexplicable change in flag text as a tell, and DW highlighted bird's‑eye views, indistinct faces and patterned flashlights as common AI markers [7] [1]. Fact‑checkers also use provenance checks and earlier uploads: BBC Verify and AFP have traced many viral clips to unrelated older footage or to accounts known for synthetic content [6] [3].
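Where a viral clip is suspected of being recycled rather than synthetic, one repeatable provenance check is frame‑level near‑duplicate matching against a candidate older upload. The Python sketch below illustrates the idea with perceptual hashing; the file names and the Hamming‑distance threshold are hypothetical, the imagehash and opencv-python packages are assumed to be installed, and this is an aid to, not a substitute for, the manual cross‑checks described above.

```python
# Sketch: compare sampled frames from a viral clip against frames from a
# suspected older source using perceptual hashes (near-duplicate detection).
import cv2                # pip install opencv-python
import imagehash          # pip install imagehash
from PIL import Image

def sample_frame_hashes(path: str, every_n: int = 30) -> list:
    """Grab every Nth frame and compute a 64-bit perceptual hash for each."""
    capture = cv2.VideoCapture(path)
    hashes, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    capture.release()
    return hashes

viral = sample_frame_hashes("viral_protest_clip.mp4")   # hypothetical file
older = sample_frame_hashes("archive_candidate.mp4")    # hypothetical file

# A Hamming distance <= 8 on a 64-bit pHash is a common rough-match heuristic.
matches = [(i, j) for i, a in enumerate(viral)
           for j, b in enumerate(older) if a - b <= 8]
print(f"{len(matches)} near-duplicate frame pairs; review them manually.")
```

Matched frame pairs only flag possibly recycled footage for human review; re‑encoding, cropping and overlays can defeat simple hashing, which is why teams pair it with reverse‑image search and metadata review.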

3. Admissions, watermarks and crowdsourced signals

In several high‑profile cases the originators either admitted using AI tools or left identifiable traces: an Instagram user admitted creating an Iran protest clip with AI, a Forbes piece noted an “AI‑generated” watermark on a viral Iranian “Trump Street” clip, and X/Twitter community notes and crowdsourced fact checks have flagged misleading videos [1] [9] [3]. These admissions and platform annotations give fact‑checkers direct evidence that some widely shared protest clips are synthetic [3].
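Container metadata is one place such traces can surface. As a complement to visible watermarks and creator admissions, a verification workflow might sweep a file's format and stream tags for tool identifiers. The sketch below assumes ffprobe (part of FFmpeg) is installed; the keyword list and file name are illustrative, not a catalogue of real watermark fields, and since platform re‑encoding typically strips tags, a clean result is not evidence of authenticity.

```python
# Sketch: scan a video's container tags for strings that hint at an AI tool.
import json
import subprocess

KEYWORDS = ("generated", "synthetic", "diffusion", "deepfake")  # illustrative

def scan_container_tags(path: str) -> list:
    """Run ffprobe and return (key, value) tags mentioning a keyword."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    probe = json.loads(result.stdout)
    tags = dict(probe.get("format", {}).get("tags", {}))
    for stream in probe.get("streams", []):
        tags.update(stream.get("tags", {}))
    return [(k, v) for k, v in tags.items()
            if any(word in f"{k} {v}".lower() for word in KEYWORDS)]

for key, value in scan_container_tags("viral_protest_clip.mp4"):  # hypothetical
    print(f"possible tool trace: {key} = {value}")
```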

4. The double‑edged role of AI in verification and amplification

Researchers and newsrooms report a paradox: AI chatbots and image tools are used both to fabricate vivid protest scenes and to check claims about them. TIME and WIRED documented Grok and other AIs serving as de facto fact‑checkers for users, even as they sometimes amplified errors, and outlets warn of AI models confidently asserting false provenance [4] [5]. That dynamic has real consequences: authoritative‑seeming AI replies can give otherwise dubious posts greater reach, while professional verification still relies on manual cross‑checks [5] [4].

5. Patterns in who produces and who benefits from synthetic protest videos

Fact‑check reports show synthetic protest clips come from varied actors, domestic and international, pro‑ and anti‑regime alike, and typically reinforce preexisting political narratives, whether by exaggerating violence, claiming fabricated victories, or stigmatizing opponents; NewsGuard and The Independent emphasize that actors on both sides of a conflict have produced AI clips that confirm partisan frames [2] [10]. Verification coverage also flags the incentive structure: sensational AI clips spread fast and can be monetized or wielded politically before debunking takes hold [10] [3].

6. Limits, ongoing gaps and what fact‑checkers cannot yet settle

While fact‑checkers have solidly identified many synthetic protest videos and developed repeatable detection cues, reporting makes clear there are limits: some viral clips’ origins remain unconfirmed despite flags and provenance work, AI detection is an arms race, and the presence of old or repurposed real footage complicates attributions [3] [6]. Academic work notes evolving techniques and uneven fact‑checking capacity across countries, meaning the catalogue of fake protest videos is almost certainly incomplete [11].

Want to dive deeper?
What technical markers do fact‑checkers use to distinguish AI‑generated protest videos from authentic footage?
How have platforms like X/Twitter and TikTok changed moderation or labelling policies in response to AI deepfakes of protests?
Which political actors or networks have been documented amplifying AI‑generated protest videos since 2024?