Fact check: Can old protest footage be used to manipulate public opinion in 2025?
Executive Summary
Old protest footage can be, and in 2025 has been, recycled, reframed, or synthetically enhanced to appear current or more dramatic than reality. Empirical studies, fact-check investigations, and reporting on AI advances converge: footage recirculated out of context, together with increasingly sophisticated AI tools that generate fake crowds and imagery, expands both the plausibility and the reach of such manipulations [1] [2] [3]. Verification tools exist but have known limits, making contextual scrutiny and multi-source corroboration essential for judging whether protest footage is genuine and timely [4] [5].
1. What the claim actually says — and what evidence supports it
The core claim is that old protest footage can be repurposed to shape opinions in 2025, either by presenting past scenes as current events or by combining archival clips with synthetic inserts to exaggerate impact. A 2025 study found that deepfakes carry credibility comparable to other misinformation formats, suggesting visual manipulations hold persuasive power similar to fabricated text or audio [1]. Independent fact-checking has documented instances in which demonstration footage from 2024 was shared as if it were current, directly illustrating the reuse tactic [2]. Reporting on fake campaign content also underscores the political motive to weaponize visual media [6].
2. Why modern AI makes recycled footage more dangerous and scalable
AI tools in 2025 increasingly let creators augment or synthesize crowd scenes, alter timestamps, and blend archival clips with generated elements into convincing composites. Reporting from October 2025 highlights AI's improving ability to fabricate crowds, which lowers the barrier to producing visually believable scenes of unrest or support [3]. Researchers at Citizen Lab documented state-linked campaigns that used AI-generated imagery to inflame tensions, showing that actors with technical resources can amplify the reach and emotional force of manipulated protest footage [7]. These developments raise the risk of scaled, targeted manipulations.
3. Real-world examples showing old footage used out of context
Fact-check investigations in 2025 identified specific cases in which protest videos from prior years circulated as current events, demonstrating the tactic's real-world effectiveness. One verification concluded that footage shared as a present-day demonstration was actually from 2024, misleading audiences who lacked the tools or time to investigate [2]. Parallel reporting found that political campaigns manufactured or circulated fabricated videos and quotes during 2025 election cycles, showing how visual material, whether emergent AI deepfakes or repurposed archives, can be deployed strategically to influence voters [6].
4. How credible are visual manipulations compared with other misinformation?
A peer-reviewed study released in April 2025 assessed political deepfakes and found them roughly as credible to audiences as misinformation delivered via text or audio, contradicting the assumption that videos are inherently more trusted or easier to refute [1]. This parity means that recycled footage, especially when paired with persuasive captions or amplification by influencers, can shape beliefs as effectively as coordinated text-based campaigns. Combined with visual media's emotional impact and growing synthetic realism, it makes old footage a valuable asset for manipulators.
5. Verification tools exist — but they’re imperfect and easily outpaced
Guides and toolkits published in late 2025 outline reverse-image searches, frame-by-frame analysis, metadata checks, and provenance tracing as core verification methods, but experts warn these techniques have limits against sophisticated edits and generative overlays [4]. AI detection systems can flag synthetic artifacts, yet researchers note that detectors sometimes fail and carry biases, creating false reassurance or missed manipulations [5]. Organizations like WITNESS emphasize training human rights documenters in adapted verification practices, showing that technical checks must be paired with documentary rigor to counter misuse [8].
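To make the reverse-search idea concrete, here is a minimal sketch of one building block behind reverse-image lookup: comparing a frame exported from a suspect clip against an archival still via perceptual hashing. It assumes the Pillow and imagehash Python packages, and the file names are hypothetical. A small Hamming distance is a signal that the "current" frame may be recycled, not proof of provenance.

```python
# A minimal sketch, assuming the Pillow and imagehash packages
# (pip install pillow imagehash). File names below are hypothetical.
from PIL import Image
import imagehash

def likely_recycled(suspect_frame: str, archival_still: str,
                    max_distance: int = 8) -> bool:
    """Compare two images by perceptual hash (pHash).

    pHash tolerates re-encoding, mild cropping, and color shifts, so a
    small Hamming distance is a signal (not proof) that the "current"
    frame was lifted from older footage.
    """
    suspect = imagehash.phash(Image.open(suspect_frame))
    archival = imagehash.phash(Image.open(archival_still))
    distance = suspect - archival  # Hamming distance between the hashes
    return distance <= max_distance

if __name__ == "__main__":
    # Hypothetical inputs: a frame exported from the viral clip and a
    # still from a 2024 news archive.
    if likely_recycled("viral_frame.png", "archive_2024.png"):
        print("Frames match closely: footage may be recycled.")
    else:
        print("No close match: inconclusive, check other signals.")
```

A check like this only helps when a candidate archival image is already in hand; it complements, rather than replaces, the contextual scrutiny and multi-source corroboration described above.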
6. Who uses these tactics — motives and possible agendas
Evidence points to a range of actors: political campaigns, authoritarian-aligned networks, and state-linked operations have all been implicated in creating or amplifying manipulated visual content to stoke unrest or delegitimize opponents [6] [7]. Media reports and research suggest motives that include voter influence, international destabilization, and narrative control. Because the actors and their aims vary, audiences should treat viral protest footage with skepticism about provenance and motive, asking who benefits from a particular framing and whether amplification patterns match coordinated campaigns [7] [6].
7. What this means for public opinion and democratic processes
Recycling or synthetically enhancing protest footage can reshape what citizens perceive as civic consensus, escalate tensions, and alter electoral dynamics by suggesting larger or more violent movements than actually exist. The convergence of documented out-of-context reuse [2], study findings on visual credibility [1], and AI-driven scaling of fake crowds [3] indicates a credible pathway for manipulation in 2025. The net effect is an environment where visual evidence no longer guarantees authenticity, increasing the informational burden on voters, journalists, and platforms to verify claims.
8. Practical takeaways: verification, transparency, and institutional responses
Given the documented threats and the imperfect state of detection tools, the most effective defenses combine rapid provenance checks, cross-source corroboration, and institutional transparency from platforms and newsrooms. Practical steps include reverse image searches, consulting fact-check investigations, and demanding metadata or eyewitness corroboration (a metadata sketch follows below); civic organizations and news outlets should prioritize verified sourcing and publicly label uncertainty [4] [8]. Policymakers and platforms must invest in detection research and disclosure standards, because technical tools alone cannot prevent the strategic reuse or synthetic enhancement of protest footage [5] [3].
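As one concrete example of a rapid provenance check, the sketch below reads a video's container metadata with ffprobe (part of FFmpeg) and prints the embedded creation_time tag. It assumes ffprobe is installed and on the PATH, and the file name is hypothetical. The tag is easily stripped or forged, so a mismatch with the claimed date is a red flag while a match proves little on its own.

```python
# A minimal sketch, assuming ffprobe (from FFmpeg) is installed and on PATH.
# The file name is hypothetical; creation_time can be stripped or forged,
# so treat the result as one signal among many.
import json
import subprocess

def container_creation_time(path: str) -> str | None:
    """Return the creation_time tag from a video's container metadata, if present."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    fmt = json.loads(result.stdout).get("format", {})
    return fmt.get("tags", {}).get("creation_time")

if __name__ == "__main__":
    stamp = container_creation_time("protest_clip.mp4")  # hypothetical file
    if stamp is None:
        print("No creation_time tag: metadata may have been stripped.")
    else:
        print(f"Container claims creation_time = {stamp}")
```

Checks like this belong at the start of a verification workflow: they are cheap and fast, but any conclusion should still rest on corroboration from fact-check archives, eyewitnesses, and original uploaders.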