Give us an example of disinformation distributed by Russians recently

Checked on November 23, 2025
Disclaimer: Factually can make mistakes. Please verify important info or breaking news.

Executive summary

Recent reporting documents multiple specific examples of Russian disinformation campaigns — from copycat news sites (the “Doppelganger” tactic) to the Pravda network, which pushes pro‑Kremlin narratives into English‑language sites, to targeted social‑media video operations such as Storm‑1516 — that aim to influence elections, public opinion on Ukraine, and AI training data [1] [2] [3]. Authorities and analysts warn that these campaigns now often use AI, botnets and networks of fringe and mainstream sites to amplify messages, and that they may try to seed large language models with pro‑Russia content [2] [4].

1. A concrete recent example: the Pravda network and mass linking

Investigations by the Institute for Strategic Dialogue and reporting in The Guardian describe a pro‑Kremlin operation dubbed the “Pravda network,” in which hundreds of English‑language websites — including mainstream outlets and fringe blogs — link to pro‑Kremlin articles and treat them as credible, dramatically increasing the reach and perceived legitimacy of those narratives [2]. The study found that more than 80% of the citations treated the network’s content as credible, and experts warned the operation could be intended to seed pro‑Russia content into AI training datasets [2].

2. Deepfake and fabricated video campaigns: Storm‑1516 and the Zelensky claims

BBC Verify reporting points to a named Russian disinformation group, Storm‑1516, as the probable source of fabricated videos making false claims about Ukraine’s president and other invented events; independent researchers cited by BBC Verify identified the group as likely responsible [3]. This illustrates the tactic of producing falsified audiovisual material to undermine Western support for Ukraine and sow confusion [3].

3. Electoral meddling via copycat sites and covert networks

EU reporting and The Guardian cite tactics such as the “Doppelganger” campaign — copycat versions of established media sites used to promulgate anti‑Western narratives — and allege Russian involvement in large‑scale interference, including one case in which an election was annulled (Romania) after declassified intelligence cited cyber‑attacks on electoral systems and social‑media meddling [1]. The BBC also uncovered a secret Russian‑funded network that attempted to disrupt an eastern European election and used paid operatives to influence polling and interviews in Moldova [5].

4. AI and “LLM grooming”: a new amplification vector

Multiple reports warn that Russia (or pro‑Kremlin actors) may be deliberately flooding the internet with pro‑Russia content to influence training data for large language models, a process critics call “LLM grooming.” The Guardian and academic commentary note concerns that chatbots have at times echoed Russian disinformation and that some actors are trying to seed AI systems with these narratives [2] [4]. Australia’s intelligence chief and other analysts have similarly raised alarms about AI enabling faster, more convincing disinformation [6] [4].

5. Methods and scale: botnets, paid operatives, and mainstream amplification

Reporting across outlets and think‑tanks documents a blended toolkit: automated botnets to amplify hashtags and trending topics on platforms like X/Twitter, networks of hundreds of websites that republish and link to pro‑Kremlin content, covertly funded local operatives who conduct biased polling or interviews, and pseudo‑academic outlets used to lend authority [2] [5] [4] [7]. The U.S. and EU have responded with seizures, charges and proposals for new centres to counter these hybrid attacks [8] [1] [9].

6. Competing interpretations and limitations in reporting

Not all reporting treats every flagged network as directly controlled by the Kremlin; some analyses describe “pro‑Kremlin” or Russian‑funded networks rather than state‑operated units, and Russian officials routinely deny involvement and cast such claims as political [5] [10]. The available sources do not provide full forensic proof of Kremlin direction in every case: some document the activity and funding links, while others note patterns and expert attribution without publicly disclosing all of the underlying intelligence [2] [5].

7. Why this matters and what to watch next

The pattern across these cases is strategic: sow doubt about Ukraine and Western institutions, polarise domestic politics in target countries, and build persistent online reservoirs of pro‑Kremlin material that can be amplified or folded into AI systems [1] [2] [4]. Watch investigations by EUvsDisinfo, national intelligence disclosures and journalistic probes for evolving attributions, and monitor platform takedowns and official sanctions as indicators of scale and state linkage [11] [8] [9].

Limitations: This summary relies only on the supplied reporting; where sources do not specify operatives’ chain of command or present definitive state attribution, I note that nuance rather than asserting conclusions beyond what the reporting states [2] [5].

Want to dive deeper?
What is a recent verified example of Russian disinformation and how was it spread?
Which social media platforms have been used to amplify Russian disinformation in 2024–2025?
How do fact-checkers verify and debunk Russian-origin disinformation campaigns?
What motives and strategic goals underpin recent Russian disinformation efforts?
How can individuals spot telltale signs of Russian-origin false narratives online?