Russian disinformation

Checked on December 10, 2025
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Russian disinformation in 2024–25 is a coordinated, multi‑vector campaign that uses state media, proxy websites, Telegram channels, and sophisticated technical tools to target Europe, the Americas, and the Global South; studies found that networks such as the “Pravda ecosystem” produced hundreds of thousands of publications across dozens of countries and hundreds of domains [1] [2]. Western governments and watchdogs report operations that include deepfakes, generative‑AI content and data‑manipulation campaigns (for example “Skvozniak”), while counter‑efforts rely on sanctions, rapid attribution and takedowns [3] [4] [1].

1. How the Kremlin’s playbook looks now: volume, platforms and narratives

Russia’s modern approach scales up old Cold War templates for the internet age: a “firehose” of content distributed by state outlets, proxy sites and coordinated Telegram channels, then seeded into mainstream and fringe English‑language sites to gain credibility [5] [2] [1]. The U.S. State Department’s archived work documents persistent narrative templates and five “pillars” of the ecosystem, an explicit acknowledgement from Western institutions that the effort is systemic and multi‑layered [6] [7]. Analysts say the aim is not always persuasion but disruption: to create confusion, polarise publics and weaken support for Ukraine and allied policies [4] [7].

2. New technical tools and newer risks: AI, deepfakes and data manipulation

Recent reporting and government statements single out generative AI and manipulated video as growing capabilities used to “flood social media,” and security researchers have documented campaigns that insert false public data to provoke fear [3] [4]. Defence‑sector reporting describes an operation named “Skvozniak” that falsifies energy and weather data to stoke blackout fears ahead of winter 2025/26, while other campaigns impersonate official Telegram channels to publish fake orders: tactics that mix cyber, information and operational aims [4]. Researchers also warn about “LLM grooming,” attempts to seed AI training datasets with pro‑Kremlin content via mass publishing [2].

3. Geographic focus and audience tailoring

Open‑source studies show Kremlin networks disproportionately target former Soviet states, Central Europe, the Balkans and emerging Europe, tailoring language and narratives to exploit local grievances; Latin America and Africa are also noted as arenas for influence operations [1] [8]. The Guardian and think‑tank analysts documented how English‑language sites link to pro‑Kremlin pieces, increasing reach and the chance such narratives enter mainstream discourse [2]. The EU and national authorities have repeatedly highlighted campaigns aimed at electoral influence and public opinion in Poland and elsewhere [9] [10].

4. What Western responses look like — sanctions, attribution and policy debates

Governments have used fines and sanctions against individuals, channels and entities tied to disinformation, and officials publicly describe the threat as an “information war,” urging stronger responses [3] [10]. The U.S. Global Engagement Center and European institutions produced reports and resolutions to expose and sanction networks, while independent forensic teams (e.g., Bellingcat partners in Ukraine) focus on rapid detection and neutralisation of deepfakes and fake channels [11] [4] [6].

5. Disagreement and limits in the evidence base

Not all scholarship portrays Russia’s apparatus as monolithic or uniformly centralised; RAND earlier observed that popular portrayals sometimes overstate how organised and well‑resourced the machine is, while cautioning that its activity is nonetheless harmful and evolving [12]. Defence analysts argue that Kremlin narratives now resonate mainly within pro‑Russian echo chambers and parts of the Global South, noting that Western counter‑capabilities have reduced the Kremlin’s decision‑shaping power, a contrast to studies that stress continuing wide reach [4] [2].

6. Practical implications: what to watch next

Expect continued layering of cyber‑operations with information payloads timed to military or diplomatic events [4]. Watch for more AI‑enabled fabrications and attempts to seed LLM datasets via mass‑publishing networks, and for policy flashpoints where platform moderation changes could increase vulnerabilities [2] [10]. Available sources do not mention specific remediation playbooks beyond sanctions, attribution and takedowns; they do, however, emphasize multi‑actor responses from governments, civil society and tech partners [6] [11].

Limitations: this briefing draws only on the reporting and government analyses supplied here; it does not include primary intelligence assessments absent from these sources. Sources disagree on scale versus effectiveness: RAND warns against exaggeration, while multiple 2024–25 studies and news investigations document substantial, targeted campaigns [12] [1] [2].

Want to dive deeper?
What are the latest tactics Russian disinformation campaigns used in 2024–2025?
Which social media platforms are most exploited by Russian disinformation and how are they responding?
How do Russian state actors coordinate disinformation with proxy networks and bots?
What impact has Russian disinformation had on recent elections and public trust in NATO countries?
What tools and strategies do journalists and researchers use to detect and debunk Russian disinformation?