What evidence exists of AI-generated deepfakes using journalists’ voices during the Russia–Ukraine war and who produced them?
Executive summary
The documented record from 2022–2025 shows clear instances of AI-manipulated audio and video deployed during the Russia–Ukraine war, most famously a March 2022 deepfake of Ukrainian president Volodymyr Zelenskyy that featured a distorted voice and unnatural speech. Investigators trace many of these operations to pro-Russian actors and Kremlin-aligned disinformation networks, which amplified the content on Telegram, through hacked broadcast feeds, and across social platforms [1] [2] [3]. Reporting and research also show other AI-generated impersonations, including audio deepfakes of politicians and a "deepfake" videoconference caller, but the open-source record does not robustly document systematic mass production of deepfakes using the recorded voices of independent journalists specifically [4] [5].
1. What concrete deepfakes emerged early in the war
The clearest and most frequently cited case is the March 2022 video in which a figure purporting to be President Volodymyr Zelenskyy told Ukrainians to surrender; observers pointed to visual artifacts and a gravelly, distorted voice as signs the clip was manipulated, and it was distributed via hacked Ukrainian media pages and Telegram channels [1] [6] [2] [3]. Multiple fact-checking and research outlets treated that Zelenskyy clip as a prototypical example of AI-enabled disinformation in the opening weeks of the invasion [2] [7]. Reporting also highlights a manipulated clip attributed to Vladimir Putin and other fabricated political audio that circulated in the region [6] [3].
2. Who produced and amplified these deepfakes
Investigations and analyst reports attribute these early deepfakes to "pro-Russian actors," Kremlin-aligned propaganda networks, and hacking operations that altered legitimate broadcast streams to insert manipulated content; researchers say the content was then amplified by pro-Kremlin Telegram channels and networks such as the so-called "Bad Grammar" accounts traced back to Russia [2] [4] [8]. Official assessments and journalism describe a coordinated information-space effort in which hacks, Telegram amplification, and sympathetic networks served the Kremlin's interest in undermining Ukrainian resolve [2] [4].
3. Evidence about AI‑generated voices and who was impersonated
Beyond the Zelenskyy video's altered audio, documented examples include at least one audio deepfake of a European politician (Michal Šimečka) and a case in which a "deepfake" videoconference caller used a false identity to contact a U.S. senator's office, showing that the threat vector extends to voice impersonation as well as video [4] [5]. Scholarly reviews and DFRLab monitoring list multiple instances in which manipulated voices and AI-generated text were used to mimic officials and spread misleading claims; however, the sources do not provide systematic evidence that journalists' recorded voices were cloned at scale in the way elected officials were targeted [2] [4].
4. Quality, impact, and detection: why many attempts failed
Analysts and fact-checkers emphasize that early wartime deepfakes were often low quality and detectable: the Zelenskyy clip's visual jitter and odd audio timbre were widely noted, and quick debunking by Ukrainian officials and fact-checkers limited their broader success, though some localized impact (including temporary confusion and TV chyron hacks) did occur [2] [9] [3]. Academic studies found that even poor deepfakes can erode trust in authentic media, and that disinformation actors exploit unmoderated channels where debunking is slower [10] [4].
5. Limits of the public record and alternative interpretations
Open sources consistently identify pro-Russian actors as the primary producers, with hacked channels and Telegram amplification as the main distribution routes [2] [8] [4], but reporting also warns that "deepfake" is sometimes used as a catch-all label and that not every suspicious clip was AI-generated; some critiques argue that overstating AI's role can obscure classic propaganda techniques and hacking [4] [7]. Importantly, the reviewed materials do not present comprehensive forensic catalogs proving widespread AI voice cloning of journalists specifically; absence of evidence in these sources is not evidence of absence, only a limit of the available reporting [2] [4].
6. Bottom line
The evidence base shows multiple AI-enabled impersonations and at least one high-profile audio-visual deepfake (Zelenskyy) amplified by pro-Russian actors and networks, plus additional cases of audio impersonation of politicians and deceptive videoconference callers attributed to malign actors [1] [2] [4] [5]. The sources do not, however, document a proven, large-scale campaign explicitly cloning the voices of independent journalists; the public record instead documents targeted fakes of political figures, amplified through hacked broadcast infrastructure and pro-Kremlin channels [2] [4].