What are documented examples of AI deepfakes appearing on YouTube in 2023–2025?
Executive summary
From late 2023 through 2025, multiple documented instances of AI-generated "deepfake" video and audio surfaced on or circulated via YouTube, ranging from high-engagement political and celebrity fakes that fact-checkers debunked to targeted scams built on a CEO's synthetic likeness, and these cases prompted platform policy changes and new detection tools [1] [2] [3] [4]. The record shows both concrete examples of misuse and an intensifying, contested industry response over how to detect, label and remove these synthetic clones [5] [6].
1. Documented high‑visibility deepfakes that appeared on or spread via YouTube and allied platforms
Fact-checking outlets compiled multiple high-profile cases in 2024–2025: viral clips that used AI to fabricate sports and political speech, such as a March 2025 fake of Mohamed Salah framed as a post-match comment and an AI-generated kung-fu-style fight between Nigerian politicians, both of which fact-checkers identified as synthetic in profiles of deepfakes fact-checked in 2025 [2]. Earlier pressure around synthetic depictions helped drive YouTube's late-2023 announcements that it would label AI-generated videos and offer takedown paths for people depicted without consent [1] [7].
2. Targeted, criminal uses documented on YouTube: phishing and scams
Security reporting documented that threat actors used AI-generated videos of YouTube CEO Neal Mohan in private-video phishing campaigns aimed at creators, sending falsified internal messages to harvest credentials or install malware, a concrete instance of deepfakes weaponized for platform fraud rather than mass disinformation [3]. The case illustrates that synthetic likenesses were not just viral hoaxes but tools in targeted social-engineering campaigns observed in 2025 [3].
3. Non‑consensual and political misuse that shaped public concern
Journalism and NGO commentary highlighted two recurring categories: non-consensual sexual content targeting private individuals, and politically sensitive fakes that depict leaders saying or doing things they never did. Both risks were explicitly raised when YouTube rolled out labeling and takedown options in 2023 because of the harms posed by lifelike synthetics [1] [7]. Fact-checking networks also traced AI-generated fabrications of public figures that circulated broadly on social platforms in 2025, underscoring the real-world destabilization risks [2] [8].
4. YouTube’s documented countermeasures, collaborations and policy moves
YouTube moved from disclosure rules in late 2023 to pilots of "likeness detection" and Content-ID-style tools that let creators flag unauthorized AI clones. It also joined industry and legislative efforts, endorsing anti-deepfake legislation and piloting tools with partners such as the talent agency CAA to give celebrities early access to protection mechanisms [5] [4] [9] [6]. These steps were reported as concrete responses to documented misuse, with the platform adding removal-request channels and developer pilots to identify synthetic impersonations [7] [4].
5. Contested tradeoffs, technical limits and hidden agendas in the response
Coverage flagged tensions. YouTube's likeness detection required creators to opt in and raised privacy alarms because its systems could rely on biometric data that might be used to further train company models, according to experts who warned about Google's vertical interest in training data [4]. Platforms also caution that the tools will not catch every fake: low-resolution or heavily altered videos can escape detection, and critics warn about the burden placed on creators who must verify their identity to gain protection [4] [10]. Industry lobbying and platform commercial interests form an implicit backdrop to these policy shifts: endorsements of laws like the No Fakes Act and partnerships with rights holders indicate both legal and business incentives shaping the responses [6] [9].
6. What reporting does not (yet) settle and where documentation is thin
Public reporting documents specific viral fakes, phishing campaigns using synthetic CEOs, and platform policy reactions. However, the available reporting in this collection does not provide a comprehensive catalogue of every deepfake uploaded to YouTube between 2023 and 2025, nor independent audits of detection-tool efficacy. While individual cases are well documented, the full scale and success rate of both abuse and remediation remain underreported in these sources [2] [3] [4].