Has CNN or Dr. Sanjay Gupta publicly addressed deepfake ads using journalists' likenesses elsewhere?

Checked on January 16, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

CNN and Dr. Sanjay Gupta have publicly and repeatedly called out AI-generated “deepfake” advertisements that misuse journalists’ likenesses, using multiple CNN platforms — on-air video, written coverage and podcasts — to denounce the scams and warn the public [1][2][3]. Gupta has described the phenomenon as longstanding but escalating in quality, named at least one instance that featured him and colleague Anderson Cooper, and urged greater platform oversight and public vigilance [3][4][5].

1. CNN published on-air and online reporting documenting Gupta’s denouncement

CNN ran video and written pieces in which Dr. Sanjay Gupta explicitly identified and denounced AI-created ads that used his face and voice to sell bogus health cures; a July 31, 2025 CNN report carried a headline stating that Gupta "speaks out" after scammers used his likeness to promote fake health products [1][2]. Those pieces present the network's medical correspondent as a direct target of the scam and frame the story as both consumer fraud and a journalistic-integrity issue [1][2].

2. Gupta has discussed the problem in podcasts and put it in historical context

Beyond short-form video and news stories, Gupta addressed the deepfake ads at length on CNN podcasts including “Terms of Service” and “Chasing Life,” where he explained that impersonation scams using his image have occurred for years but that AI has recently increased the quality and reach of those fraudulent ads [3][4]. In those conversations he described specific examples circulating online, made clear that the endorsements were fake, and warned listeners that social media is now a primary channel where adults seek health information — amplifying the potential harm [3].

3. CNN coverage cites colleagues and shows the misuse extends to other in-house journalists

Gupta's statements in those podcast conversations and in other CNN pieces note that the deepfake audio and video also impersonated colleague Anderson Cooper in at least one clip, illustrating that the misuse is not limited to a single individual and that staff across the network can be targets [4]. CNN's reporting framed the issue both as a personal affront to named journalists and as a broader consumer-protection problem that networks and platforms must reckon with [1][4].

4. Third-party outlets amplified and summarized CNN’s reporting, sometimes adding calls for tech accountability

International and niche news sites republished or summarized CNN's coverage, echoing Gupta's condemnation and his call for stronger platform safeguards, consumer education and public verification of ads [5][6]. Those republications underline how a major outlet's internal reporting can drive wider media coverage, but they also introduce a secondary layer of aggregation where editorial slant and emphasis can vary [5][6].

5. What the sources do not show — limits of available reporting

The supplied reporting documents CNN and Dr. Gupta publicly addressing deepfake ads that used Gupta's likeness, and it references at least one instance involving Anderson Cooper. However, these sources do not provide a comprehensive catalog of every journalist whose likeness has been used by deepfake scammers, nor do they show CNN issuing industry-wide policy proposals beyond urging oversight [1][2][3]. The articles and podcast segments focus on specific incidents involving CNN staff and on warning the public; in the presented material, they do not document CNN pursuing legal action or disclose platform takedown outcomes in detail [1][3].

6. Takeaway: yes — but narrowly documented in the available reporting

In sum, CNN and Dr. Sanjay Gupta have publicly addressed the problem of deepfake ads using journalists' likenesses, principally by exposing and denouncing ads that used Gupta's image and voice (and a clip implicating Anderson Cooper) across CNN video, written stories and podcasts [1][3][4]. The coverage frames the issue as both personal victimization and a systemic consumer-harm risk, and third-party outlets have amplified those warnings. The provided sources, however, do not document a broader CNN campaign addressing deepfakes of many journalists, nor detailed platform-enforcement outcomes beyond calls for better oversight [2][5].

Want to dive deeper?
How have social media platforms responded to takedown requests for deepfake ads featuring public figures?
What legal remedies exist for journalists whose likenesses are used in commercial deepfake scams?
How effective are current deepfake-detection tools at identifying AI-generated health-advertising videos?