What evidence exists that AI-generated likenesses of physicians are being used in supplement ads?

Checked on January 12, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Multiple credible news investigations and specialist outlets document concrete examples in which AI-generated or AI-altered likenesses of real physicians have been used to promote dietary supplements and other dubious treatments online, naming specific doctors and the platforms where those videos and ads appeared [1] [2]. Reporting also shows that platforms sometimes remove flagged accounts but that deceptive campaigns can persist across social media and e-commerce listings, and independent analyses explain the tactics and mechanics behind these campaigns [3] [2] [4].

1. Documented victim cases: named doctors and doctored endorsements

High-profile, verifiable incidents cited in national reporting include Dr. Gemma Newman, who warned followers on Instagram after a TikTok video was altered to make it appear she promoted vitamin B12/beetroot capsules [1]; Dr. Christopher Gardner, whose voice was cloned for YouTube videos pushing false advice to older adults [1]; and endocrinologist Robert Lustig, who was impersonated to endorse an unapproved liquid weight-loss product [2] [1]. Investigative pieces also recount UK clinicians and public-health figures whose recorded remarks were repurposed to advertise menopause or supplement remedies [5] [6].

2. Platforms and commercial channels where these ads appear

Reporting ties these deepfakes and AI-generated clinician avatars to mainstream platforms and commercial channels: social sites such as TikTok, Facebook/Meta, and YouTube host the videos and accounts [3] [7]; marketplaces including Amazon and Walmart have listed products advertised with fake clinician endorsements; and even sponsored Google search ads have been implicated [2] [1] [8].

3. Independent analyses of tactics: why and how scammers use AI likenesses

Security and technology analyses describe a consistent pattern: scammers use generative AI and voice-cloning models to create convincing visual and audio endorsements, repurpose real footage, rotate pages and accounts to evade moderation, and exploit urgency and authority cues to sell "miracle" supplements or counterfeit products, tactics mapped in technical and consumer-safety writeups [4] [7] [9]. Investigators suggest one motive is to build followings or monetizable channels and to lend perceived medical authority to product claims [1] [4].

4. Platform responses, removals and gaps

Platforms have a mixed record: outlets report that TikTok and others have removed accounts flagged by journalists or advocacy groups, citing spam and deception rules [3], and spokespeople point to policies banning unauthorized medical likenesses and unsafe supplements [9]. But multiple reports also document campaigns that persisted after being identified, and researchers warn that bad actors can relaunch content under new accounts or on other services, creating enforcement gaps [2] [4].

5. Scope, harms, and limits of existing evidence

The assembled reporting provides repeated, corroborated examples showing the phenomenon is real and recurring, and experts warn of patient harm from misleading dosing or false treatment claims [1] [9] [6]. However, the sources do not offer a precise, platform-wide prevalence estimate or a comprehensive audit of supplement ads, so conclusions about scale must rest on patterns and case studies rather than exhaustive data [4] [7]. Platforms, for their part, emphasize policy enforcement and removals, but independent investigators and affected clinicians argue those removals are incomplete and reactive [3] [2].

6. What the evidence proves and what remains uncertain

Collected reporting proves that AI-generated and AI-altered likenesses of real physicians have been used in supplement ads and that such content appears across social platforms and marketplaces, with named victims and product examples documented by major outlets [1] [2] [10]. What remains less certain in the provided material is the full prevalence, the precise supply chain of bad actors (ad networks, payment processors, vendors), and the long-term efficacy of platform countermeasures, areas where the reporting signals concern but does not supply comprehensive empirical totals [4] [3].

Want to dive deeper?
Which major supplement listings on Amazon or Walmart have been tied to AI‑generated doctor endorsements?
How do social platforms detect and remove AI‑generated impersonations of medical professionals?
What legal remedies are available to physicians whose likenesses are used without consent in commercial deepfakes?