What documented instances are there of AI‑generated ads using celebrity doctors to sell supplements?

Checked on January 30, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

A growing body of reporting documents AI‑generated ads that use doctored videos or images of real doctors and celebrities to promote supplements, with investigators finding hundreds to thousands of examples across platforms and naming repeat campaigns and actors; fact‑checking groups, cybersecurity firms and mainstream media have traced many of these ads to commercial supplement drives even as platforms and vendors dispute responsibility [1] [2] [3]. The documented cases range from broad research sweeps that catalog thousands of fake endorsements to specific incidents in which named physicians and institutions were impersonated to hawk weight‑loss or diabetes supplements [2] [4] [5].

1. The Full Fact / Wellness Nest revelations: hundreds of deepfakes pushing supplements

A Full Fact investigation found hundreds of videos on TikTok and other platforms that impersonated doctors and academics and steered viewers to products sold by a company called Wellness Nest. The videos used reworked footage and audio to make real experts appear to endorse menopause supplements such as probiotics and Himalayan shilajit; the reporting includes interviews with targeted academics such as Professor Taylor‑Robinson, who discovered his likeness had been used without permission [1] [3].

2. Large‑scale scans: Bitdefender, ESET and the scope of the campaigns

Security researchers at Bitdefender reported observing more than 1,000 deepfake videos promoting over 40 fake supplements, and said the campaigns exploited cloned celebrities and health experts to push bogus “miracle cures” on Meta platforms. ESET, meanwhile, located dozens of TikTok and Instagram accounts using AI‑generated doctor avatars to market wellness products, evidence that the problem is industrial in scale rather than isolated [2] [6] [7].

3. Named victims and institutional warnings: BMJ, Baker Institute and Dr Karl

Investigations and industry reporting identified concrete victims. A BMJ inquiry documented deepfakes of three prominent UK doctors used to shill products, and the Baker Heart and Diabetes Institute publicly warned that videos purporting to show its clinicians promoting a diabetes supplement were fake; separately, Australian outlets reported an April 2024 campaign that used a deepfake image of Dr Karl Kruszelnicki to sell pills on Facebook [4] [8].

4. Local consumer harm: weight‑loss scams, Oprah impersonations and individual complaints

Local news and consumer groups documented cases where ads using celebrity and doctor deepfakes led to financial loss. A Central Florida consumer warning cited Facebook ads that used celebrities and physicians to sell weight‑loss supplements, with one ad even claiming an Oprah endorsement, and reported individual losses, illustrating how these scams translate into real monetary harm [5] [9].

5. Specific product examples and disputed vendor responsibility

Reporting highlighted named products and companies of concern. ABC, for example, investigated Glyco Balance and tried unsuccessfully to confirm the involvement of its claimed manufacturer, Vellec Group, while regulators such as Australia’s TGA said they were assessing claims after false “approval” assertions surfaced. Together these cases show that ad creatives, landing pages and claimed manufacturers often cannot be reliably traced to accountable sellers [10].

6. Platforms, takedowns and the contested response

Platforms have removed some flagged content: TikTok reported proactive removal rates and said it took action on accounts flagged by watchdogs, and Meta said it removed ads that violated its standards. But researchers and journalists note uneven enforcement and the rapid reappearance of similar accounts, prompting calls from fact‑checkers and academics for faster, more consistent moderation [7] [11] [1].

7. Why this matters and what remains unclear

The documented instances show a pattern: AI tools make it easy to synthesize convincing endorsements that exploit trust in medical professionals and celebrities, increasing the likelihood that consumers will buy ineffective or unsafe supplements. However, reporting frequently cannot prove which corporate actors commissioned particular deepfakes, and in many cases platform provenance and payment trails remain opaque, limiting attribution and legal recourse [2] [10] [12].

Want to dive deeper?
Which specific platforms have the highest number of reported AI deepfake supplement ads and what enforcement actions have they taken?
What legal avenues have impersonated doctors pursued against companies or platforms hosting AI‑deepfake ads?
How do cybersecurity researchers detect and attribute AI‑generated medical deepfakes on social media?