How are deepfakes and AI‑generated endorsements being used in current supplement scam campaigns?

Checked on January 19, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Deepfakes and AI-generated endorsements are being weaponized to create convincing advertisements for unproven supplements and weight‑loss products, using celebrity impersonations, forged medical experts, and multilingual content to build false credibility and drive purchases [1][2][3]. Researchers and consumer watchdogs report these tactics are widespread across major social platforms and tied to surges in scams around hot topics like GLP‑1 drugs [1][2].

1. The playbook: synthetic media as a trust shortcut

Scammers build social pages and ads that blend flashy imagery with AI‑generated video and audio to simulate endorsements, leveraging the veneer of a familiar face to bypass skepticism and create perceived legitimacy around “miracle” supplements [1]. These campaigns recycle classic health‑scam narratives—fast results, secret ingredients—while replacing amateur actors with convincing synthetic clips to boost click‑through and conversion rates, a technique noted in investigations and lab reports [1][4].

2. Celebrities as clickbait: deepfaked faces and names

A common tactic is putting celebrity likenesses into promotional clips; victims and trackers have seen fake videos purporting to show figures like Oprah endorsing products, and celebrities themselves report constant takedown fights against AI fakes [2]. Consumer alerts document widespread use of celebrity deepfakes specifically in weight‑loss and GLP‑1–adjacent supplement ads, amplifying reach because audiences are more likely to trust a familiar public figure [2][5].

3. Faux clinical authority: deepfake doctors and health experts

Scammers increasingly impersonate clinicians and public‑health figures with AI‑generated videos that make false medical claims, set up phony pharmacies, or direct buyers to dubious treatments—moves that exploit the public’s deference to medical authority and have been flagged by investigative reporting [3][5][4]. These fake experts are used not only to sell supplements but to mimic endorsements for alleged alternatives to legitimate GLP‑1 medications [2].

4. Scale, targeting and multilingual reach

Reporting shows operators running thousands of pages and over a thousand distinct deepfake videos, distributing ads in dozens of languages to capture global audiences; researchers have observed campaigns in at least 20 European languages alone, indicating sophisticated reuse and scaling of assets across markets [1]. The combination of low ad spend and wide distribution on social platforms makes it cheap to reach millions, magnifying harm quickly [1].

5. Monetization: affiliate links, sham pharmacies and product funnels

Investigations trace many deepfake ad funnels to affiliate marketing schemes or third‑party sellers that profit when consumers click through and buy suspect supplements—sometimes involving sham online pharmacies that mimic legitimate vendors to capture payment data and enroll buyers in recurring subscriptions [4][3]. Firms named in this reporting typically deny direct involvement, attributing the promotions to unaffiliated affiliates or bad actors misusing their links, a defense that complicates enforcement [4].

6. Platforms, detection, and the “whack‑a‑mole” problem

Major platforms like Meta are cited as a primary distribution vector for these AI ads and say they use detection tools including celebrity facial recognition to remove deepfakes, but researchers and victims describe a persistent churn of takedowns followed by reposts—what affected celebrities call “whack‑a‑mole” legal and moderation battles [1][3]. Cybersecurity analysts warn that AI also accelerates personalized targeting by harvesting public images and data to craft more convincing scams [6].

7. Harms and contested narratives

The harms include financial loss, erosion of trust in legitimate medical advice, and potential physical danger from unregulated products; consumer watchdogs and investigative reporters emphasize both the scale of deception and the emotional damage to victims [3][2]. At the same time, some companies and platform defenders argue they are deploying technical countermeasures and that not all disputed promotions are traceable to centralized operators—illustrating a tension between detection capabilities and the diffuse economics of affiliate marketing [4][1].

8. Limits of current reporting and open questions

Existing sources document tactics, scale, and examples, but they neither fully map the financial networks behind every campaign nor quantify how often deepfake ads convert into actual purchases. That leaves open which actors profit most and how frequently consumers are physically harmed by the products they buy; reporting acknowledges these investigative limits [1][4]. Continued tracking by cybersecurity labs, regulators, and journalists is needed to trace payouts, expose repeat offenders, and test platform detection effectiveness over time [6].

Want to dive deeper?
How do affiliate marketing networks enable the spread of AI‑driven supplement scams?
What technical methods are platforms using to detect and remove deepfake ads, and how effective are they?
What legal actions have celebrities and doctors taken against AI deepfakes used in health product advertising?