How have deepfakes and AI‑generated endorsements been used in online health product scams?

Checked on January 6, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Deepfakes and AI‑generated endorsements have become prominent tools in online health product scams: synthetic video, audio and images of real clinicians are used to sell unproven supplements and treatments [1] and to impersonate trusted medical figures to lend credibility [2]. Investigations and health organisations have documented multiple campaigns, from fake TV‑doctor ads to videos of academic experts pitching diabetes supplements, that underline both the immediacy of the threat and the difficulty of policing it on social platforms [3] [4].

1. How scammers use synthetic clinicians to sell products

Fraudsters take existing footage or images of recognised doctors and resynthesise their words, facial movements or voices so the clinician appears to endorse a product, from “miracle” diabetes supplements to GLP‑1 drinkables and creams, then pair that synthetic endorsement with fake certificates or guarantees to persuade viewers to buy [5] [6] [7]. Major outlets and medical journals have documented campaigns that recycle the likenesses of popular TV doctors and health experts across TikTok, Facebook, Instagram and X to market unproven remedies, sometimes re‑using a single speech clip across many fake ads [3] [8] [5].

2. Why deepfakes are effective and cheap to produce

The surge in such scams tracks advances in generative AI and diffusion models that have made photorealistic imagery and voice synthesis fast and accessible; techniques that were once niche now produce convincing videos with minimal technical skill or computing power [2]. Openly available image‑generation models and voice tools lower the barrier to entry, and studies suggest many viewers struggle to reliably distinguish fabricated scientific commentary from genuine footage, a gap scammers exploit to monetise trust [2] [9].

3. Real incidents, victims and documented harms

Reporting and institutional alerts document concrete cases: Australian scammers used deepfake images of a high‑profile science communicator to sell pills in 2024; the same year, Diabetes Victoria flagged deepfake videos of Baker Heart and Diabetes Institute staff promoting supplements; and multiple prominent UK doctors have been deepfaked to hawk products, prompting takedown campaigns and complaints [4] [10] [3]. Beyond financial loss, experts warn these campaigns can cause medical harm by encouraging people to reject evidence‑based treatments or to divulge sensitive health data through follow‑up phishing or bogus consultations [2] [11].

4. Platforms, takedowns and the tug‑of‑war over content

Social platforms have removed some identified deepfakes after complaints (TikTok, for example, took down videos weeks after a target complained), but moderation is inconsistent and slow relative to the rate of new uploads, and automated detection tools struggle to keep pace with rapid improvements in generative models [5] [8]. Investigations by media organisations and health groups show creators often spread content across multiple platforms and mirrored accounts to evade enforcement, and victims face a long, uneven process to get harmful content removed [5] [1].

5. Motives, hidden agendas and the limits of current reporting

Financial profit is the clearest motive, whether from selling unproven supplements or directing consumers to affiliate schemes, but experts also highlight secondary aims such as sowing confusion, undermining trust in conventional care, and harvesting personal data for highly personalised follow‑up deepfake or phishing scams [2] [11] [12]. Reporting reliably documents the tactics and several campaigns, but available sources do not provide a complete map of perpetrators, their networks, or the total scale of consumer harm, so the full scope remains uncertain [1] [13].

6. What commentators recommend and where responsibility sits

Medical journals, cybersecurity specialists and public health bodies call for a combined response: better public education to spot miracle claims, stronger platform enforcement and faster takedowns, improved AI‑detection tools, and legal or regulatory pressure on sellers who traffic in fraudulent health claims, alongside advice to verify sources before acting on medical advice found online [1] [7] [13]. Alternative viewpoints in the coverage note trade‑offs, since overzealous takedowns risk censoring legitimate speech and automated detectors can misclassify content, underscoring that technical, legal and civic remedies must be carefully balanced [8] [2].

Want to dive deeper?
How do social platforms currently detect and remove AI‑generated deepfake health ads?
What legal remedies have physicians used to fight deepfakes of their likenesses promoting products?
Which detection tools and red flags most reliably identify AI‑generated medical endorsements?