How have deepfakes been used in medical and health-related scams?
Executive summary
Deepfakes have been weaponized across healthcare to impersonate trusted clinicians, promote unproven products, manipulate medical images and run social‑engineering attacks that steal patient data or prescriptions, producing both financial harm and potential clinical danger [1] [2] [3]. Reporting across medical journals, cybersecurity outlets and the mainstream press describes an ecosystem in which convincing synthetic audio and video and tampered clinical images are amplified by social platforms and telemedicine channels to lend credibility to scams [4] [5] [6].
1. Deepfake clinician endorsements: fake credibility for bogus products
Scammers routinely stitch together or synthesize video and audio of well‑known doctors to make it appear they endorse supplements, “miracle” cures or unapproved treatments, a tactic documented in BMJ investigations and mainstream news reports showing doctors’ likenesses used to sell ineffective products on social media [1] [2] [7]. The payoff is primarily financial: bad actors exploit the halo of professional authority to boost conversions and disarm user skepticism, while platforms’ slow takedown processes let harmful ads circulate for weeks or months [6] [2].
2. Voice deepfakes and social engineering inside healthcare operations
Audio cloning lets attackers impersonate executives or clinicians in phone calls, tricking staff into releasing patient data, changing orders or initiating wire transfers, a scenario flagged by cybersecurity leaders at care centers and in analyst commentary on the sector’s social‑engineering risk [5] [8] [9]. Because healthcare relies heavily on verbal instructions and trust in authority, cloned voices lower suspicion and can accelerate fraud or operational disruption, sometimes with life‑threatening stakes if clinical orders are altered or delayed [9] [8].
3. Deepfaked medical imagery: tampering with diagnosis and records
Generative models can insert or remove features in radiology and pathology images, opening pathways to insurance fraud, misdiagnosis or the undermining of clinical evidence, a risk explored in technical papers documenting how GAN and diffusion techniques can corrupt images stored in medical archives [3]. While some research highlights potential benign uses such as data augmentation, the same tools make image‑level tampering practicable for attackers seeking to falsify claims or manipulate care decisions [3].
4. Disinformation campaigns that erode public trust and health behaviors
Beyond direct sales and fraud, deepfakes spread health misinformation—fabricated public‑health pronouncements or doctored expert commentary—that can encourage rejection of evidence‑based treatments or sow confusion during crises, a dynamic chronicled in reporting on AI‑driven health disinformation and deepfake videos of experts on social platforms [10] [11] [12]. The strategic value for malign actors is not only immediate profit but the longer‑term erosion of trust in institutions, which amplifies future exploitation [10].
5. Platform amplification and accessibility: why the problem scales
Advances in diffusion models and open‑source tools mean convincing synthetic content can be produced quickly with modest input, and social platforms’ algorithms can rapidly amplify emotionally framed medical claims, enabling low‑effort, high‑reach scams that impersonate clinicians or exploit telemedicine channels [10] [11] [2]. Investigations show such content often persists online until reported by the targeted professional, underscoring weak detection and enforcement incentives for platforms [13] [2].
6. Counterarguments, mitigation and where reporting is limited
Experts and industry pieces stress detection, clinician verification and staff training as mitigations, and some argue that technical safeguards and platform policy will blunt the worst abuses, though current reporting offers little evidence of consistent, effective intervention [9] [6]. The sources document harms and plausible attack scenarios but do not quantify how often attacks cause clinical harm versus being thwarted; where hard data are missing, reporting relies on case studies, expert warnings and investigative anecdotes rather than large, representative incident datasets [1] [7] [5].
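As one illustration of what such a technical safeguard could look like, the sketch below is a minimal, hypothetical Python example (not taken from the cited sources; the archive directory and manifest filename are assumptions) that records SHA‑256 hashes of imaging files when they enter an archive and later flags any file whose bytes have changed, one simple way to detect the post‑hoc image tampering described in section 3:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(archive_dir: Path, manifest_path: Path) -> None:
    """Record a hash for every file at ingest time (hypothetical archive layout)."""
    manifest = {str(p): sha256_of(p)
                for p in sorted(archive_dir.rglob("*")) if p.is_file()}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: Path) -> list[str]:
    """Return paths whose current contents no longer match the recorded hash."""
    manifest = json.loads(manifest_path.read_text())
    return [p for p, digest in manifest.items()
            if not Path(p).is_file() or sha256_of(Path(p)) != digest]

if __name__ == "__main__":
    # Illustrative paths; a real PACS/VNA would integrate this at the archive layer.
    build_manifest(Path("imaging_archive"), Path("imaging_manifest.json"))
    tampered = verify_manifest(Path("imaging_manifest.json"))
    print("Flagged files:", tampered or "none")
```

A hash manifest only detects modification after ingest; it cannot catch images falsified before archiving, and the manifest itself would need to be signed or stored separately, so in practice it would complement provenance, authentication and staff‑verification controls rather than replace them.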