Has Dr. Jennifer Ashton publicly commented on AI deepfake misuse of her image for health product ads?
Executive summary
Extensive reporting shows that deepfake videos and AI-altered ads have used Dr. Jennifer Ashton's likeness to falsely endorse weight-loss supplements and other products [1][2]. However, the sources provided contain no verifiable record of Dr. Ashton herself publicly commenting on that misuse; the reporting instead documents the scams themselves and broader expert and victim responses [1][2][3].
1. What the reporting documents about misuse of Dr. Ashton’s image
Multiple investigative write-ups and scam-watch pages report that Dr. Jennifer Ashton's image and voice have been misappropriated in AI-generated ads for products such as "Burn Slim," "LipoLess" and other gummy or gelatin-based weight-loss remedies, and they explicitly state that she never endorsed those items [1][2]. These accounts describe the modus operandi: clips pulled from TV appearances (for example, Good Morning America) are digitally altered with AI to fabricate endorsements. They frame the misuse as part of a wider trend in which cheap, effective deepfake tools enable fraudulent advertising [1][2][3].
2. What the sources say about victims' experiences and other public denouncements
Reporting on deepfake medical ads dwells on victims' experiences and on other public figures who have pushed back. Investigations into similar scams show that people were defrauded after trusting doctored endorsements, and high-profile clinicians and media personalities (for instance, Gayle King) have publicly denounced deepfake endorsements when they occurred, according to CBS and related outlets [4]. Broad coverage of these schemes notes that affected doctors have had to alert patients and the public about fake ads, and that platform removals have sometimes followed journalistic exposure [5][4].
3. The broader journalism and research context that frames the Ashton case
Experts and outlets covering the phenomenon emphasize that deepfakes of doctors are rising on social media, that cheap tools make celebrity and expert impersonation easier, and that people continue to be persuaded by fabricated videos even after warnings—findings documented across fact checks, academic experiments and investigative pieces [3][6][7]. Platforms’ takedown actions and government interest in regulating deepfakes are also part of the record, underscoring systemic problems beyond any single impersonation [8][9].
4. Direct evidence of Dr. Ashton’s public comment — what’s present and what’s missing
None of the supplied sources includes a direct quote, public statement, social-media post, press release or interview in which Dr. Jennifer Ashton addresses or condemns the specific AI-driven misuse of her image for health product ads. The pieces reviewed either assert that she never endorsed the products [1][2] or cite her as an example of a clinician whose likeness has been weaponized [9], but they do not document her own public comment. Because the reporting supplied contains no primary source attributed to Dr. Ashton herself, there is no verifiable citation in this corpus showing that she has publicly commented on these particular deepfake ads [1][2][9].
5. Conclusion, caveats and recommended next steps for confirmation
Based on the available reporting, it is accurate to say that her likeness has been misused and that outlets and fact-checkers have warned consumers [1][2][3]. It would be incorrect, however, to assert from these sources that Dr. Jennifer Ashton has publicly commented on that misuse, because no source provided contains such a statement; confirming whether she has spoken publicly requires a direct primary source (such as a verified social post, press release, or interview) that is not present in the supplied materials [1][2][9]. For definitive confirmation, seek Dr. Ashton's verified social accounts, statements from her employer or representatives, or mainstream outlets that quote her directly about these deepfake ads. Absent that primary evidence, the responsible conclusion is that reported misuse exists, but a documented public comment from Dr. Ashton has not been provided in these sources [1][2][3].