How have AI‑generated images and deepfakes been used in Epstein‑related misinformation campaigns?

Checked on February 6, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

AI‑generated images and deepfakes surfaced almost immediately after the U.S. Department of Justice released millions of pages of Jeffrey Epstein‑related files, with manipulated photos circulated to falsely link politicians and public figures to Epstein [1] [2]. Researchers and fact‑checkers quickly identified telltale signs of synthetic media, including invisible watermarks like Google’s SynthID, but studies show such content nonetheless spreads widely and can persist in public belief despite warnings [3] [4] [5].

1. The catalyst: a document dump and an information vacuum

The Justice Department’s release of more than three million documents, photos and videos related to the Epstein investigation created a surge of public interest and an information vacuum ripe for mis- and disinformation. Social media accounts exploited that vacuum by posting AI‑fabricated images that appeared to connect Epstein to prominent people [1] [2].

2. The techniques: quick AI fabrication and detectable traces

Disinformation researchers demonstrated that leading image generators can fabricate convincing photos of Epstein with world leaders "in seconds," and in several cases tools like Google’s Gemini left SynthID watermarks that fact‑checkers used to flag AI‑made imagery [3] [4]. Fact‑checking outlets and AFP analysts reported that many circulated photos bore strong indicators of being AI‑generated rather than authentic archival photographs [6] [7].
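To make the metadata side of this concrete, here is a minimal Python sketch of the kind of first‑pass check that precedes deeper forensics. SynthID itself is an invisible, pixel‑level watermark that only Google’s own detector can read, so this sketch instead inspects the visible metadata layers (EXIF tags and any embedded XMP packet) where generator fingerprints sometimes survive. The file name and the marker watch‑list are hypothetical, and this illustrates the general approach, not the specific workflow of the fact‑checkers cited above.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Hypothetical watch-list of strings that generative tools are known to
# write into image metadata; not exhaustive and not authoritative.
AI_MARKERS = ("c2pa", "made with google ai", "dall-e", "midjourney",
              "stable diffusion", "firefly")

def metadata_hints(path: str) -> list[str]:
    """Return human-readable hints that an image may be AI-generated."""
    img = Image.open(path)
    hints = []

    # 1. EXIF: tags such as Software (0x0131) often name the producing
    #    tool when metadata has not been stripped.
    for tag_id, value in img.getexif().items():
        tag_name = TAGS.get(tag_id, hex(tag_id))
        if any(marker in str(value).lower() for marker in AI_MARKERS):
            hints.append(f"EXIF {tag_name}: {value}")

    # 2. XMP: on JPEGs, Pillow exposes the raw XMP packet via img.info.
    xmp = img.info.get("xmp", b"")
    if isinstance(xmp, bytes):
        xmp = xmp.decode("utf-8", errors="ignore")
    hints.extend(f"XMP packet mentions {marker!r}" for marker in AI_MARKERS
                 if marker in xmp.lower())

    return hints

if __name__ == "__main__":
    # "suspect_photo.jpg" is a placeholder path, not a real artifact.
    for hint in metadata_hints("suspect_photo.jpg"):
        print(hint)
```

A clean result here proves nothing: most social platforms strip metadata on upload, which is why fact‑checkers pair checks like this with invisible‑watermark detectors such as SynthID and with reverse image search.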

3. Who was targeted and what was posted

Targets included a cross‑section of figures: New York Mayor Zohran Mamdani (images purporting to show him as a child with Epstein and filmmaker Mira Nair), UK political figures such as Nigel Farage, and other unnamed high‑profile politicians who were falsely pictured with Epstein in social posts that went viral [1] [8] [9] [2].

4. Rapid amplification and partisan friction

Once created, the AI images spread rapidly across platforms and were sometimes reposted by political groups before being verified; for example, a Wrexham Labour Party account deleted an image linking Nigel Farage to Epstein after learning it was AI‑generated, prompting public rebuttals from Farage and criticism of the poster’s judgment [9]. The speed of sharing amplified the harm and created partisan incentives to weaponize visual fabrications for political gain, a dynamic flagged by multiple outlets [1] [2].

5. Detection, labels and the limits of transparency

Tools and fact‑checkers can often establish synthetic origin using metadata, watermarks, and AI‑detection techniques, but scholarship warns that transparency alone is insufficient: experiments find that people continue to rely on deepfake content even when told it is fake, and policy debates about labeling and regulation remain active [3] [5] [10]. Policymakers and platforms have been pushed toward labeling rules and greater controls, but academic and regulatory sources emphasize that both technical and social challenges remain [10] [11].
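Where a suspect image purports to be an archival photograph, one common verification step (offered here as a general illustration, not the specific method of any outlet cited above) is perceptual hashing: compare the circulating copy against a known‑authentic original to test whether it is merely a recompressed or cropped duplicate, or something visually new. The sketch below assumes the third‑party imagehash package; the file names and the distance threshold are hypothetical.

```python
from PIL import Image
import imagehash  # third-party: pip install imagehash

# Both file names are placeholders for this illustration.
archival = imagehash.phash(Image.open("archival_original.jpg"))  # trusted copy
viral = imagehash.phash(Image.open("viral_post.jpg"))            # circulating copy

# Subtracting two 64-bit perceptual hashes yields their Hamming distance:
# 0 means visually identical, a small distance suggests a crop or
# recompression of the same photo, and a large one a different image.
distance = archival - viral
print(f"Hamming distance: {distance}")
if distance <= 8:  # hypothetical threshold; tune for the use case
    print("Likely the same underlying photograph (minor edits at most).")
else:
    print("Visually distinct: not a copy of the archival original.")
```

Because AI fabrications are typically novel compositions rather than edits of real archival photos, a large distance from every known authentic image is a cue for deeper forensic checks, not proof of fabrication by itself.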

6. The broader pattern and ongoing risks

This episode fits a documented pattern in which generative AI supercharges "cheapfakes" and deepfakes that distort the visual record, feeding conspiracy theories and public confusion; researchers and disinformation watchdogs have repeatedly found synthetic images used to falsely associate public figures with salacious contexts around Epstein’s network [11] [7]. Although some synthetic content carries detectable markers, the combination of rapid spread, political utility, and human susceptibility to visual misinformation sustains the risk that such fabrications will shape public perceptions beyond the facts confirmed by the released documents [5] [3].

7. What reporting does not (yet) tell us

Available reporting documents specific instances, detection markers and platform responses, but it provides neither a complete inventory of the manipulated images generated from the Epstein files nor definitive metrics on how long belief effects persist across different audiences; those gaps preclude firm conclusions about long‑term political impact and about who orchestrated the most damaging campaigns [1] [4].

Want to dive deeper?
How do SynthID and other AI watermarks work, and how reliable are they in court?
What legal and platform remedies have been proposed or enacted to curb political deepfakes since 2024?
Which fact‑checking organizations tracked Epstein‑related synthetic images and what methods did they use?