

Fact check: Has Donald Trump ever been known to use AI-generated content in his campaigns?

Checked on October 21, 2025

Executive Summary

Between late 2024 and 2025, AI-generated content, including portraits, doctored videos, and since-deleted posts promoting conspiratorial themes, was repeatedly posted or shared on platforms associated with Donald Trump and his orbit. Reporting documents both direct posts from his accounts and content produced by supporters, but the evidence does not establish centralized, campaign-wide directives [1] [2] [3]. Analysts and outlets describe a pattern of use and amplification that includes deleted AI videos and follower-created imagery, while claims about AI-infrastructure partnerships remain inconclusive as to direct campaign content production [1] [4].

1. A pattern emerges: Deleted AI videos and repeated posts draw attention

Reporting from October 2025 documents at least two notable instances in which AI-generated videos were posted to accounts associated with Trump and later deleted, including a video promoting the “Medbed” conspiracy theory and other synthetic clips depicting political figures or optimistic future scenes [1]. These incidents show active posting followed by rapid removal, suggesting either experimentation with generative media or retraction after backlash. That the same deleted content is documented across independent reports points to a recurring operational pattern: synthetic media is used and then pulled, raising questions about who created it, who approved it, and how platform norms shape what stays up [1].

2. Portraits and imagery: Team posts versus supporter creations

Multiple September and December 2025 pieces detail AI-generated portraits and fabricated photos of Trump, some posted by his team and others created by supporters to target specific constituencies, such as images depicting Trump with Black voters [2] [3]. Coverage points to mixed origins: some images appear to have been distributed through official channels, while others came from grassroots supporter networks. This split complicates attribution. Supporter-created AI content reflects a decentralized ecosystem that can benefit a campaign without formal campaign production, while team-posted portraits suggest at least tacit acceptance of AI imagery by official communicators [2] [3].

3. Platform of choice: Truth Social as a repeated node of dissemination

Reports identify Truth Social as a recurring platform where AI fakes surfaced, including manipulated video clips shared on the account associated with Trump, giving posted content a direct path to wide visibility [5] [1]. The concentration of synthetic posts on a favored platform underscores how owned or allied social spaces enable rapid publication outside mainstream moderation flows. This pattern offers operational advantages to actors deploying AI content, namely faster distribution and lower friction, while also exposing those platforms to criticism for amplifying misinformation and manipulated media [5] [1].

4. Supporters’ role: Organized amplification without clear campaign sanction

December 2025 reporting shows supporters independently generating and spreading AI photos—for example, fake images of Trump with Black voters—to influence perceptions, with no conclusive public evidence that the official campaign commissioned those specific images [3]. This indicates a porous boundary between campaign communications and supporter activism: influence operations can flourish among backers and still affect candidate messaging and voter perceptions. The distinction matters legally and ethically because it separates formal campaign actions from grassroots engagement, yet both contribute to public information environments and can be strategically beneficial to a candidacy even without direct campaign oversight [3].

5. Partnerships and capacity claims: xAI ties raise but do not answer questions

Reports note a partnership between the Trump administration and Elon Musk’s xAI that creates potential pathways for AI model access, but available analyses do not document explicit use of xAI systems to generate campaign content, leaving the practical implications ambiguous [4]. Partnerships create infrastructure and potential capability; they do not automatically prove content production or editorial control. Coverage points to plausible future risks and capabilities but stops short of evidence that xAI output was used in the documented instances of AI-generated posts tied to Trump accounts, so the partnership context remains an important piece of background rather than conclusive proof of usage [4].

6. Attribution challenges: deletions and mixed provenance muddy the record

The sources show conflicting provenance and rapid deletions, making attribution difficult: some synthetic posts came from official accounts, others from supporters, and deletions obscure trails that investigators rely on [1] [2] [5]. This fragmentation complicates accountability: when content is pulled quickly, forensic recovery and chain-of-custody analyses become harder, reducing public ability to determine who commissioned or approved material. The mixed origin stories also mean observers must treat each case separately, assessing metadata, platform logs, and stakeholder statements rather than assuming a single coordinated source across all incidents [1] [5].

7. What’s missing and why it matters: transparency, timestamps, and internal records

None of the available analyses provide internal campaign memos, payment records, or explicit admissions proving a centralized campaign policy to use AI-generated content; reporting relies on public posts, deletions, and supporter activity, so key documentary evidence is absent [1] [3]. That absence matters because it leaves open alternative explanations: ad-hoc posting by staffers, volunteer-driven content, third-party vendors, or platform manipulation. For a definitive conclusion, investigators would need internal documentation, vendor invoices, or direct testimony linking the campaign’s decision-makers to specific synthetic content decisions—material not present in the cited coverage [1] [3].

8. Bottom line: pattern of use but limited proof of campaign-controlled production

Taken together, the sources establish a clear pattern of AI-generated material appearing in and around Trump’s communications ecosystem, from official account postings to widespread supporter-created imagery, yet they fall short of proving that every instance was centrally produced or approved by campaign leadership [2] [1] [3]. The evidence is best read as showing active engagement with generative tools across a distributed network of actors and platforms, not as proof of a unified campaign strategy; confirming centralized commissioning or policy-driven use would require further investigative records.

Want to dive deeper?
What role did AI-generated content play in the 2024 presidential election?
Has Donald Trump ever publicly commented on the use of AI in political campaigns?
How do fact-checking organizations identify AI-generated content in political advertising?
What are the potential risks of using AI-generated content in political campaigns?
Have any other prominent politicians used AI-generated content in their campaigns?