
Fact check: What is the source of the AI-generated video of Trump wearing a crown and flying a jet?

Checked on October 21, 2025

Executive Summary

The AI-generated video depicting Donald Trump wearing a crown and piloting a jet circulated widely after Trump posted the clip on Truth Social, while independent attribution points to an X user named Xerias_X, who uploaded an earlier, more graphic variant; the piece is set to Kenny Loggins’ “Danger Zone,” which the artist has publicly demanded be removed because the use was unauthorized [1] [2] [3]. Reporting differs on authorship details: some outlets emphasize Trump’s role in amplifying the deepfake, while others identify the external X account as the primary originator, producing conflicting narratives about source and intent [4] [5] [1].

1. Who Pushed the Video into Public View? A President’s Share Amplified the Clip

Multiple reports agree that Donald Trump amplified the AI-generated video by posting it on Truth Social, making his account the most prominent distribution point and driving widespread attention [1] [4] [5]. Coverage dated October 19–20, 2025 places the share squarely in the news cycle, with outlets noting the post’s visual motif of a crown and a jet marked “King Trump” and its provocative soundtrack choice; the president’s repost materially changed how quickly and broadly the content spread [1] [2]. The act of sharing by a sitting president turned the piece from a niche internet artifact into a national conversation, raising questions about responsibility and amplification in high-profile social media behavior [1].

2. Who Actually Created the Clip? An X Account Claims Credit

Investigations and reporting identify an X user called Xerias_X as the originator of a variant of the clip that included more explicit imagery (described as “dumping a brown substance on protesters”) and predated Trump’s share, suggesting the creative work began outside Truth Social before being reposted by the president [2]. Some outlets, however, stop short of definitive attribution and frame Trump’s Truth Social post as the immediate source of broad circulation without naming a creator, reflecting either incomplete provenance or editorial caution [4] [5]. The differing emphases reflect the evidence gaps typical of rapidly circulating AI-generated media, where platform trails and reposts complicate clear chain-of-custody claims [4].

3. Artistic Theft or Fair Use? The Music Rights Dispute Raises Legal Questions

Kenny Loggins, the performer of “Danger Zone,” publicly demanded removal of his song from the AI-generated video, stating the use was unauthorized and that he would have denied permission, thereby elevating the incident from a meme to a potential copyright dispute [3]. This development prompted media to frame the incident not only as a matter of political taste but also as a legal contention over intellectual property and the unauthorized use of recorded music in AI-manipulated content [1]. The dispute underscores broader unsettled territory around rights enforcement when short-form AI media repurposes commercial recordings without clearance, and it spotlights the leverage musicians hold when their identities and catalog control are contested in public [3].

4. Tone and Reaction: Outrage, Satire, and Partisan Lines

Reaction to the video split along predictably partisan and cultural lines: critics called it juvenile and inappropriate for someone in power, while supporters framed it as satirical pushback against anti-Trump demonstrators, showing how the same artifact can be cast as either objectionable misconduct or political theater [6] [1]. Reporting on the backlash emphasized ethical concerns about a leader promoting AI-manipulated violent imagery or mockery of protesters, with critics warning about the normalization of dehumanizing content and supporters defending it as rhetorical escalation. The media narratives show how deepfakes become Rorschach tests for viewers’ preexisting political and normative stances [6].

5. Platform Trails and the Difficulty of Attribution in the AI Era

Journalistic accounts illustrate how platform provenance complicates who gets called the “source.” Trump’s Truth Social post functioned as the primary distribution vector even though an earlier version was uploaded to X; some outlets credit the president as the de facto source because of his reach, while others point to the original X upload as the origin [1] [2] [4]. This divergence highlights a distinction that is both technical and normative: the originator who created the media versus the amplifier who transformed it into mainstream news. Both roles matter: creators determine content and intent; amplifiers determine impact and public salience [2].

6. What Reporting Omitted or Left Unclear—and Why It Matters

Coverage consistently reported on the post, the leads about its origin, and the music dispute, yet key forensic details (the specific AI tools used, the full creation timeline, and verification by outlets of Xerias_X’s authorship) remain underreported or absent, limiting definitive chain-of-custody conclusions [4] [5] [2]. These omissions affect accountability pathways: criminal or civil claims, platform moderation responses, and public understanding of machine-generated political content all depend on transparent provenance. The lack of technical attribution is not merely a reporting gap; it shapes the legal and regulatory discourse around deepfakes and the conduct of public officeholders [1].

7. Big Picture: Precedent, Policy, and Next Steps

This incident adds to an emerging pattern of political figures deploying or sharing AI-generated media, raising persistent questions about copyright enforcement, platform policies, and political norms that multiple outlets trace in their reporting [1] [6]. The Loggins takedown request signals one immediate enforcement pathway, while debates about platform responsibility and potential legislative remedies remain broader, unresolved policy fronts. As journalists and courts parse provenance and intent, this case will likely be cited in future discussions about whether and how to regulate or sanction the political use of synthetic media [3] [1].

Want to dive deeper?
How does deepfake technology create realistic AI-generated videos?
What are the potential consequences of AI-generated videos in politics?
Can AI-generated videos like the Trump crown jet clip be used as evidence in court?
Which social media platforms have policies against AI-generated deepfake content?
How can viewers verify the authenticity of online videos, especially those featuring public figures?