
Fact check: How did Bushart's meme violate terms of service on social media platforms?

Checked on October 12, 2025

Executive Summary

Authorities and multiple news reports say Larry Bushart’s meme violated social-media terms of service by amounting to a threatening reference to a school shooting; platforms removed the post, and it contributed to his arrest under a state law prohibiting threats against schools. Reporting consistently indicates the post evoked a recent Perry High School shooting and was judged to create fear of mass violence, though coverage varies on motive, the details of platform action, and broader free‑speech implications [1] [2].

1. What reporters say the meme actually did—and why platforms objected

Contemporary accounts describe Bushart’s post as a picture of former President Donald Trump with the caption “Let’s get over it,” explicitly tied to a recent Perry High School shooting and shared in a public space where it alarmed residents. Journalists and investigators reported platforms and moderators treated the post as a potential threat of mass violence because it referenced the school incident in a manner authorities said would reasonably cause fear, meet the statutory thresholds for threatening conduct, and violate community safety provisions in typical social-media terms of service [1] [2].

2. How law enforcement framed the post—and why it mattered to platforms

Police filings and reports indicate investigators believed Bushart was aware his post would create hysteria and aimed to provoke community alarm, which is the factual framing that prompted criminal charges under Tennessee law making threats against schools a felony. Platforms routinely prohibit content that incites violence or credible threats; when authorities present content as a criminal threat, platforms commonly remove it or restrict the account to comply with safety policies and legal obligations. That alignment of law-enforcement assessment and platform safety rules explains the takedown and enforcement response [2] [1].

3. Differences and gaps across outlets on motive and context

Coverage diverges on motive and surrounding facts: some pieces emphasize the post’s explicit reference to the shooting and characterize it as intentional terrorizing of the community, while other reporting raises questions about context, such as the poster’s mindset and whether the language met legal definitions of a threat. Several outlets reported arrests and charges but offered limited detail about the post’s full text, prior warnings, or the poster’s account history—an omission that makes it harder to independently assess whether platform removal was proportionate or automatic [1] [2].

4. What mainstream terms-of-service categories the post likely violated

Based on the reporting, the meme likely ran afoul of standard community guidelines that ban threats, incitement to violence, and content that creates imminent safety risks. Those categories appear in most major platforms’ policies and apply whether a post is literal, symbolic, or intended to intimidate; platforms treat posts tied to real‑world violence as higher risk. The articles show platforms acted consistently with those categories, though none of the pieces provide platform removal notices or exact policy citations, leaving a gap in documentary proof of which clause was invoked [1].

5. Missing information and competing considerations reporters didn’t fully settle

Key omissions across accounts include the exact platform enforcement notices, any internal moderation rationale, whether the post was age‑gated or reported by users, and whether there were prior warnings or a pattern of posts. The debate over free speech boundaries and whether satire, hyperbole, or political expression was at play appears in related coverage about other controversial posts, but specific defenses or context from Bushart’s side are limited in available reports, making full evaluation of proportionality and intent difficult [3] [4].

6. Timeline and source reliability—what the dates tell us

All cited accounts are from late September 2025, with arrest and charging reports concentrated around September 22–23, 2025, suggesting rapid law-enforcement action and contemporaneous media attention. Multiple independent local and national outlets reported the arrest and description of the meme as threatening within days, which strengthens the factual claim that the post triggered both platform and legal responses quickly. Still, the rapid cycle increases the chance that subsequent clarifications, body‑camera evidence, or platform notices could modify the public record later [1] [2].

7. Bottom line: why platforms removed the post and what’s still unsettled

The available reporting shows platforms removed Bushart’s meme because it was judged to be a post that evoked a real‑world school shooting and functioned as a threat or incitement, contravening safety provisions in typical terms of service and prompting criminal investigation under state law. What remains unsettled is the full context of intent, the exact policy language used by platforms in their takedown, and whether the post might be defended as political hyperbole—details not documented in the reports reviewed here [1] [2] [3].

Sources: reporting summarized above with publication dates clustered September 22–24, 2025 [1] [2] [3].
