How have Substack and similar platforms been used to publish unvetted allegations in other high‑profile cases?

Checked on January 17, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Substack and similar direct-publishing platforms have been used to publish explosive, often unvetted allegations that spread quickly because the tools favor rapid posting and subscriber amplification rather than institutional vetting [1] [2]. That combination has produced a mixed record: some high-quality investigative work flourishes, while other posts—including unverified audio testimonies and fringe viewpoints—have circulated widely without the checks typical of legacy newsrooms [1] [3].

1. How the platform mechanics speed accusation over verification

Substack’s model gives individual writers immediate reach to paying subscribers and a public homepage, which compresses discovery and distribution timelines compared with traditional outlets that rely on editorial gates and legal vetting [1] [2]. The very features that help independent journalists—direct monetization and homepage visibility—also lower friction for posting sensational material, because there is no newsroom workflow that mandates corroboration or slow, adversarial legal review before publication [1].

2. Recent examples: unverified allegations that found oxygen on Substack

A prominent instance saw audio testimony attributed to Sasha (Sascha) Riley—making serious abuse allegations linked to the Epstein network and naming public figures—go viral after being circulated via Substack and social platforms while remaining unconfirmed by courts or mainstream investigations [3]. Separately, widely read Substack authors such as Alex Berenson have published contentious, high-impact claims to large subscriber bases, illustrating how single-author posts can become de facto public narratives without the same evidentiary presentation expected in indictments or vetted journalism [4] [3].

3. Editorial norms, moderation limits, and legal scaffolding

Substack provides some legal support to creators through programs like Substack Defender, which can review stories and cover certain legal fees, but its moderation policies have been characterized as “lightweight” and centered on a short list of banned conduct rather than rigorous pre-publication fact-checking [2]. Critics warn that this loose approach allows unvetted information and fringe ideologies to flourish on the platform, a trade-off Substack’s leadership has defended as protecting expression from heavy-handed censorship [1] [2].

4. The amplification problem: why unvetted claims stick

When a charged allegation appears on Substack, social sharing and aggregation into other newsletters or Threads-like feeds can rapidly recirculate it to new audiences, often stripping context about its verification status and turning an allegation into de facto public knowledge; independent investigations have repeatedly flagged that dubious experts and unverified accounts are widely quoted in national media when platform distribution outpaces verification [5] [1]. The result can be reputational harm to named individuals and erosion of public trust in both platforms and journalism when later vetting fails to catch up [5].

5. The other side: why defenders argue Substack matters

Supporters point out that Substack has enabled independent, long-form investigation and lets journalists pursue stories outside corporate constraints, producing reporting that legacy outlets might not fund or publish [1]. Platform proponents also argue that decentralization prevents gatekeepers from silencing unpopular but true claims, and that legal support programs are one concrete attempt to balance risk for writers and sources [1] [2]. These arguments reflect differing priorities: speed and editorial freedom versus centralized safeguards and standards.

6. What this pattern means for readers, sources, and institutions

The lesson across the reporting is straightforward: platforms that reduce editorial friction will surface both valuable investigations and risky, unvetted allegations. Readers, hosts, and downstream journalists must compensate by applying traditional verification practices to platform-originated claims, especially when those claims name powerful people or recount trauma [1] [3] [5]. This assessment is limited to the cited reporting; instances beyond the supplied sources cannot be adjudicated here.

Want to dive deeper?
How have legacy newsrooms changed sourcing practices in response to unvetted claims published on platforms like Substack?
What legal responsibilities do platforms have when paid newsletters amplify false allegations?
Which verified investigative stories originated on Substack and how were they vetted before publication?