
How do fact-checking websites like factually.co select the claims they investigate?

Checked on November 4, 2025
Disclaimer: Factually can make mistakes. Please verify important info or breaking news.

Executive Summary

Fact‑checking outlets pick claims to investigate using a mix of reach, verifiability, potential harm, and organizational focus, rather than a single universal rule. Practices vary: some teams prioritize trending or influential claims and reader tips, others emphasize political significance or strict answerability standards; several formal guides (Gigafact, IFCN signatories) and platforms (PolitiFact, FactCheck.org, Washington Post Fact Checker) codify these criteria [1] [2] [3] [4].

1. Why a Claim Gets Noticed: Virality, Influence, and Harm — The Practical Gatekeepers

Fact‑checkers routinely prioritize statements that are trending, have high influence, or pose clear potential harm, because limited resources demand focusing where impact is greatest. Research reviewing 3,154 verification articles across 23 organizations found that monitoring social media and newsrooms for viral claims is common and that online rumors often require forensic methods, while public‑figure statements prompt expert arbitration [4]. Gigafact’s guidelines explicitly instruct partners to prioritize claims that are trending, unsupported, within a newsroom’s domain, and answerable with a clear yes/no using public sources, underscoring the role of verifiability and topicality in selection [2]. This operational triage—virality + verifiability + harm—explains why many fact checks target fast‑moving social posts, political ads, and widely shared videos rather than obscure technical errors.
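The triage described above can be pictured as a simple filter-and-rank step. The sketch below is purely illustrative — the `Claim` fields, the 0–3 harm scale, and the `triage` function are hypothetical constructs for this article, not the actual workflow of any outlet named here:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    is_trending: bool    # circulating widely on social media or in news
    is_verifiable: bool  # answerable with public sources (e.g. a clear yes/no)
    potential_harm: int  # editor-assigned 0-3 scale of real-world harm
    in_beat: bool        # falls within the newsroom's domain

def triage(claims: list[Claim]) -> list[Claim]:
    """Keep only claims worth a fact check, highest potential harm first."""
    eligible = [
        c for c in claims
        if c.is_verifiable and c.in_beat
        and (c.is_trending or c.potential_harm > 0)
    ]
    return sorted(eligible, key=lambda c: c.potential_harm, reverse=True)

queue = triage([
    Claim("Viral video shows X", True, True, 2, True),
    Claim("Obscure technical nitpick", False, True, 0, True),
    Claim("Unverifiable prediction", True, False, 3, True),
])
print([c.text for c in queue])  # only the viral, verifiable claim survives
```

Note how the filter mirrors the tradeoff in the text: the unverifiable-but-harmful prediction and the verifiable-but-obscure nitpick both drop out, leaving the trending, answerable claim.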

2. Reader Tiplines and Editorial Sweeps — How Suggestions Meet Strategy

Many outlets mix active monitoring with audience tips: reader submissions regularly surface items editors would otherwise miss. PolitiFact and The Washington Post Fact Checker both report heavy reliance on reader suggestions alongside sweeps of news, political ads, and social platforms, while FactCheck.org focuses its limited bandwidth on major political actors [1]. Gigafact likewise encourages running a tipline to gather claims from varied media; its model stresses sourcing claims across platforms to capture what is actually circulating [2]. This hybrid model—algorithmic or desk sweeps plus community signals—creates a feedback loop in which public interest helps set priorities, but editorial judgment filters for significance and answerability before investigation proceeds.
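The hybrid sourcing model amounts to merging two streams — tipline submissions and desk-sweep findings — into one deduplicated queue before editorial review. A minimal sketch, with hypothetical function and parameter names chosen for illustration only:

```python
def merge_sources(tipline: list[str], sweeps: list[str]) -> list[str]:
    """Combine reader tips with desk-sweep findings, dropping duplicate
    claims (compared case-insensitively) while preserving order:
    tips first, then sweeps."""
    seen: set[str] = set()
    merged: list[str] = []
    for claim in tipline + sweeps:
        key = claim.strip().lower()  # crude normalization for duplicates
        if key not in seen:
            seen.add(key)
            merged.append(claim)
    return merged
```

In a real newsroom the deduplication step would be far fuzzier than string matching, since the same claim circulates in many wordings; the point here is only the shape of the pipeline, not its matching logic.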

3. Organizational Mission and Specialization Shape Choices — Not All Claims Are Equal

Different fact‑checking outlets bring distinct missions and beats that shape selections: some are explicitly political, others focus on public health or regional issues, and still others primarily offer pre‑publication checks. FactCheck.org concentrates on national political players and tends to target demonstrably false assertions, while PolitiFact and The Washington Post apply their own rating frameworks to measure truthfulness and importance [1]. Meanwhile, Factual centers on pre‑publication verification for nonfiction producers and vets fact‑checkers for subject expertise, which implies a selection bias toward claims appearing in editorial drafts rather than viral social posts [5] [6]. These institutional priorities mean the same circulating claim might be investigated by one outlet and ignored by another due to scope, expertise, and audience.

4. Standards and Codes That Force Consistency — IFCN and Gigafact Rules

Signatories to the International Fact‑Checking Network (IFCN) and networks like Gigafact provide formal standards that constrain selection methods, requiring published methodologies and nonpartisan approaches. The IFCN Code mandates signatories publish how they select and research claims and commit to equivalent treatment across political actors, reducing cherry‑picking risk [3]. Gigafact’s January 2025 guidelines advise focusing on claims that are answerable succinctly, tied to the publisher’s region or beat, and backed by transparent sourcing and correction practices [2]. These frameworks inject accountability and comparability into selection practices, though they still allow editorial discretion on what counts as “important” or “answerable.”

5. Tensions and Tradeoffs: Speed vs. Depth, Forensics vs. Reach

Fact‑checking teams balance speed and depth: viral rumors often require quick forensic analysis and image/video verification, while claims from public figures invite deeper expert consultation and direct outreach to claimants [4]. The comparative study shows organizations vary by country and platform in tactic mix—some emphasize social monitoring, others traditional reporting—reflecting resource and skill differences [4]. Gigafact’s preference for claims answerable in a short brief pushes toward concise, high‑impact checks, but that can exclude complex or technical falsehoods that resist a simple yes/no resolution [2]. These tradeoffs mean selection is partly a function of capacity: what an outlet can investigate well often determines what it chooses to tackle.

6. Bottom Line: Transparent Rules, Editorial Judgment, and Public Pressure

In practice, claim selection is a synthesis of transparent criteria—verifiability, influence, harm, and remit—and editorial judgment shaped by organizational mission and resource constraints. Networks and codes standardize expectations, while actual choices are driven by where a claim is circulating and whether it can be resolved with public evidence; reader tips and newsroom sweeps supply the raw material [1] [4] [2] [3]. Users should therefore view individual fact‑check portfolios as the product of both principled frameworks and pragmatic choices about what an outlet is best equipped to verify.
