Fact check: Where are the fact-checkers getting their facts?
Executive Summary
Fact-checkers obtain facts through systematic, documented methodologies that include claim selection, documentary research, expert consultation, and transparency about procedures; multiple independent audits find high agreement among major fact-checking organizations on matched claims. Critics point to differences in claim selection, framing, and priorities rather than widespread factual disagreement; standards and networks such as the International Fact-Checking Network (IFCN) and regional codes aim to increase consistency and transparency across the field [1] [2] [3] [4]. The empirical literature shows both convergence in verdicts and diversity in techniques, supporting the conclusion that fact-checkers largely share factual bases while varying in method and emphasis [3] [5] [6] [7].
1. Why the apparent disagreement? Look at selection, not truth
Apparent disputes about “where fact-checkers get their facts” often reflect differences in claim selection and scope rather than contradictory evidence. Studies comparing multiple organizations show that when the same claim is evaluated, verdicts agree overwhelmingly: one large comparison found agreement on nearly all matched claims after adjusting for minor rating differences, indicating that fact-checkers rely on common evidence and standards when assessing identical statements [3]. However, organizations choose what to fact-check using different relevance filters (some prioritize high-profile political claims, others focus on viral social-media falsehoods or policy details), so public perception of disagreement arises because audiences are rarely comparing the same item across outlets. This selection variance explains why critiques about bias often point to omission or emphasis rather than factual error.
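To see what “agreement on matched claims” means arithmetically, here is a minimal Python sketch. The five-point rating scale and the verdicts are made-up illustrations, not data from the cited study; the one-step tolerance stands in for the “adjusting for minor rating differences” described above.

```python
# Sketch: agreement between two outlets on the same matched claims,
# counting adjacent ratings (e.g. "false" vs "mostly-false") as agreement.
# The rating scale and the claims below are hypothetical illustrations.

SCALE = ["false", "mostly-false", "half-true", "mostly-true", "true"]

def agree(r1: str, r2: str, tolerance: int = 1) -> bool:
    """Treat verdicts within `tolerance` steps on the scale as agreement."""
    return abs(SCALE.index(r1) - SCALE.index(r2)) <= tolerance

# (claim id, outlet A verdict, outlet B verdict) for identical claims
matched = [
    ("claim-001", "false", "mostly-false"),
    ("claim-002", "true", "true"),
    ("claim-003", "half-true", "mostly-true"),
    ("claim-004", "false", "half-true"),
]

hits = sum(agree(a, b) for _, a, b in matched)
print(f"{hits}/{len(matched)} matched claims agree "
      f"({100 * hits / len(matched):.0f}%) after collapsing adjacent ratings")
```

The design point is that the agreement rate depends on how rating scales are aligned across outlets, which is exactly the adjustment comparative studies must make before reporting convergence.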
2. The toolkit: documents, databases, experts—and checklists that enforce rigor
Fact-checking teams compile facts through documentary searches, archival sources, database queries, and expert interviews, guided by formal checklists and published principles. PolitiFact’s checklist and Truth-O-Meter principles emphasize asking sources for their original evidence, searching prior fact checks, using targeted databases and expert consultation, and documenting the reasoning behind ratings; these procedural components commit organizations to reproducible evidence chains [6] [2]. Comparative research catalogs 17 distinct debunking techniques, including tracing origins, replacing false causal narratives, and providing background context, demonstrating that fact-checkers do more than offer verdicts: they construct evidentiary narratives that replace misinformation with documented alternatives [5]. These methods create multiple, cross-checkable touchpoints for arriving at conclusions.
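As a rough illustration of what a reproducible evidence chain might look like in structured form, the sketch below defines a record whose fields mirror the checklist steps just described. The field names and example values are assumptions for illustration, not PolitiFact’s actual schema.

```python
# Sketch: a structured "evidence chain" record mirroring checklist steps
# (original evidence, prior fact checks, databases, experts). All field
# names and values are illustrative, not any organization's real system.
from dataclasses import dataclass, field

@dataclass
class EvidenceChain:
    claim: str
    source_evidence: list = field(default_factory=list)    # evidence requested from the claimant
    prior_checks: list = field(default_factory=list)       # earlier fact checks of the same claim
    database_queries: list = field(default_factory=list)   # e.g. statistical or legislative databases
    expert_interviews: list = field(default_factory=list)  # experts consulted
    reasoning: str = ""                                     # documented rationale for the rating
    rating: str = ""                                        # final verdict on the outlet's scale

chain = EvidenceChain(
    claim="Example: 'Unemployment doubled last year.'",
    database_queries=["official unemployment series (hypothetical query)"],
    reasoning="The series shows a rise of 0.4 points, not a doubling.",
    rating="false",
)
print(chain.rating, "-", chain.reasoning)
```

The point of recording each step explicitly is that a second checker, or an outside auditor, can retrace the same touchpoints and see whether the rating follows from them.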
3. External quality controls: networks, codes, and transparency requirements
Recent institutional developments have increased external oversight and harmonization by requiring signatories to adhere to codes and disclosure norms that formalize how facts are gathered and presented. The International Fact-Checking Network (IFCN) and the European Code of Standards set criteria for nonpartisanship, transparency, and methodology that participating organizations must document, including legal structure, funding, published methodologies, and sample work, thereby making their evidence chains auditable to the public [4] [7] [8]. These frameworks do not eliminate differences in editorial judgment or claim selection, but they make it harder to rely on undisclosed procedures and allow researchers to compare practices systematically. Transparency statements and income disclosures published by network members provide additional accountability for how evidence is sourced and used [8].
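One way to picture how such disclosure requirements become auditable is a simple completeness check over the categories named above. The field names and the rule below are illustrative assumptions, not the IFCN’s actual assessment process.

```python
# Sketch: checking that a transparency disclosure covers the categories
# named in the text. Keys and logic are hypothetical, not IFCN criteria.
REQUIRED_FIELDS = {"legal_structure", "funding_sources", "methodology_url", "sample_work"}

def missing_disclosures(disclosure: dict) -> set:
    """Return the required categories absent or empty in a disclosure."""
    return {f for f in REQUIRED_FIELDS if not disclosure.get(f)}

example = {
    "legal_structure": "nonprofit",
    "funding_sources": ["reader donations", "foundation grant"],
    "methodology_url": "https://example.org/methodology",  # placeholder URL
    "sample_work": [],  # deliberately incomplete
}
print("Missing:", missing_disclosures(example))  # -> {'sample_work'}
```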
4. Where genuine disputes remain: interpretation, emphasis, and political context
True conflicts among fact-checkers are most likely to arise around interpretation and emphasis rather than raw factual data. Studies find that organizations may adopt different debunking tactics or prioritize different background context, producing nuance in ratings; policy complexity, statistical interpretation, and causal claims are fertile ground for disagreement even when underlying data are shared [5] [3]. Political context magnifies these differences: a claim about climate change consensus produces broad agreement, while claims tied to complex fiscal estimates or politically charged narratives about national debt can produce divergent emphases on assumptions and modeling choices. These divergences reflect methodological judgments, not necessarily failures to “get facts.”
5. The bottom line for readers: how to judge fact-checks and the facts behind them
Readers should evaluate fact-checks by inspecting methods, sources, and prior disclosures rather than treating conflicting headlines as evidence of random error. Look for published methodologies and checklists, source citations, links to primary documents, and statements of funding or organizational affiliation; organizations that follow IFCN-style codes and publish transparency documentation allow third parties to verify how evidence was obtained and weighed [2] [4] [8]. When two fact-checks differ, the most informative step is to compare the underlying evidence cited and the interpretive steps—often the reason for divergence will be visible in the sources and the debunking technique used. That practice aligns with empirical findings that fact-checkers largely converge on facts when assessing the same claim, while differences arise from selection and interpretation [3] [6].
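As a closing illustration of that comparison step, the sketch below contrasts the source lists of two hypothetical fact-checks of the same claim; the shared and unique citations are typically where the interpretive divergence shows up. The source names are invented for the example.

```python
# Sketch: when two fact-checks of one claim diverge, compare the primary
# sources each cites. Shared vs. unique sources locate the interpretive gap.
# The source lists here are hypothetical.
check_a = {"CBO budget outlook", "Treasury debt tables", "economist interview A"}
check_b = {"CBO budget outlook", "OMB projections", "economist interview B"}

print("Shared evidence: ", sorted(check_a & check_b))
print("Only in check A:", sorted(check_a - check_b))
print("Only in check B:", sorted(check_b - check_a))
```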