What are the methodological differences between The Washington Post, PolitiFact and AP fact checks of Donald Trump?

Checked on February 6, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Three leading fact‑checking operations take different paths to the same destination. The Washington Post’s Fact Checker compiles an exhaustive, ongoing database of statements and applies a qualitative Pinocchio rubric; PolitiFact uses a rubric‑driven Truth‑O‑Meter with discrete ratings plus trackers such as the Trump‑O‑Meter; and the Associated Press’s fact checks are less well documented in the provided reporting. Comparative studies show that independent organizations differ on sampling, scope and presentation, differences that shape which claims get checked and how readers interpret the results [1] [2] [3].

1. How each shop selects and scales claims: scope versus sample

The Washington Post built a massive corpus: its database of Trump statements includes tens of thousands of entries (30,573 for one period), reflecting an effort to record and label a very large sample of utterances rather than a narrow set of high‑profile claims [1] [3]. PolitiFact, by contrast, conducts targeted, reporter‑driven checks and runs ongoing trackers (the Truth‑O‑Meter and Trump‑O‑Meter) that curate and rate statements judged to be of public significance [4] [5]. Academic cross‑checks warn that these different sampling strategies, the Post’s broad cataloguing versus PolitiFact’s selective rating, produce very different impressions of how often a speaker makes false claims, and that analysts must account for sampling and scaling before comparing totals or percentages across outlets [1] [6].
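To make that warning concrete, here is a minimal, purely hypothetical sketch; the outlet labels and every number below are invented for illustration and are not drawn from the cited studies. It shows how two sampling strategies can yield very different headline “false rates” for the same speaker:

```python
# Purely hypothetical illustration: the same speaker, two sampling strategies.
# All names and numbers are invented for this sketch, not taken from the cited studies.

broad_catalogue = {"claims_examined": 30_000, "rated_false": 9_000}   # log-nearly-everything approach
selective_rater = {"claims_examined": 1_000, "rated_false": 750}      # checks only claims deemed significant

def false_rate(counts: dict) -> float:
    """Share of examined claims that were rated false."""
    return counts["rated_false"] / counts["claims_examined"]

print(f"Broad catalogue:  {false_rate(broad_catalogue):.0%} of logged claims rated false")
print(f"Selective rater:  {false_rate(selective_rater):.0%} of checked claims rated false")
# Output: 30% vs. 75%. Neither figure is wrong, but the denominators measure
# different things, so the two percentages cannot be compared directly.
```

Neither hypothetical figure is wrong on its own terms; the point is that the denominators measure different things, which is why the studies cited above caution against comparing raw totals or percentages across outlets.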

2. The rating frameworks and how they communicate verdicts

PolitiFact expresses findings through an ordinal Truth‑O‑Meter (True to Pants on Fire) that aims for transparent criteria and repeatability; its staff emphasizes independence, transparency and clear writing as core principles and has published guidance on the Truth‑O‑Meter’s logic [7] [8]. The Washington Post’s Fact Checker uses its own rating language, historically the Pinocchio scale, alongside a qualitative standard its editor has described as a “reasonable man” test for weighing intent, context and repetition, and it supplements verdicts with the Post’s larger database to show patterns of repetition over time [9] [1]. Comparative research finds high agreement on many verdicts but notes differences in how granular or categorical each outlet’s scale is, which changes how identical claims can look when placed side by side [10] [1].

3. Research process, sourcing and editorial style

PolitiFact stresses contacting claimants, documenting sources and grounding its pieces in public records and data, a practice reflected in its published methodology and its reporting milestones [7] [11]. The Post’s Fact Checker similarly interviews primary sources and reviews documents, but it has acknowledged using anonymous sources in some fact checks and leans more on long‑form synthesis and its accumulated database to assess repetition and intent [9] [3]. Peer reviewers and methodology studies observe that these editorial choices (how often a checker interviews the speaker, whether it uses anonymous sourcing, and whether it aggregates repeated claims) affect readers’ sense of thoroughness and impartiality [1] [10].

4. Agreement, disagreement and perception of bias

Empirical work shows substantial agreement across major fact‑checkers on matched claims, with only occasional minor rating differences after adjustment, but it also documents systematic differences in which claims each outlet selects for review and how those claims are scaled into totals or percentages [10] [1]. PolitiFact’s own counts show a high proportion of Trump checks landing on the false side—about three‑quarters in sampled analyses—which reflects both editorial emphasis and the Truth‑O‑Meter’s categorical thresholds [8] [2]. Observers and tools like AllSides warn that fact‑checking organizations can still display bias through selection and framing decisions, so transparency about method and scope remains crucial for readers [12].

5. What the coverage gaps mean and where reporting is thin

The supplied sources document Post and PolitiFact methodologies and comparative studies in some depth [1] [10] [9], but the reporting in this packet does not provide a full, primary description of the Associated Press’s internal fact‑checking methodology for Trump specifically, so any definitive statement about AP’s exact rating rubric or selection rules cannot be supported from these documents alone [3]. That gap matters: methodological differences—what a shop counts, how it rates, whether it aggregates repeated claims—drive public perceptions of a politician’s truthfulness, and without comparable transparency across all three organizations, numeric comparisons will be misleading [1] [6].

Want to dive deeper?
How do PolitiFact’s Truth‑O‑Meter and the Post’s Pinocchio scale map to each other in practice?
What academic studies have measured agreement and disagreement across major fact‑checking organizations?
How do fact‑checker selection biases affect public perception of political dishonesty?