
How do historians and fact-checkers measure presidential lying and misinformation?

Checked on November 19, 2025

Executive summary

Fact-checkers and historians measure presidential lying by counting and categorizing public claims, applying documented methodologies (like PolitiFact’s Truth-O-Meter) and compiling longitudinal tallies; specialist outlets such as PolitiFact, FactCheck.org and Reuters publish databases and lists used for comparative analysis [1] [2] [3]. Academic and journalistic studies supplement those counts with techniques like sampling, inter-rater reliability checks (including experiments with AI verifiers), and contextual analysis of repetition and persuasion effects [4] [5] [6].

1. How fact-checkers turn speech into data — the counting game

Fact-checking organizations convert public statements into measurable items by selecting claims that matter and are verifiable, then researching primary sources and expert literature before assigning a rating or verdict; PolitiFact’s Truth‑O‑Meter, for example, follows explicit principles and methodology to rate claims [4] [1]. Outlets such as FactCheck.org and Reuters maintain searchable collections of judged claims so journalists and researchers can count falsehoods and compare across time or actors [2] [3]. These counts are necessarily bounded by editorial selection: not every utterance is checked, and organizations prioritize claims deemed consequential, verifiable and novel [4].
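
To make the "claims become data" step concrete, here is a minimal sketch of how an archive entry could be represented and tallied; the field and function names are hypothetical illustrations, not the actual schema used by PolitiFact, FactCheck.org or Reuters:

```python
# Hypothetical sketch of a fact-check archive entry, not any outlet's real schema.
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class CheckedClaim:
    speaker: str       # who made the claim
    statement: str     # the claim as quoted in the fact-check
    checked_on: date   # when the verdict was published
    verdict: str       # e.g. "true", "mostly-false", "pants-on-fire"
    source_url: str    # link to the published fact-check

def tally_verdicts(claims: list[CheckedClaim], speaker: str) -> Counter:
    """Count verdicts for one speaker across an archive of fact-checks."""
    return Counter(c.verdict for c in claims if c.speaker == speaker)
```

Any tally produced this way inherits the editorial selection described above: it counts only the claims an outlet chose to check, not everything the speaker said.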

2. Categories, scales and editorial judgments

Fact-checkers do more than issue a binary true/false verdict: they use graded scales (e.g., “True” to “Pants on Fire”) and categories (false, misleading, unsupported, accurate) to capture nuance and degree of deception; PolitiFact’s published principles explain how those judgments are made and why transparency matters [4] [1]. The editorial choices about which claims to weigh and how to summarize context are consequential; methodology pages and defenses of the practice acknowledge that fact-checking is an imperfect but systematic search for truth [4] [7].

3. Historians: longitudinal narratives and context, not just tallies

Historians use fact-checkers’ databases as primary source material but overlay broader context: rhetorical strategies, institutional incentives and long-term patterns. Scholarly commentary and encyclopedic treatments, for instance, document patterns such as repetition and “big lie” or “firehose of falsehood” tactics and link those rhetorical strategies to political outcomes; the Wikipedia account and its academic citations describe how repetition can increase misperceptions and how strategists have advocated mass, rapid messaging to overwhelm counterspeech [6]. Historians therefore treat counts as input, not final judgment: they ask why lying occurred, who enabled it, and what structural effects followed [6] [7].

4. Measurement challenges and critiques

Methodological limits are central: fact-checkers acknowledge selection bias (focusing on high-profile claims), measurement decisions (how to code partial truths) and the difficulty of judging intent versus error [4] [7]. Critics argue fact-checkers can be unevenly applied or perceived as partisan; defenders counter that systematic transparency and reproducible methods reduce bias and that higher rates of checking on one side often reflect differences in factual accuracy, not inherent bias [7] [8]. The debate over whether platforms or fact-checkers disproportionately flag right‑leaning misinformation is reported and discussed in the literature [7] [8].

5. New tools and triangulation: AI and inter‑rater reliability

Researchers and outlets are experimenting with automated or semi-automated checks and inter‑rater reliability tests. A Yale project asked five different AI models to verify frequent presidential claims as a test of methodological robustness—an example of triangulating human fact-checking with algorithmic support to spot systematic errors [5]. Such experiments aim to identify where human and machine checks converge or diverge, but available reporting shows these are still supplementary, not replacements for rigorous human sourcing [5].
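
One standard way to quantify agreement between raters, whether two human checkers or a human checker and a model, is Cohen's kappa. The sketch below is a generic illustration of that statistic under those assumptions, not the Yale project's actual protocol [5]:

```python
# Generic inter-rater reliability sketch (Cohen's kappa); labels are illustrative.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Chance-corrected agreement between two lists of verdict labels."""
    n = len(rater_a)
    assert n == len(rater_b) and n > 0
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[lbl] / n) * (freq_b[lbl] / n)
                   for lbl in set(freq_a) | set(freq_b))
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Toy usage: a human checker and an AI verifier label the same five claims.
human = ["false", "false", "true", "misleading", "false"]
model = ["false", "true",  "true", "misleading", "false"]
print(round(cohens_kappa(human, model), 2))  # 0.69
```

A kappa near 1 indicates the two raters converge; values near 0 indicate agreement no better than chance, which would flag claims needing closer human review.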

6. What counts as “lying” — intent, repetition and political effects

Counting false statements is comparatively straightforward; attributing intentional deception is harder. Fact-checkers document factual inaccuracy and patterns of repetition; scholars then link repetition to persuasive effects such as the illusory truth effect, whereby repeated falsehoods become more believable, as summarized in accounts of presidential misinformation and its political impact [6]. Measurement therefore often separates the factual verdict (was the claim true?) from interpretive claims about motive and effects (why was it made, and with what consequences?) [6] [7].

7. Practical uses and limits of the measures

Researchers, journalists and the public use these measures to compare leaders (for example, counting falsehoods across presidencies), to hold officials accountable, and to study downstream effects on public belief [6] [1]. But any quantitative comparison must reckon with differences in fact‑checking thresholds, media environments and the volume of public statements; historic comparisons rely on archived fact-checks and careful methodological framing to avoid misleading conclusions [1] [4].
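
As a toy illustration of that denominator problem (all numbers below are placeholders, not real tallies), normalizing counts by the volume of checked claims can reverse a raw-count comparison:

```python
# Hypothetical figures only: why raw falsehood counts need a denominator.
def falsehood_rate(false_claims: int, claims_checked: int) -> float:
    """Share of checked claims rated false; sensitive to editorial selection."""
    return false_claims / claims_checked if claims_checked else 0.0

# A leader with more falsehoods in absolute terms can still have a lower rate
# once the volume of checked claims is taken into account.
print(falsehood_rate(false_claims=120, claims_checked=600))  # 0.2
print(falsehood_rate(false_claims=90, claims_checked=200))   # 0.45
```

Even this simple rate depends on how many claims each outlet chose to check, which is why archived fact-checks are paired with careful methodological framing before comparisons are drawn [1] [4].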

Closing note: Available sources provide detailed descriptions of fact‑checking methods, databases and scholarly interpretations, but do not supply a single unified metric historians use; instead, practitioners combine fact‑check counts, methodological transparency, rhetorical analysis and new tools like AI to build a composite picture [4] [5] [6].

Want to dive deeper?
What methodologies do historians use to quantify false statements by presidents over time?
Which organizations maintain databases tracking presidential lies and how do they validate entries?
How do scholars distinguish between deliberate misinformation, honest mistakes, and political spin in presidential speech?
What sources and archival records are most reliable for verifying historical presidential claims?
How has the prevalence and detection of presidential misinformation changed with digital media and fact-checking since 2000?