
Fact check: Why can't fact checkers give genuine facts?

Checked on October 5, 2025

Executive Summary

Fact-checkers face three interconnected pressures that explain why critics say they "can't give genuine facts": political bias and polarization, the rise of AI-generated falsehoods, and the evolving view among experts that traditional fact-checking alone is insufficient to stem disinformation. Recent studies and reporting from September 2025 show these pressures are empirical and compounding, not merely rhetorical [1] [2] [3].

1. Headlines Summarized: What critics actually claim and why it matters

Critics assert that fact-checkers fail to deliver “genuine facts” because their outputs appear selective, inconsistent, or influenced by politics; this claim surfaces in reporting that accuses fact-checkers of uneven scrutiny across political figures and topics. The accusation combines three discrete assertions: that fact-checkers are politically biased, that they miss or misidentify problems because of methodological limits, and that the information environment is contaminated by AI-produced fabrications that are difficult to detect. The sources reviewed here reflect these themes across opinion pieces and research reports from September 2025 [4] [2].

2. Hard research: LLM bias can skew fact-check outcomes

A dedicated study, PolBiX, documents how large language models can exhibit political bias that compromises objective assessment in fact-checking tasks, particularly when judgmental language appears in prompts or content. This research indicates that automated tools, and human reviewers working with LLM assistance, risk a systematic tilt in verdicts driven by model-internal patterns rather than human prejudice alone; some failures attributed to fact-checkers may therefore stem from the tooling they rely on [1]. The finding complicates the narrative that missed facts are solely human error.
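To make the mechanism concrete, here is a minimal sketch of the kind of framing probe such studies run: the same claim is submitted twice, once neutrally worded and once with judgmental wording, and the verdicts are compared. The `ask_llm` function is a hypothetical placeholder for whatever model endpoint a real tool calls, and the claim text is invented for illustration.

```python
# Minimal framing probe: does judgmental wording alone change the verdict?
# `ask_llm` is a hypothetical placeholder; swap in a real model client.

def ask_llm(prompt: str) -> str:
    """Placeholder LLM call; returns a canned verdict so the sketch runs."""
    return "TRUE"

CLAIM = "The unemployment rate fell by 0.4 points last quarter."

NEUTRAL = f'Is this claim true or false? Answer TRUE or FALSE.\n"{CLAIM}"'
JUDGMENTAL = (
    "Is this claim by a notoriously dishonest politician true or false? "
    f'Answer TRUE or FALSE.\n"{CLAIM}"'
)

neutral_verdict = ask_llm(NEUTRAL).strip().upper()
loaded_verdict = ask_llm(JUDGMENTAL).strip().upper()

if neutral_verdict != loaded_verdict:
    # The claim never changed; only the framing did, so any flip here
    # reflects model-internal bias rather than evidence about the claim.
    print(f"Framing effect: {neutral_verdict} vs {loaded_verdict}")
else:
    print(f"Verdict stable under reframing: {neutral_verdict}")
```

Repeated over many claims and framings, a consistent pattern of verdict flips in one political direction is the kind of effect PolBiX-style evaluations are designed to measure.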

3. Real-world evidence: Fake sources undermine trust in checks

Investigative reporting found a government education plan that cited nonexistent sources, likely fabricated by an AI, a discovery that materially undermined confidence in the document and the institutions that produced it. Such cases show that fabricated citations and plausible-sounding but false claims are now prevalent enough to derail standard verification workflows, making the fact-checker’s job harder and slower. The incident in Newfoundland and Labrador from September 2025 is a concrete example of how AI-generated falsehoods create verification burdens for fact-checkers [2] [5].
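A cheap first line of defense against fabricated references is checking that cited sources exist at all. Below is a minimal sketch of such a check, assuming citations arrive as URLs; the addresses are placeholders, and a link that resolves still says nothing about whether the source actually supports the claim.

```python
# Minimal citation-existence check: fetch each cited URL and flag any
# that do not resolve for human review. URLs below are placeholders.

from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

CITED_URLS = [
    "https://example.com/real-report.pdf",
    "https://example.com/nonexistent-study",
]

def citation_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error status."""
    req = Request(url, method="HEAD", headers={"User-Agent": "cite-check/0.1"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (HTTPError, URLError, TimeoutError):
        return False

for url in CITED_URLS:
    flag = "ok" if citation_resolves(url) else "FLAG: does not resolve"
    print(f"{url} -> {flag}")
```

A production checker would fall back to GET where servers reject HEAD, and would treat resolution as necessary but never sufficient evidence of a genuine source.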

4. The admitted limits of AI: hallucinations are not merely bugs

Researchers at OpenAI acknowledged that large language models will always produce some hallucinations due to fundamental mathematical constraints rather than fixable engineering flaws, implying persistent, unavoidable false outputs from AI assistants. This admission reframes expectations for any fact-checking system that uses LLMs: occasional or systematic generation of plausible falsehoods is an enduring risk requiring governance and new verification layers, not merely better training data or prompts [6]. That reality feeds skepticism about "genuine facts."
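For readers who want the shape of the argument, the schematic inequality below is a compressed reading of how such impossibility results are commonly summarized, not the paper's exact statement; constants and side conditions are elided.

```latex
% Schematic form of the reduction argument (a compressed reading, not the
% paper's exact statement): producing a valid answer is at least as hard
% as recognizing one, so generative error inherits a classification floor.
\[
  \mathrm{err}_{\mathrm{gen}} \;\gtrsim\; 2\,\mathrm{err}_{\mathrm{IIV}}
\]
% Here err_gen is the rate of invalid generations and err_IIV the error of
% an "is-it-valid" binary classifier built from the same model. Because no
% classifier achieves zero error on facts statistically indistinguishable
% from plausible fabrications (e.g., facts seen only once in training),
% err_gen cannot be driven to zero either.
```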

5. Experts warn fact-checking alone is no longer enough

At a global summit on disinformation in late September 2025, experts argued that reactive fact-checking must evolve into proactive information integrity strategies; the panel concluded that verification after falsehoods spread often arrives too late to reverse harm. This expert perspective frames the complaint — that fact-checkers “can’t” provide genuine facts — as partly a misdiagnosis: fact-checkers can verify facts, but the ecosystem requires prevention, rapid detection, and broader institutional changes alongside verification to preserve trustworthy facts [3].

6. Media controversies show perceived selective enforcement

Opinion pieces and critiques allege that some fact-checking outlets disproportionately target certain politicians, depicting selective scrutiny that fuels perceptions of inauthenticity. Coverage claiming fact-checkers "pounce" on specific figures while overlooking others taps into partisan narratives about media bias and can erode public trust even when individual checks are methodologically sound. These critiques amplify distrust and complicate fact-checkers’ ability to be seen as neutral even when their processes are transparent [4].

7. Proposed mitigations and their trade-offs

Solutions discussed in the sources include diversifying news perspectives, improving governance for AI use, and embedding verification earlier in publication pipelines, but each carries trade-offs: aggregators that promise neutrality may still fail to overcome audience filter bubbles, and critics warn that overly aggressive engineering fixes for hallucinations could disable beneficial AI functions, a tension summed up in commentary on proposed fixes for model hallucinations [7] [8]. These trade-offs explain why no single fix has resolved the problem.

8. Bottom line: Why the complaint is partly right — and partly misplaced

The claim that fact-checkers “can’t give genuine facts” reflects real, documented constraints: tooling bias, AI-generated fabrications, and ecosystem timing all impede clean, universally accepted verification [1] [2] [6]. However, the charge conflates systemic limits with individual malpractice; experts argue the remedy lies in multi-layered reforms (better AI governance, diversified sourcing, and proactive integrity measures) rather than blaming fact-checkers alone. The balance of evidence from September 2025 points to a complex problem requiring coordinated fixes, not a simple indictment [3] [1] [5].

Want to dive deeper?
What are the most common biases in fact checking organizations?
How do fact checkers verify information from anonymous sources?
Can fact checkers be influenced by political or corporate interests?
What role do algorithms play in fact checking on social media platforms?
How can readers evaluate the credibility of fact checking websites?