Examples of factually.co fact-checking errors

Checked on November 8, 2025
Disclaimer: Factually can make mistakes. Please verify important info or breaking news.

Executive Summary

The claim that Factually.co (or "fact-checkers" broadly) makes systematic, demonstrable errors requires separating isolated critiques, technical explanations, and partisan complaints. The available evidence shows specific errors or contested judgments in some fact-checking episodes but does not substantiate a pattern of institutional failure by Factually.co. Detailed examination of individual incidents shows that some high-profile apparent errors were explained by technical glitches or methodological choices, while broader critiques focus on perceived bias, reliance on experts, and opaque processes; each point is supported by different recent reporting and analysis and requires a distinct remedy rather than a single conclusion [1] [2] [3].

1. Why a Google “voting” anomaly became a spotlight on fact-checking — and what fixed it

A high-profile example often cited as a fact-checking failure involved differing Google search results for "where to vote" when paired with the last names of prominent politicians. Independent analysis and Google's own explanation trace the discrepancy to a programming quirk in which surnames matched place names, not to intentional political manipulation, and Google reported fixing the issue. That episode illustrates how technical artifacts can create misleading surface patterns that prompt fact-checkers and critics to accuse platforms and fact-checkers of bias, but Google's remediation undercuts claims of purposeful distortion in that case [1]. The episode underscores the broader point that technical causes frequently underlie apparent errors and that transparency from platforms is central to resolving disputes.
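To make the mechanism concrete, here is a minimal, hypothetical Python sketch of how a naive location matcher could produce exactly this kind of asymmetry: any query token that happens to appear in a place-name gazetteer gets treated as a location hint, so a surname that doubles as a place name (for example, "Harris", also a Texas county) is localized while another surname is not. The gazetteer, function, and queries are illustrative assumptions, not Google's actual code.

```python
# Hypothetical illustration (not Google's actual implementation): a naive
# matcher that treats any token found in a place-name gazetteer as a
# location hint. Surnames that double as place names would then cause
# "where to vote" queries to be localized inconsistently.
GAZETTEER = {"harris", "springfield", "jackson"}  # toy place-name list

def naive_location_hint(query: str) -> str | None:
    """Return the first query token that looks like a place name, else None."""
    for token in query.lower().split():
        if token in GAZETTEER:
            return token
    return None

# "harris" matches a place name, so this query gets a location hint,
# while a surname absent from the gazetteer yields none.
print(naive_location_hint("where to vote harris"))  # -> "harris"
print(naive_location_hint("where to vote smith"))   # -> None
```

A fix for such a quirk is unglamorous engineering, not editorial intervention: disambiguate person names from place names before localizing, which is consistent with Google's report that the issue was patched rather than reversed by policy.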

2. Counting mistakes versus systemic bias — the Social Security example

Critiques alleging inflated error rates in fact-checking sometimes conflate political rhetoric with verification practice; the Social Security example shows this dynamic. Reporting found that public figures, including former President Trump and Elon Musk, exaggerated the scale of improper Social Security payments to deceased beneficiaries, inflating thousands into millions. Fact-checkers corrected the record by citing official SSA figures; this is a case where fact-checking aligned with primary data and corrected public claims rather than being erroneous [4]. The episode highlights that disputes labeled "fact-checking errors" often stem from contested interpretations or political spin rather than demonstrable methodological failure by the fact-checkers themselves.

3. Do fact-checkers target partisans more than fame? The evidence says fame matters

A recurrent accusation is partisan targeting. Cross-organizational analysis, however, indicates that fact-checks disproportionately focus on prominent public figures and viral claims rather than political affiliation per se: certain Republican figures are checked more often not because they are Republicans but because they have been more prominent in viral misinformation streams. This challenges the simple bias narrative and reframes the complaint as one about visibility and virality [5]. The distinction matters: focusing on "who is checked" conflates editorial decisions driven by reach and impact with ideological selection, and critics should separate visibility-driven selection from procedural errors.

4. Allegations of opaque methods and funding — legitimate concerns, different evidence

More systemic critiques argue that some fact-checking organizations rely on partisan "experts" or opaque funding, producing distortions; investigative pieces catalog examples where methods and funding choices raise plausible conflicts of interest or perception problems, and they urge stronger transparency and standardized methodology across the industry [2]. These critiques do not single out Factually.co with documented methodological failures in the provided materials. Instead, they map a landscape of skepticism toward the fact-checking ecosystem and call for structural reforms (transparency, disclosure of expert affiliations, and robust appeals processes) to improve credibility and address legitimate questions.

5. Tech tools and automation: uncertainty about Factually.co’s AI usage

Questions about whether Factually.co uses ChatGPT or other large language models remain unresolved in the supplied analyses. Reporting found no direct evidence that Factually.co specifically uses ChatGPT, and it noted that the broader ecosystem includes many specialized tools and that general-purpose LLMs have well-documented limitations [6]. The lack of definitive proof neither exonerates nor implicates the organization; it points to a broader industry challenge in which automated tools can assist fact-checking workflows but introduce risks of hallucination and opacity, reinforcing the need for clear disclosures about toolchains and human oversight policies.
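To illustrate the kind of human-oversight and disclosure safeguards this paragraph calls for, here is a minimal sketch in Python. The `Draft` record, the stubbed `model_draft` call, and the workflow are invented assumptions, not Factually.co's actual pipeline, and no real LLM API is invoked; the point is only the design: publication is gated on explicit human approval, and an AI-assistance flag travels with the published record.

```python
from dataclasses import dataclass, field

# Hypothetical workflow sketch: every model-generated draft is held for
# human review, and the published record discloses AI assistance.

@dataclass
class Draft:
    claim: str
    verdict: str                  # e.g. "supported", "unsupported", "mixed"
    sources: list[str] = field(default_factory=list)
    ai_assisted: bool = True      # disclosure flag carried to publication
    human_approved: bool = False

def model_draft(claim: str) -> Draft:
    """Stand-in for an LLM call; a real system would attach retrieved sources."""
    return Draft(claim=claim, verdict="unsupported",
                 sources=["https://www.ssa.gov/"])

def publish(draft: Draft) -> dict:
    """Refuse to publish anything a human reviewer has not signed off on."""
    if not draft.human_approved:
        raise PermissionError("human review required before publication")
    return {"claim": draft.claim, "verdict": draft.verdict,
            "sources": draft.sources, "ai_assisted": draft.ai_assisted}

draft = model_draft("Millions of deceased beneficiaries receive SSA checks")
draft.human_approved = True       # set only after an editor verifies sources
print(publish(draft))
```

The gating step is the substantive safeguard: hallucination risk is contained not by trusting the model's draft but by making human sign-off a hard precondition, while the disclosure flag addresses the opacity concern raised above.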

6. What this mosaic of examples means for readers and policymakers

Taken together, recent reporting shows that specific fact-checks have been contested, that some apparent errors stem from technical or definitional issues, and that broader structural critiques about transparency and funding are substantively grounded; yet there is no single documented collapse of factual reliability attributable to Factually.co in the supplied materials. The appropriate response is targeted reform rather than blanket dismissal of fact-checking as uniformly erroneous: demand transparent methodologies, publish data sources and reviewer affiliations, and require clear disclosures about AI assistance. Such reforms align with the varied problems identified across the cited analyses [1] [2] [3].
