What are the most common criticisms of factually.co and other fact-checking websites?
Executive Summary
Fact‑checking sites like factually.co face repeated criticisms that cluster around perceived bias, technological limits, lack of transparency, and unequal global effectiveness. Recent analyses raise concerns that both human and AI‑driven fact‑checking can inherit political slants, struggle with non‑Western languages, and lose impact when social platforms withdraw support, while defenders emphasize methodological rigor and the need to adapt tools and outreach strategies [1] [2] [3]. This review synthesizes the main claims, the evidence behind them, and competing interpretations drawn from the sampled sources between September and November 2025.
1. Why Critics Say Fact‑Checkers Look Biased — Tone and Algorithmic Framing
A prominent criticism holds that fact‑checking can reflect political bias, not only through editorial choices but also via automated systems that treat euphemistic or pejorative phrasing differently. An arXiv study found that large language models (LLMs) can judge factually equivalent statements inconsistently when "X‑phemisms" (euphemistic or dysphemistic rephrasings of the same claim) alter tone, suggesting that automated or semi‑automated verdicts risk amplifying partisan slants unless explicitly mitigated [1]. This finding fuels claims that platforms relying on LLMs or opaque algorithmic workflows might deliver verdicts influenced by framing, provoking distrust among audiences who perceive asymmetrical rulings.
2. Transparency Worries: Methodology, Ratings, and Opaque Weighting
Observers point to opaque methodologies as a core problem: rating systems and aggregated bias scores often rest on third‑party metrics and proprietary formulas that the public cannot easily scrutinize. A review of media‑rating practices highlights that outlets are often scored at the publication level rather than per article, and monitors remain U.S.‑centric, which can obscure nuance and evolution in editorial lines [4]. This lack of granular, open documentation provides fodder for critics who argue that fact‑checkers selectively apply standards or that systemic biases are baked into unseen processes.
3. The Global South Problem: Tools Built for the West Leave Gaps
Fact‑checkers operating in non‑Western contexts confront structural tech limitations because many AI tools and language resources prioritize major Western languages. Reporting from November 2025 documents that generative AI aids fact‑checking in well‑resourced settings but is far less useful in underrepresented languages, where datasets, models, and platform prioritization are limited [3]. The implication is that international audiences receive uneven verification support, reinforcing critiques that fact‑checking institutions reproduce linguistic and geographic inequalities.
4. Platform Support Is Fragile — Impact and Reach Decline When Tech Firms Pull Back
Another line of critique concerns effectiveness: fact‑checks depend heavily on distribution from big social platforms, and when those platforms withdraw or deprioritize support, fact‑checking’s reach and corrective power drop dramatically. Professional fact‑checkers have warned of a "systemic assault" on the ecosystem as platform cooperation shifts, reducing the practical ability to counter disinformation even when verification work is rigorous [2]. This fuels arguments that fact‑checking alone cannot solve misinformation without sustained platform policy and funding commitments.
5. Defenders Point to Standards, Collaboration, and Creative Outreach
Proponents counter that many fact‑checking organizations maintain high standards, transparency commitments, and cross‑organizational collaboration to counter biases and scale impact. The same reporting that documented platform retreat also emphasized that professional teams are experimenting with new partnerships and creative strategies to extend reach and maintain fairness, arguing that methodological transparency and public engagement remain core defenses against accusations of unfairness [2]. Supporters frame critiques as reasons to improve, not to abandon, verification work.
6. AI Trade‑offs: Efficiency vs. Error and the Risk of Amplifying Slant
The adoption of AI tools introduces a trade‑off between speed and potential error or bias: automated assistance can flag more content but may inherit datasets’ political slants or fail in underrepresented languages, producing false positives and inconsistent rulings [1] [3]. Critics stress that without rigorous bias audits, human oversight, and localized training data, AI‑assisted fact‑checking risks undermining credibility. Advocates argue AI remains valuable if paired with transparency about model limits and continuous validation, but both sides agree on the need for documented mitigation.
7. What’s Missing From Many Critiques — Funding, Audience Segmentation, and Evolution
A recurring omission in public debate is attention to resourcing and audience dynamics: critiques often target methodology while underemphasizing how funding cuts, audience fragmentation, and evolving misinformation tactics shape outcomes. The sampled reporting notes platform funding shifts and regional tool gaps, implying that perceived failures often reflect broader ecosystem constraints rather than solely institutional malpractice [2] [3]. Addressing criticisms therefore requires combined fixes — clearer methodologies, independent audits, better multilingual tooling, and stable distribution partnerships — not only rhetorical rebuttals.
Conclusion: The literature from September–November 2025 converges on a mixed diagnosis: criticisms about bias, opacity, and unequal global reach are supported by empirical and newsroom reporting, while defenders point to standards and necessary adaptation. Reconciling these views requires transparent methods, independent audits of models, improved multilingual resources, and sustained platform support to restore both credibility and effectiveness [1] [4] [2] [3].