How does Media Bias/Fact Check evaluate fact‑checking sites and what are common criticisms of its ratings?

Checked on January 14, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Media Bias/Fact Check (MBFC) evaluates fact‑checking sites with the same multi‑dimensional methodology it applies to news outlets, scoring "factual reporting" and "political bias" across defined categories such as wording/headlines, sourcing, story selection, and political affiliation, and requiring a minimum sample of content (10 headlines and 5 full stories) before any evaluation [1] [2]. Its ratings are widely used and, in some academic comparisons, correlate strongly with other evaluators, but scholars, librarians, and critics have raised methodological and transparency concerns about subjectivity, sample limits, and label granularity [3] [4] [5].

1. How MBFC measures fact‑checkers: categories and thresholds

MBFC places fact‑checking organizations on two axes: factual reporting, rated on a seven‑point scale from "Very High" to "Very Low," and political bias. It scores four main categories (use of wording and headlines, fact‑checking and sourcing, choice of stories, and political affiliation) plus subcategories such as bias by omission and loaded language. The methodology requires reviewing a minimum dataset of 10 headlines and 5 full stories per source, and only failed fact checks or confirmed misinformation from the past five years count against a source [2] [3].
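As a purely illustrative sketch (MBFC does not publish a numeric formula, so the 0–10 category scale, the penalty weight, and the function below are assumptions), this shows how a multi‑category rubric with a minimum sample requirement and a five‑year window for failed fact checks might be combined into a single factual‑reporting score:

```python
# Hypothetical illustration only: MBFC does not publish a numeric formula.
# The 0-10 category scale and the per-failure penalty are invented to show
# how minimum samples and a five-year lookback could enter a rating.
from datetime import date

MIN_HEADLINES, MIN_STORIES = 10, 5      # minimum sample described by MBFC
LOOKBACK_DAYS = 5 * 365                 # only recent failed checks count

def factual_reporting_score(category_scores, headlines, stories,
                            failed_check_dates, today):
    """category_scores: 0-10 scores for wording/headlines, sourcing,
    story selection, and political affiliation (scale is an assumption)."""
    if len(headlines) < MIN_HEADLINES or len(stories) < MIN_STORIES:
        return None                     # not enough reviewed content to rate
    recent_failures = sum(1 for d in failed_check_dates
                          if (today - d).days <= LOOKBACK_DAYS)
    base = sum(category_scores.values()) / len(category_scores)
    return max(0.0, base - 2.0 * recent_failures)  # penalty weight is invented

# Example: a well-sourced outlet with one failed fact check inside the window.
print(factual_reporting_score(
    {"wording": 9, "sourcing": 8, "story_selection": 7, "affiliation": 8},
    headlines=["h"] * 12, stories=["s"] * 6,
    failed_check_dates=[date(2024, 6, 1)], today=date(2026, 1, 14),
))  # -> 6.0 under these invented weights
```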

2. What MBFC says about impartiality and its internal process

MBFC presents itself as independent and "100% human‑driven," emphasizing a structured, evidence‑based approach and claiming that it rates bias across the political spectrum. It cites founder Dave M. Van Zandt's applied scientific background and iterative methodology updates, including a reworked system introduced in 2025 aimed at systematic scoring of ideological lean and factual reliability [6] [1].

3. Evidence of external validation and where MBFC performs well

Academic and comparative work has found that MBFC's factualness ratings often align with other ground‑truth datasets and services: studies report high agreement with a 2017 independent fact‑checking dataset, strong correlation with NewsGuard (r ≈ 0.81), and concordance with journalists' assessments in some comparisons. MBFC's dataset is also noted for its breadth, covering thousands of sources, which can make it useful as a research tool or a starting point for lateral reading [3] [2] [7] [5].
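To illustrate how such agreement studies are typically run, the sketch below computes a Pearson correlation between two raters' scores on the sources they both cover; the source names and scores are invented, and the cited studies' actual datasets and procedures may differ:

```python
# Illustrative sketch of the kind of analysis behind figures like r ~= 0.81:
# correlate two raters' reliability scores on their shared sources.
# All data below are made up for demonstration.
from statistics import correlation  # Pearson's r, Python 3.10+

mbfc = {"siteA": 9, "siteB": 7, "siteC": 4, "siteD": 2, "siteE": 8}
newsguard = {"siteA": 95, "siteB": 70, "siteC": 45, "siteD": 30, "siteE": 82}

shared = sorted(mbfc.keys() & newsguard.keys())   # sources rated by both
r = correlation([mbfc[s] for s in shared], [newsguard[s] for s in shared])
print(f"Pearson r on {len(shared)} shared sources: {r:.2f}")
```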

4. Common methodological criticisms and their implications

Critics point to unavoidable subjectivity wherever MBFC relies on qualitative judgments (tone, headlines, story choice) and to potential sampling limitations: minimum‑review thresholds can miss systemic behaviors outside the sampled items, and restricting the penalization of misinformation to a five‑year window may undercount longer histories. Academic guides and library resources therefore warn that credibility scores, including MBFC's, should be used cautiously and alongside other evaluators, because bias judgments can vary by method and coder [3] [2] [5] [4].
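One way to see the sampling concern, independent of the cited critiques, is a back‑of‑envelope confidence interval: with only about 15 reviewed items, any estimated rate of problematic content carries a wide margin of error (the counts below are hypothetical):

```python
# Back-of-envelope illustration (not taken from the cited critiques): small
# minimum samples leave large uncertainty about a source's true behavior.
import math

n = 15               # e.g. 10 headlines + 5 full stories reviewed
p_hat = 2 / n        # suppose 2 of the 15 sampled items showed loaded language
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)  # rough 95% interval
print(f"estimated rate {p_hat:.2f} +/- {margin:.2f}")  # ~0.13 +/- 0.17
```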

5. Disputes over transparency, labels and perceived agendas

Some observers and institutional guides note that MBFC's mix of "objective measures and subjective analysis" and its many label categories give it granularity but also fuel disputes over consistency and transparency. MBFC insists it is non‑partisan and that its methodology targets both left‑ and right‑leaning outlets, while detractors argue that without fully public coding logs and clearer inter‑rater reliability statistics, contested ratings can appear politically motivated, especially for fact‑checking outlets that claim neutrality [1] [6] [3].

6. Practical takeaways for readers and researchers

MBFC is a prominent and practical starting point for assessing fact‑checking organizations because of its large dataset and visible methodology. Its ratings should nonetheless be triangulated with other resources (NewsGuard, AllSides, academic studies, library guides) and with direct lateral reading of a fact‑checker's own methods and recent output, given the documented tension between useful broad coverage and the limits of qualitative scoring systems [7] [8] [5].

Want to dive deeper?
How do NewsGuard and AllSides differ from Media Bias/Fact Check in rating fact‑checking organizations?
What academic studies have tested the inter‑rater reliability of media bias rating sites like MBFC and NewsGuard?
How should librarians and educators teach students to use media credibility tools like MBFC responsibly?