What criteria does Factually use to evaluate news sources?
Executive Summary
Factually evaluates news sources using a mix of bias and factuality measures, drawing on public trust metrics, media‑watchdog ratings, and content analysis techniques; however, the exact published criteria vary across summaries and often mirror methodologies used by organizations like Media Bias/Fact Check and Ad Fontes Media [1] [2] [3]. Multiple analyses agree Factually emphasizes bias scoring and factual reporting—including failed fact checks, sourcing, transparency, and story selection—but descriptions differ on weights, specific subcriteria, and whether community blind surveys or trained reviewers are primary inputs [1] [4] [3]. The evidence shows a composite approach rather than a single simple checklist, and the descriptions reveal both convergence on core dimensions and divergence in operational detail and emphasis [5] [6].
1. What claimants say Factually measures — a concise inventory that reads like a checklist
Analyses assert Factually combines bias metrics and factual reporting metrics into its evaluations, listing subcriteria such as economic and social ideology scoring, straight‑news versus editorial balance, failed fact checks, sourcing quality, transparency, and one‑sidedness or omission; one explicit breakdown assigns weights (35/35/15/15 for bias components and 40/25/25/10 for factuality components) that place outlets on a left‑right spectrum and on a Very High to Very Low factuality scale [1]. Other summaries emphasize related indicators used across watchdogs—headline wording, choice of stories, author credentials, and presence of independent fact‑checking—suggesting Factually’s inputs mirror common industry standards [6] [7]. Multiple analyses therefore converge on a two‑part framework: political slant and information reliability as primary axes [4] [5].
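To make the arithmetic concrete, the sketch below combines subcriterion scores using the reported 35/35/15/15 and 40/25/25/10 splits. The mapping of weights to specific subcriteria, the 0-10 scale, and the label cut-offs are illustrative assumptions; none of the cited summaries publish that level of detail.

```python
# Minimal sketch of a weighted composite rubric like the one described above.
# Weights come from the cited breakdown [1]; which weight attaches to which
# subcriterion, the 0-10 scale, and the label bands are assumptions.

BIAS_WEIGHTS = {          # assumed mapping of the 35/35/15/15 split
    "economic_ideology": 0.35,
    "social_ideology": 0.35,
    "news_vs_editorial_balance": 0.15,
    "one_sidedness": 0.15,
}

FACTUALITY_WEIGHTS = {    # assumed mapping of the 40/25/25/10 split
    "failed_fact_checks": 0.40,
    "sourcing_quality": 0.25,
    "transparency": 0.25,
    "story_selection": 0.10,
}

def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of subcriterion scores (assumed 0-10)."""
    return sum(scores[k] * w for k, w in weights.items())

def factuality_label(score: float) -> str:
    """Hypothetical cut-offs mapping a 0-10 score to Very Low..Very High."""
    for cutoff, label in [(8, "Very High"), (6, "High"), (4, "Mixed"), (2, "Low")]:
        if score >= cutoff:
            return label
    return "Very Low"

# Illustrative (made-up) subcriterion scores for one outlet
bias = composite({"economic_ideology": 6.0, "social_ideology": 7.0,
                  "news_vs_editorial_balance": 4.0, "one_sidedness": 5.0},
                 BIAS_WEIGHTS)
fact = composite({"failed_fact_checks": 8.0, "sourcing_quality": 7.0,
                  "transparency": 9.0, "story_selection": 6.0},
                 FACTUALITY_WEIGHTS)
print(f"bias placement score: {bias:.1f}, factuality: {factuality_label(fact)}")
```

The point of the sketch is only that such a rubric is mechanically reproducible: given the same subcriterion scores and weights, anyone recovers the same placement, which is exactly the property the next section contrasts with reviewer-driven approaches.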
2. How Factually reportedly implements methodology — who rates what, and how
One account presents Factually’s methodology as systematic and weighted, implying numerical scoring and transparent combination rules to produce bias placement and factuality labels; this version frames the process as replicable and quantitative, with explicit percentage weights and scaled scores [1]. Alternative analyses highlight softer, qualitative techniques such as trained reviewers performing content analysis, blind surveys, community feedback, and audits to assess perceived slant and baseline reliability, indicating a mixed quantitative–qualitative workflow rather than purely algorithmic scoring [3] [8]. The differing descriptions reveal a tension: Factually is described both as following an MBFC‑style weighted rubric and as incorporating multi‑rater, human‑review processes common to Ad Fontes and other checkers [1] [2].
3. Where the sources align and where they diverge — agreement on substance, disagreement on detail
All analyses agree Factually evaluates bias and factual reliability and that it uses recognized indicators—trust surveys, fact‑check records, sourcing quality, and editorial choices—to form judgments; this consensus anchors the core claim [4] [5] [6]. Disagreement arises over exact weights, the primacy of different inputs, and whether external watchdog ratings (e.g., AllSides, MBFC, Ad Fontes) are incorporated or simply echoed; one source gives a precise weighting scheme [1] while others report more general composite approaches or note missing considerations such as ownership structures or error‑rate measurements [5] [8]. The variations matter: a rigid weighted rubric produces reproducible scores, while multi‑reviewer, survey‑driven systems prioritize contextual judgment and inter‑rater calibration [1] [3].
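Where reviewer judgment drives the ratings, reproducibility depends on how well raters agree, which is what calibration statistics measure. The sketch below computes Cohen's kappa for two hypothetical reviewers assigning categorical bias labels; the labels and data are invented for illustration, and nothing in the cited summaries says Factually uses this particular statistic.

```python
# Minimal sketch of an inter-rater agreement check (Cohen's kappa) for two
# reviewers assigning categorical bias labels. All data here is hypothetical.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected chance agreement from each rater's label frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical reviewers labeling the same ten outlets
a = ["left", "center", "center", "right", "left",
     "center", "right", "left", "center", "right"]
b = ["left", "center", "left", "right", "left",
     "center", "right", "center", "center", "right"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # ~0.70: substantial agreement
```

Published statistics of this kind are what would let readers judge whether a multi-reviewer system is calibrated; their absence is one of the omissions flagged below.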
4. Important omissions and potential agendas to consider when reading these summaries
The analyses commonly omit comprehensive discussion of source selection, update cadence, and transparency about corrections or appeals, leaving open whether Factually publishes raw data, inter‑rater reliability statistics, or revision histories [9] [8]. Several summaries reflect the methodological perspectives of specific watchdogs—Media Bias/Fact Check’s weighted rubric and Ad Fontes’ multi‑analyst content ratings—so readers should note possible agenda alignment when descriptions mirror those organizations’ frameworks rather than a neutral third‑party account [1] [2]. The absence of explicit mention of ownership, funding disclosures, or independent audit mechanisms is consequential because those factors affect trust and reproducibility even when content metrics are robust [5] [3].
5. Bottom line and practical takeaways for users evaluating Factually’s claims
Taken together, the evidence shows Factually uses a composite methodology focused on political bias and factual accuracy, informed by both numeric scoring approaches and human content analysis, but public descriptions vary in granularity and emphasis across summaries [1] [3]. Users should treat specific numeric weights or precise labels as conditional on which explanatory account they consult, and look for published methodology documents or data releases to confirm operational details; absent those, weigh Factually’s ratings alongside other independent evaluations—trust surveys, AllSides, MBFC, and Ad Fontes—to triangulate reliability [6] [2]. This composite strategy provides broader coverage but requires transparency to ensure reproducibility and to expose any organizational biases in the evaluation process [4] [5].
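As a final illustration, triangulation across evaluators can be as simple as mapping each organization's label onto a common scale and flagging large spreads. The scale, the numeric mapping, the threshold, and the example ratings below are all assumptions for demonstration, not published data from AllSides, MBFC, or Ad Fontes.

```python
# Minimal sketch of triangulating one outlet's bias rating across several
# independent evaluators. The ratings, scale, and threshold are illustrative.

SCALE = {"left": -2, "lean left": -1, "center": 0, "lean right": 1, "right": 2}

def triangulate(ratings: dict[str, str], spread_threshold: int = 2) -> str:
    values = [SCALE[label] for label in ratings.values()]
    spread = max(values) - min(values)
    mean = sum(values) / len(values)
    flag = ("evaluators diverge; read their methodologies"
            if spread >= spread_threshold else "broad agreement")
    return f"mean placement {mean:+.1f} on a -2..+2 scale ({flag})"

# Hypothetical ratings for a single outlet
print(triangulate({"AllSides": "lean left", "MBFC": "left", "Ad Fontes": "center"}))
```

When evaluators disagree by this much, the disagreement itself is informative: it usually signals differences in methodology or sample rather than an error by any single rater, which is why consulting the underlying methodology documents matters more than any one composite score.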