How do media-rating organizations (MBFC, AllSides, Ad Fontes) differ in methodology when assessing news outlets?
Executive summary
Three widely cited media-rating projects (Ad Fontes Media, AllSides, and Media Bias/Fact Check, or MBFC) share the goal of helping news consumers navigate a partisan landscape but diverge sharply in what they measure and how. Ad Fontes uses a two-axis content-analysis model that scores both political bias and reliability through multi-analyst panels [1] [2]. AllSides foregrounds left/center/right placement, generally without a formal accuracy score, drawing on a mix of crowdsourced, staff, and third-party inputs and stating a confidence level for each rating [3] [4]. MBFC produces separate bias and factual-accuracy assessments using human evaluators who apply both objective measures and subjective judgment, per its methodology [5].
1. What “bias” and “reliability” mean to each project
Ad Fontes explicitly treats political bias and reliability as two distinct axes, placing outlets on a left-to-right horizontal scale and on a vertical reliability scale; it argues that bias and factual rigor are separable qualities and scores outlets across multiple categories on each axis [1] [2]. AllSides focuses primarily on political lean (left/center/right) in an interactive chart. According to university guides, it does not build a formal accuracy or reliability metric into the chart itself; instead, AllSides documents the method used for each source and reports a confidence level that depends on how the assessment was derived [3] [4]. MBFC states that it assesses both bias and factual accuracy, using human evaluators to weigh objective and subjective measures, and presents both a bias rating and a “factual reporting” level in its site structure [5].
2. How they generate ratings: panels, staff review, and algorithmic claims
Ad Fontes relies on trained, politically diverse analyst panels and multi-analyst content analysis to reduce individual rater bias, a process the organization describes as having evolved from a single-analyst origin into its current multi-analyst system [1] [2]. AllSides combines several methods that it openly grades by rigor, ranging from crowdsourced input and independent research to editorial reviews and third-party analyses, with staff sometimes making preliminary assessments and additional sources informing ratings; AllSides says it indicates which method was used for each outlet and how confident it is in the resulting rating [4]. MBFC depends on human evaluators who apply a mix of “objective measures and subjective analysis” to determine bias and factual-reporting levels, per academic library summaries of its approach [5].
3. Outputs and how users see them
Ad Fontes publishes a two-dimensional Media Bias Chart intended as a data-driven content-analysis product; it serves both public consumption and advertisers' brand-safety assessments [1] [2]. AllSides provides an interactive bias chart emphasizing perspective diversity, with methodological notes and a confidence level per source, but the chart itself does not rate accuracy; AllSides frames its work as identifying viewpoints so consumers can “think for yourself” [3] [5]. MBFC presents bias and factual-accuracy labels and is described by academic guides as a resource that flags credibility and factual-reporting levels across many outlets, large and small [5].
4. Transparency, funding, and possible agendas
AllSides publishes the method used for each outlet and discloses revenue streams (memberships, donations, training, and advertising) while signaling a plan to operate as a public benefit corporation, a structure it describes as committing to a public mission alongside revenue generation [4]. Ad Fontes emphasizes methodological transparency and the deliberate recruitment of a politically diverse analyst base while also courting advertisers seeking third-party rankings for brand safety; its founder and approach have drawn scholarly scrutiny and public debate over the chart's reach and limits [1] [2]. MBFC's model of human evaluators applying combined objective and subjective criteria is documented in library guides, but those summaries also suggest users should judge for themselves whether the assessments are rigorous and current [5] [3].
5. Criticisms, limits, and practical takeaways
Ad Fontes' two-axis model is respected for its transparency and multi-analyst ratings but has been characterized by some scholars and librarians as useful yet not definitive: “a simple guide” whose credibility and scope remain debated [2]. AllSides is praised for clear bias mapping and method disclosure, but university guides note explicitly that its main chart does not measure accuracy and that some ratings rest on lower-rigor “independent research” assessments [3] [4]. MBFC describes its blend of objective and subjective evaluation, but, as with the others, users are cautioned to account for a changing media landscape and to review the underlying methodology before treating any single rating as authoritative [5] [3]. Together, the three tools are complementary rather than interchangeable: Ad Fontes for dual-axis content analysis, AllSides for perspective mapping and media literacy, and MBFC for credibility and factual-accuracy labels. Each is useful when its scope and limits are understood [1] [4] [5].