How do accuracy ratings vary between mainstream outlets (NYT, WaPo, WSJ) and broadcasters (CNN, Fox, BBC)?

Checked on January 16, 2026

Executive summary

The empirical record assembled here contains more data on public trust and perceived bias than on head‑to‑head “accuracy ratings” for specific outlets: analysts and surveys find that broadcasters such as the BBC, ABC and CBS often enjoy higher net trust than polarizing cable channels, while trust in and usage of CNN and Fox News are highly partisan [1] [2]. Independent fact‑checking projects and network scorecards track the accuracy of pundits and on‑air claims on television, but the sources provided do not supply a definitive, comparable accuracy table for The New York Times, The Washington Post and The Wall Street Journal versus CNN, Fox News and the BBC [3].

1. What measurement tools actually exist for “accuracy” and what they cover

Fact‑checking initiatives and network scorecards, with PolitiFact’s PunditFact as one example, measure the accuracy of on‑air claims and pundit statements, producing network‑level tallies that say more about television performance than about print outlets’ story‑by‑story accuracy [3]. Surveys such as AllSides’ blind bias tests and public trust rankings capture perceived bias and trustworthiness, not granular factual error rates; they indicate reputation and perceived reliability rather than audited mistake counts [4]. Industry roundups and rankings (e.g., VisualCapitalist, Top10) compile surveys and reputational claims about trust and verification practices, but they are no substitute for systematic, comparable accuracy scoring across specific newspapers and broadcasters [1] [5].

2. What the data say about broadcasters vs. newspapers on trust and reputation

Broadcasters like the BBC, ABC and CBS frequently rank higher on net trust metrics in the surveys cited here, while CNN and Fox News show more polarized trust profiles: VisualCapitalist reports that ABC, the BBC and CBS are trusted more than distrusted by about half of respondents, while Fox News is uniquely polarizing, with equal shares trusting and distrusting it [1]. YouGov polling shows that use and trust are strongly segmented by party: Fox News usage and trust are concentrated among Republicans, while Democrats report higher usage of and trust in CNN and public broadcasters, pointing to audience composition as a major driver of perceived accuracy [2].
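
To make the net trust figures concrete: net trust is conventionally computed as the share of respondents who trust an outlet minus the share who distrust it, i.e., net trust = %trust − %distrust (question wording varies by pollster, so treat this as an illustrative definition rather than these surveys’ exact method). Under that definition, a hypothetical outlet trusted by 50% of respondents and distrusted by 25% scores +25 points, while one with equal trusting and distrusting shares, the polarized Fox News pattern described above, scores exactly 0.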

3. How mainstream print outlets fit into this picture

Reputational analyses and “best of” lists often describe The New York Times, The Washington Post and The Wall Street Journal as investing heavily in verification and editorial process, but the provided sources frame that as reputation rather than quantified error‑rate comparison; Top10’s industry commentary emphasizes these outlets’ commitments to journalistic integrity without supplying direct accuracy ratings [5]. VisualCapitalist and YouGov situate the newspapers by audience and trust, but again they report trust and readership rather than the audited accuracy metrics that would allow a strict apples‑to‑apples accuracy ranking against broadcasters [1] [2].

4. Partisanship, perception and the “accuracy” gap

Perceived accuracy often follows partisan lines: blind‑survey methods such as AllSides’ show that bias ratings change when brand signals are removed, implying that partisanship and brand identity strongly shape judgments about accuracy and fairness [4]. YouGov’s 2025 trust data illustrate this segmentation starkly: net trust in Fox News and CNN differs by large margins across party groups, meaning that what some audiences call “accurate” others call “biased” [2].

5. Methodological limits, incentives and hidden agendas

The available sources make clear that different projects answer different questions: fact‑check scorecards target claims, blind surveys test perceived bias, and trust polls measure audience confidence, so any claim that one outlet is “more accurate” requires a specified metric and method [3] [4] [1]. Hidden incentives matter too: broadcasters with heavy opinion programming concentrate pundit errors that network scorecards capture, while legacy newspapers’ factual corrections are far less visible in the comparative datasets provided here, creating asymmetric evidence [3] [5].

6. Bottom line: what can responsibly be concluded

Based on the cited reporting, broadcasters such as the BBC and public networks tend to score higher on net trust measures while CNN and Fox News are more polarized; fact‑checking projects document systematic differences in on‑air claim accuracy for television punditry, but no single dataset cited here directly ranks the NYT, WaPo and WSJ against CNN, Fox News and the BBC on a unified accuracy scale [1] [2] [3] [4] [5]. Any rigorous claim about “how accuracy ratings vary” therefore requires an explicit choice of metric (pundit claim accuracy, published corrections per article, or blind‑reader assessments), and the sources provided principally supply trust and bias measures rather than a universal accuracy scoreboard [3] [4].
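
To see why the choice of metric matters, consider a purely hypothetical worked comparison (the numbers are illustrative, not drawn from any cited source): a newspaper that publishes 100 corrections across 20,000 stories has a correction rate of 0.5%, while a network whose pundits have 40 of 100 fact‑checked claims rated false scores 60% on claim accuracy. The two figures measure different behaviors, post‑publication correction practice versus on‑air claim quality, and cannot be placed on a single accuracy scale.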

Want to dive deeper?
How do fact‑checking organizations compare the error rates of televised pundits versus print reporters?
What methodologies do blind bias surveys like AllSides use and how do results change when brands are revealed?
Are there comprehensive datasets that track corrections and retractions across NYT, WaPo, WSJ, CNN, Fox and BBC over the last decade?