Fact check: Which peer-reviewed studies measure misinformation prevalence on Fox News compared to ABC, CBS, and NBC?
Executive Summary
Multiple peer-reviewed studies and recent research frameworks address media bias and divergence across U.S. television outlets, but few peer-reviewed articles directly quantify misinformation prevalence on Fox News versus ABC, CBS, and NBC. The best-established peer-reviewed empirical comparison spanning all four outlets dates to 2008 and measures selection bias in poll coverage (Groeling), while 2025 studies document increasing content and language divergence, especially for Fox, alongside new toolkits for large-scale annotation that could enable direct misinformation comparisons [1] [2] [3].
1. The oldest direct empirical test: what Groeling actually measured and did not
Groeling’s 2008 peer-reviewed analysis is the clearest existing cross-network empirical study that includes Fox News alongside ABC, CBS, and NBC; it examines coverage choices around presidential approval polls and tests for partisan selection in which polls the networks chose to cover. The study finds evidence of bias in news choices across all four outlets, with stronger measurable effects for some networks than for Fox. Groeling’s method therefore demonstrates systematic differences in selection and emphasis, but it does not directly quantify the frequency of factual errors or classify discrete items as misinformation [1]. That leaves a methodological gap: Groeling measures bias in coverage decisions, a valid proxy for partisan slant but not a comprehensive metric for the prevalence of verifiably false statements across networks.
2. Recent peer-reviewed work documenting divergence, not error rates
A May 2025 peer-reviewed paper in Scientific Reports analyzes nearly a decade of TV news and concludes that cable networks, notably Fox News, have diverged substantially from broadcast networks in both content and language, with Fox showing the greatest separation. This study provides statistically robust evidence that content profiles differ, supporting claims that Fox occupies a distinct informational ecosystem compared to ABC, CBS, and NBC, but like Groeling it does not present a direct count or classification of misinformation incidents [2]. Thus, peer-reviewed evidence since 2008 strengthens the claim of divergence but still leaves unanswered how much of that divergence is accounted for by verifiable falsehoods versus framing, topic selection, or opinion-driven content.
3. New 2025 annotation frameworks that could make direct comparisons possible
Two closely related 2025 works introduce a Media Bias Detector framework and dataset for annotating news at scale; one appears as an arXiv preprint and a companion description appears as a 2025 study [3]. These works provide a computational annotation pipeline designed to quantify selection and framing across outlets, and they could be adapted to measure misinformation prevalence by adding veracity labels, cross-checking claims against published fact-checks, and scaling human annotation. They represent the most promising avenue for producing peer-reviewed, comparative misinformation prevalence measures across Fox, ABC, CBS, and NBC, but as published they are frameworks and datasets, not completed peer-reviewed prevalence comparisons.
4. Scholarly debate over labeling Fox News and implications for measurement
A set of scholarly pieces examines whether Fox News should be categorized as standard news, partisan journalism, or propaganda, which has direct implications for how researchers measure misinformation: classification choices inform sampling frames, coding schemes, and what counts as news versus opinion. These studies emphasize that measurement outcomes depend on definitional and sampling choices, meaning that different peer-reviewed projects may reach different conclusions about “prevalence” because they operationalize misinformation and news categories differently [4] [5]. The literature also shows that public perceptions of disinformation are themselves partisan, complicating studies that rely on perception-based measures rather than independent factual coding [6].
5. The big picture: what exists, what’s missing, and the path forward
Established peer-reviewed evidence documents selection bias [7] and content divergence [8], and new annotation infrastructure [8] can enable rigorous, scalable comparisons of misinformation prevalence. However, no widely cited peer-reviewed study directly and comprehensively quantifies verified misinformation rates on Fox News versus ABC, CBS, and NBC using a unified, validated coding protocol spanning recent years [1] [2] [3]. Researchers aiming to fill this gap must adopt transparent veracity definitions, separate opinion from news programming, and combine human fact-checking with computational tools from the 2025 frameworks. The literature also flags ideological stakes and definitional choices; any new peer-reviewed comparison must document those decisions to be interpretable and replicable [4] [5] [6].