Factually is actually incorrect 90% of the time
Executive Summary
The claim collapses: nothing in the supplied evidence supports “factually is actually incorrect 90% of the time.” The assertion that factual claims are wrong nine times out of ten is logically unstable as stated, no cited study or dataset yields a 90% inaccuracy rate, and the closest large-scale accuracy surveys report substantially lower error rates, roughly 50–60% on certain journalistic error measures [1] [2] [3]. The statement reads as rhetorical exaggeration or paradox rather than an empirically demonstrated fact, and contemporary assessments of AI, journalism, and misinformation emphasize variability by domain, not a universal 90% error rate [2] [1] [4].
1. Why the 90% figure fails basic logical and evidentiary tests
The claim is self-referential and therefore unstable: it is itself a factual assertion, so if “factual” statements were wrong 90% of the time, the claim would have at best a one-in-ten chance of surviving its own premise. Analyses of definitions show that a “fact” is by definition something supported by evidence or reality, and discussions about “incorrect facts” tend to reflect linguistic looseness rather than measured error rates [5] [6]. No provided source supplies a study or dataset that produces a 90% inaccuracy metric for all factual claims, and the materials that examine shifting scientific consensus merely illustrate that knowledge evolves: some previously accepted “facts” were revised, which is not evidence that facts in general are 90% false [7].
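A minimal arithmetic sketch makes the self-undermining point concrete; the only number used is the rate asserted by the claim itself, not any measured value.

```python
# Illustrative arithmetic only: 0.90 is the rate asserted by the claim,
# not a measured error rate.
claimed_error_rate = 0.90

# The claim "factual statements are incorrect 90% of the time" is itself a
# factual statement, so on its own premise it can be true with probability
# at most 1 - claimed_error_rate.
chance_claim_survives_own_premise = 1 - claimed_error_rate
print(f"{chance_claim_survives_own_premise:.0%}")  # -> 10%
```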
2. What empirical studies actually say about accuracy in media and information
Large, domain-specific studies cited in the analyses show error rates far below 90% in their contexts: a major Poynter review of cross-national studies found that roughly 50–60% of sampled newspaper stories contained some type of error, while other media-bias and fact-checking reviews emphasize broad variation depending on outlet and topic [1] [3]. Researchers and practitioners measuring AI hallucinations or journalistic accuracy focus on contextual rates (topic, outlet, method of measurement) rather than a universal percentage, and a 2025 primer on AI accuracy stresses methodological complexity, with error rates varying widely by model, benchmark, and task [2].
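To see why a single percentage means little without a stated sample and method, here is a rough sketch of how an error proportion and its uncertainty could be estimated; the function and the counts are hypothetical, chosen only to echo the approximate 50–60% range reported in the cited reviews, not drawn from them.

```python
import math

def error_rate_with_ci(errors: int, sample_size: int, z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and normal-approximation 95% confidence interval for an error proportion."""
    p = errors / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical counts (not from the cited studies): 330 of 600 sampled stories
# contain at least one error of any type.
rate, low, high = error_rate_with_ci(errors=330, sample_size=600)
print(f"estimated error rate {rate:.0%} (95% CI {low:.0%}-{high:.0%})")
```

The estimate, its interval, and the definition of “error” all depend on the sampling frame and measurement method, which is exactly what the 90% claim never specifies.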
3. The nearest real phenomena: evolving science and misinformation, not blanket falsehood
Selected sources show that specific fields and eras accumulated once-accepted but now-discarded claims: popular lists of “science facts” overturned since the 1990s illustrate scientific revision, not an ongoing 90% failure rate for all factual claims [7]. Government and fact-checking organizations document the prevalence of misinformation in particular events or narratives, for example DHS debunking campaigns, but these examples quantify specific false narratives rather than asserting that nine out of ten facts are false across the board [8]. In short, reality contains pockets of high error and revision, but empirical work distinguishes those pockets from the whole of factual discourse.
4. Why people perceive high levels of falsehood: cognitive and media dynamics
Multiple analyses emphasize source selection, confirmation bias, and the uneven quality of outlets as drivers of perceived inaccuracy [4] [3]. When people sample low-quality outlets or partisan silos, they encounter far more incorrect or misleading claims, which makes the information environment feel far less factual than it is overall. This selective exposure effect can make a minority of false statements feel like a dominant reality, but empirical studies that sample systematically report error proportions well below 90% and show errors concentrated in specific genres or outlets [1] [4].
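A small simulation can make the selection effect concrete; the outlet error rates and reading mixes below are assumptions chosen for illustration, not measurements from the cited studies.

```python
import random

random.seed(0)

# Assumed, illustrative false-claim rates per outlet type (not measured values).
OUTLET_ERROR_RATES = {"high_quality": 0.05, "low_quality": 0.60}

def perceived_error_rate(outlet_mix: dict[str, float], n_claims: int = 10_000) -> float:
    """Fraction of sampled claims that are false, given how often each outlet type is read."""
    outlets = list(outlet_mix)
    weights = list(outlet_mix.values())
    false_count = 0
    for _ in range(n_claims):
        outlet = random.choices(outlets, weights=weights)[0]
        if random.random() < OUTLET_ERROR_RATES[outlet]:
            false_count += 1
    return false_count / n_claims

print(perceived_error_rate({"high_quality": 0.8, "low_quality": 0.2}))  # ~0.16
print(perceived_error_rate({"high_quality": 0.1, "low_quality": 0.9}))  # ~0.55
```

Under these assumptions, even a reader drawing nine in ten claims from the low-quality pool encounters falsehoods a little more than half the time, far short of the 90% asserted.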
5. Bottom line and how to assess similar claims going forward
The available analyses collectively show that the 90% claim is unsupported, logically problematic, and contradicted by domain-specific empirical studies that report lower error rates and emphasize heterogeneity by topic and source [5] [1] [2]. When confronted with a sweeping numeric claim about accuracy, demand a cited methodology, sampling frame, and date; without them, treat the number as rhetorical. For journalistic accuracy, media-bias, and AI-accuracy metrics, rely on recent, transparent studies: the examples in the supplied analyses, dated 2014–2025, show that measurable error exists, but nowhere near the universal 90% rate claimed [1] [2] [4].