How many people find factually useless?
Executive summary
The question "How many people find factually useless?" is ambiguous and underdetermined by available research: academic studies document that many people seek noninstrumental information or feel overloaded by the information they encounter, but none of the provided sources give a clear, generalizable head count or percentage answering "how many" in those exact terms [1] [2] [3]. The literature points to measurable behaviors, including the pursuit of useless information, information overload, and exposure to misinformation, yet translating those behaviors into a single population number would overreach the supplied evidence [1] [3] [4].
1. What the question likely means and why it matters
The phrase "find factually useless" can mean at least three things: people who seek information that is noninstrumental (useless for decision making), people who judge the information they encounter to be useless, or people who are so overwhelmed by information that they disregard factual content. The academic record treats these as distinct phenomena, each measured differently [2] [1] [3]. The distinction matters because the interventions differ: reducing curiosity-driven pursuit of noninstrumental facts is a different policy problem from addressing information overload that leads to avoidance, or from combating exposure to misinformation that people may or may not judge useful [1] [3] [4].
2. What the experimental literature shows about seeking “useless” information
Foundational social-psychology experiments demonstrate that decision-makers often pursue noninstrumental information (facts that, if known, would not change their choice) and then go on to use that information anyway; in other words, people systematically seek, and then misuse, information that is useless by design [1] [2]. These controlled laboratory findings show that the phenomenon holds across a variety of tasks and contexts, but they do not provide prevalence estimates for a whole population or workforce; they establish mechanism and replicability rather than census figures [1] [2].
3. What population studies and applied research add—and their limits
Work on information overload and the COVID-19 "infodemic" documents widespread experiences of too much conflicting information and links that overload to behavioral effects such as avoidance of health checks, reduced preventive action, and higher rates of sharing false items on social platforms, indicating that many people experience or treat factual material as effectively unusable under strain [3] [4] [5]. Still, these studies typically report associations, self-reports of overload, or differential exposure by demographic group rather than a single statistic answering "how many people find factually useless," so they illuminate scale and harm without yielding a definitive headcount [3] [4].
4. Media and policy commentary on consumption choices
Commentary in outlets such as Forbes emphasizes that the root problem may be consumption habits and platform incentives: people choose, or are nudged toward, emotionally gratifying content, which suggests that many end up with noninstrumental information because of marketplace dynamics rather than individual curiosity alone [6]. This view complements the academic findings by highlighting the economic and design incentives that make factual material less accessible or less prioritized, but it, too, does not supply a direct numeric answer [6].
5. Reconciling evidence and the honest bottom line
The honest answer, given the available reporting, is that multiple streams of research confirm that a substantial portion of people repeatedly seek, encounter, or are overwhelmed by information that is noninstrumental, unusable, or treated as useless; the sources do not, however, provide a single, defensible count of people who "find factually useless" in the plain sense of the question [1] [3] [4] [2]. Converting the documented phenomena into a precise population percentage would require targeted survey work with a clear operational definition of "find factually useless," and none of the provided materials contain that specific measurement [3] [7].
6. Where the evidence points for deeper inquiry and policy
Given the pattern across laboratory experiments, overload research, and platform analyses, the practical takeaway is that the phenomenon is widespread and consequential. Interventions should focus on reducing information overload, improving media design and incentives, and studying the prevalence question directly with representative instruments that define what "finding factual information useless" actually means [1] [3] [6]. Future research needs standardized measures that link subjective judgments ("this is useless") to objective behaviors (ignoring, sharing, or misusing information) in order to produce the kind of population estimate the original question seeks [7] [8].