Fact check: How does factually.co select the facts they choose to check in the 2024 election?
Executive Summary
Factually.co’s exact method for choosing 2024 election claims to check is not documented in the provided materials; the available analyses reference other fact-checkers’ and AI systems’ selection processes rather than a direct statement from Factually. The evidence shows common industry practices—AI-assisted claim detection, prioritization of statements from high-profile actors, and focusing on real-time or high-impact assertions—but no source here explicitly describes Factually.co’s internal selection rules [1] [2] [3] [4] [5] [6] [7].
1. What claimants and assertions the sources identify as worth checking — and why that matters
The analyses highlight that major fact‑checking organizations prioritize claims from high-profile political figures and moments: presidents, party leaders, debates, and official spokespeople, chosen for their visibility and potential for public harm. Full Fact’s classifier filters claims by topic and speaker to surface “important” claims for checking, showing a focus on scale and relevance rather than random sampling [1]. FactCheck.org explicitly prioritizes statements by top officials to reduce deception and confusion in U.S. politics, signaling that source prominence and potential public impact are standard selection criteria across organizations [3].
2. How AI tools are used to find and triage claims in practice
The materials indicate a growing reliance on AI to surface candidate claims for verification: Full Fact uses a BERT-based claim-type classifier fine-tuned on its data to filter and tag claims by topic and speaker, enabling keyword and speaker searches to prioritize items [1]. Factiverse’s real-time system detected and categorized over 1,100 statements in debates, showing AI can scale monitoring of live events though its selection heuristics are not fully disclosed [2]. These examples show AI as an amplifier, not an autonomous arbiter, with human editorial judgment still central to final selection.
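To make the triage pattern concrete, here is a minimal, purely illustrative sketch of keyword-and-speaker prioritization of the kind described above. The speaker roles, topic keywords, and scoring weights are all hypothetical; this is not Full Fact’s classifier or Factually.co’s actual pipeline, just a toy version of the heuristic.

```python
# Toy claim-triage filter: rank detected claims so that statements from
# high-profile speakers on priority topics surface first for human review.
# All names and thresholds below are illustrative assumptions.

HIGH_PROFILE_SPEAKERS = {"President", "Party Leader", "Official Spokesperson"}
PRIORITY_TOPICS = {"election", "economy", "health"}

def triage(claims):
    """Sort claims by a simple prominence-plus-topic score, highest first."""
    def score(claim):
        s = 0
        if claim["speaker_role"] in HIGH_PROFILE_SPEAKERS:
            s += 2  # source prominence weighs most
        if any(topic in claim["text"].lower() for topic in PRIORITY_TOPICS):
            s += 1  # topic relevance adds a smaller boost
        return s
    return sorted(claims, key=score, reverse=True)

claims = [
    {"speaker_role": "Blogger", "text": "Turnout numbers look odd."},
    {"speaker_role": "President", "text": "The economy grew 5% last year."},
]
ranked = triage(claims)
print(ranked[0]["speaker_role"])  # prints "President"
```

In a real system the scoring step would be a trained model (Full Fact’s is described as a fine-tuned BERT classifier [1]) and the ranked list would feed human editors, consistent with the sources’ picture of AI as an amplifier rather than a final arbiter.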
3. Editorial focus and institutional priorities revealed by the comparisons
FactCheck.org’s described workflow—select, research, write, edit, correct—reflects a mission-driven editorial pipeline where selection aligns with organizational goals: reducing deception and focusing resources where public understanding is most at risk [3]. Ground News and Public Editor materials discuss credibility and bias assessments but do not map directly to Factually.co; nonetheless they underscore industry emphasis on transparency and editorial standards as selection determinants [5] [7]. Across sources, selection is framed as a mix of impact, timeliness, and accountability.
4. Where the supplied evidence is silent: the specific case of Factually.co
None of the provided documents state Factually.co’s explicit selection criteria for the 2024 election. Several analyses attribute methods belonging to Full Fact, Factiverse, and FactCheck.org to other organizations, and none offers a primary-source description of Factually.co’s own process [4]. This absence carries an interpretation risk: any direct inference about Factually.co drawn from analogous organizations rests on assumption rather than documented fact.
5. Reasonable inferences from analogous organizations — and their limits
Given common industry practices—AI-assisted monitoring, prioritizing statements by prominent officials, and concentrating on high-impact claims—it is reasonable to infer Factually.co may use similar tactics, especially during a high-stakes campaign period [1] [2] [3]. However, such inferences must be framed as probable patterns, not confirmed policy, because institutional details like threshold rules, human review workflows, and transparency commitments for Factually.co are not provided in the dataset [4] [6].
6. Contrasting viewpoints and potential agendas in the supplied sources
The sources reflect differing emphases: Full Fact and Factiverse foreground technological solutions and scale [1] [2], while FactCheck.org foregrounds editorial judgment and public‑service mission [3]. Ground News and Public Editor materials focus on credibility metrics rather than claim selection [5] [7]. These editorial angles reveal possible agendas: tech‑centric actors promote AI capabilities, whereas nonprofit fact‑checkers emphasize accountability and editorial transparency. Readers should view each description as serving institutional priorities rather than representing a neutral standard.
7. Bottom line and what to ask Factually.co next
The supplied evidence does not directly answer the original question about Factually.co’s 2024 selection process; instead it outlines prevailing industry practices, including AI triage, focus on high-profile speakers, and mission-driven editorial choices, which are useful as context but insufficient as proof [1] [2] [3]. To resolve the gap, request Factually.co’s public methodology or internal editorial guidelines and ask for specifics: use of AI models, criteria for “importance,” handling of real-time events, and transparency measures for selection decisions.