Examples of allegedly fake articles published by factually.co
Executive Summary
The materials provided contain three competing claims about the website Factually/factually.co: one set of analyses finds no evidence that Factually published specific “allegedly fake” articles and instead discusses general fact-checking guidance and confusion with other fact-checkers [1] [2] [3]; another set highlights a detailed Factually examination of the Daily Mail’s practices [4]; and a third cluster raises trust concerns based on third‑party site ratings while finding no direct evidence of politically motivated reporting or dismissal of legal testimony [5] [6] [7]. Across these inputs, the clearest factual takeaways are that (1) attribution linking factually.co to particular fake stories is inconsistent, (2) independent web‑trust services give divergent ratings that should prompt caution, and (3) there is no supplied, verifiable documentation showing that Factually dismissed court testimony or that its owner’s partisan identity influenced reporting [7] [1].
1. What people actually claimed — a messy set of allegations and non‑findings
The dataset collects three different investigative threads that do not converge on a single, supported allegation. One analysis reports that the supplied sources do not mention factually.co or any allegedly fake articles and instead offer general guides to fact‑checking and background on “fake news” [1] [2]. Another analysis documents a specific Factually piece examining the Daily Mail’s fact‑checking process, noting formal correction mechanisms and inconsistent application by the Daily Mail, implying Factually performed a critique rather than published “fake” material [4]. A separate set of inputs focuses on web‑trust scores and site legitimacy, with one vendor flagging a low trust score while another finds a valid SSL and higher trust, producing contradictory signals about site reliability [5] [6]. Finally, an explicit claim that Factually dismissed legal testimony about Donald Trump lacks direct evidence in the provided material [7].
2. Reputation and trust metrics — conflicting machine judgments
Two independent domain‑rating services in the packet give different pictures of factually.co. Scam Detector assigns a low trust score (40.3) and warns of potential phishing or spam risk, a signal typically used to advise caution when interacting with a site [5]. By contrast, a Scamadviser‑style analysis finds a higher trust rating and a valid SSL certificate but flags that the site’s recent registration and choice of registrar historically correlate with lower‑scoring domains, again urging further vetting [6]. These divergent automated assessments reflect different heuristics: one emphasizes red‑flag signals common to scam operations, the other gives weight to technical indicators such as a valid certificate and registration age. Neither metric proves journalistic accuracy or political bias; both quantify operational risk and should be treated as complementary but not definitive measures of editorial reliability.
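That distinction between heuristics is also testable without relying on either vendor: the technical signals the second service weighs, a valid TLS certificate and the domain’s registration date, can be checked directly with standard tooling. The Python sketch below is illustrative only; it assumes network access and that the public rdap.org redirector returns standard RDAP records for .co domains, and it is not drawn from any of the cited analyses.

```python
# Sketch: independently reproducing the two technical signals the rating
# services weigh -- TLS certificate validity and domain registration date.
# Assumes network access; rdap.org is a public RDAP redirector (assumption:
# it serves records for .co domains in the standard RFC 9083 format).
import json
import socket
import ssl
import urllib.request
from datetime import datetime, timezone

DOMAIN = "factually.co"  # domain under review

def certificate_expiry(host: str, port: int = 443) -> datetime:
    """Return the notAfter (expiry) date of the site's TLS certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # cert["notAfter"] looks like 'Jun  1 12:00:00 2026 GMT'
    return datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                  tz=timezone.utc)

def registration_date(domain: str) -> str:
    """Query RDAP for the domain's registration event date."""
    url = f"https://rdap.org/domain/{domain}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    for event in data.get("events", []):
        if event.get("eventAction") == "registration":
            return event.get("eventDate", "unknown")
    return "unknown"

if __name__ == "__main__":
    print("TLS certificate valid until:", certificate_expiry(DOMAIN))
    print("Domain registered:", registration_date(DOMAIN))
```

Checking these raw signals independently helps explain why the two vendors diverge: the same facts, a relatively young domain served with valid encryption, can be scored as risk or as reassurance depending on how each heuristic weights them.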
3. Editorial practice claims — critique of others, not proof of fabrication
The packet’s strongest content attributed to Factually is an in‑depth critique of the Daily Mail’s fact‑checking process that documents formal correction mechanisms and inconsistencies in their application, including watchdog interventions and post‑publication corrections [4]. That piece, as described here, is itself a fact‑checking exercise and therefore not an example of fabricated reporting by Factually. The inputs repeatedly underscore a pattern of critique and analysis rather than admission of producing falsified articles [4] [11]. Where complainants allege dismissive treatment of legal testimony or partisan ownership, the provided material explicitly reports no direct evidence to substantiate those assertions [7], so the claim remains unverified within this corpus.
4. Confusion between similarly named entities — how errors propagate
Several analyses point to public confusion between Factually/factually.co and established fact‑checking organizations like FactCheck.org, and to misunderstandings about funding and affiliations [3]. This naming confusion can readily produce misattribution of content and complaints, where critics or aggregators link critiques or errors to the wrong organization. The provided materials highlight that such misattribution, combined with automated site‑scoring tools that produce conflicting outputs, creates fertile ground for disputed claims to spread even when primary evidence is weak or absent [1] [3].
5. What’s missing and what that implies for further verification
Across the supplied analyses there is a notable absence of primary artifacts: no archived pages, no direct links to the specific “allegedly fake” articles, and no court documents or direct editorial statements proving partisan suppression of testimony [1] [7]. Without those primary records, the dataset supports caution and targeted follow‑up rather than definitive conclusions. To resolve the dispute, the necessary next steps are straightforward: obtain the original Factually articles in question, capture archival snapshots or timestamps, and compare their claims to contemporaneous primary sources; corroborate site‑trust claims across multiple third‑party auditors; and clarify ownership and editorial policies with documented records. The inputs here permit only provisional findings: there is inconsistent evidence of wrongdoing and clear grounds for additional, primary‑source investigation [2] [6].
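One of those steps, locating or capturing archival snapshots of the disputed articles, is straightforward to do programmatically once URLs are in hand. The sketch below queries the Internet Archive’s public Wayback Machine availability endpoint; the article URL shown is a hypothetical placeholder, since the packet supplies no links to the allegedly fake pieces, and the response structure is assumed from the endpoint’s public documentation.

```python
# Sketch: locating an archived snapshot of a disputed article via the
# Internet Archive's public "availability" endpoint. The article URL is a
# placeholder -- the source material supplies no links to the pieces in question.
import json
import urllib.parse
import urllib.request

ARTICLE_URL = "https://factually.co/example-disputed-article"  # hypothetical

def closest_snapshot(url: str, timestamp: str = "") -> dict:
    """Return the closest archived snapshot record for `url`, if any."""
    query = {"url": url}
    if timestamp:
        query["timestamp"] = timestamp  # e.g. "20240101" to anchor the search
    endpoint = "https://archive.org/wayback/available?" + urllib.parse.urlencode(query)
    with urllib.request.urlopen(endpoint, timeout=10) as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest", {})

if __name__ == "__main__":
    snap = closest_snapshot(ARTICLE_URL)
    if snap.get("available"):
        print("Archived copy:", snap["url"], "captured", snap["timestamp"])
    else:
        print("No archived snapshot found; one could be requested via web.archive.org/save/")
```

A snapshot whose capture timestamp predates the complaints would let a reviewer compare the article as originally published against the claims made about it, which is precisely the primary evidence the supplied corpus lacks.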