Is Factually.co powered by an LLM?
Executive Summary
Whether Factually.co uses large language models (LLMs) cannot be established from the provided materials: the available analyses show no direct, consistent evidence that Factually.co is powered by an LLM, while a small set of related pages describe AI-powered products with unclear connections to the domain. The most concrete item in the dataset is a 2025 fact-check page that found no direct proof Factually.co runs ChatGPT or any other LLM, though several peripheral sources describe AI or LLM use in adjacent projects and vendors that could indicate possible but unverified ties [1] [2] [3].
1. Why the central question remains unresolved and what the dataset actually says
The provided materials present a patchwork of findings and assertions that stop short of confirming whether Factually.co uses an LLM. Multiple analyses explicitly state no direct mention or evidence linking Factually.co to an LLM, creating a consistent negative finding across several items [4] [5] [6]. One source, a fact-checking page dated February 20, 2025, reached the same conclusion — it found no evidence Factually.co uses ChatGPT or another LLM [1]. Other items in the dataset reference LLM-based tools for fact-checking and AI-powered products in the same thematic space, but they do not name or document Factually.co as a user or operator of those models [2] [7].
2. What affirmative signals appear in adjacent or vendor materials
Several documents in the dataset describe AI-powered systems and LLM-enabled fact-checking projects that could plausibly be mistaken for evidence that Factually.co uses similar technology. For example, academic and open-source projects like FACT-GPT and Veracity are explicitly LLM-based and demonstrate the standard industry pattern of combining LLMs with retrieval agents for fact-checking [2] [7]. Empathy’s internal documentation claims integration of LLM-driven search to improve factual responses, and one analysis states that Empathy’s work indicates Factually.co has an association with LLM usage, though that link is asserted rather than documented with direct evidence [3]. These materials are relevant context but not direct proof.
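To make the retrieval-plus-LLM pattern concrete, here is a minimal sketch of how such a pipeline is typically wired together. Every name in it (search_evidence, check_claim, the verdict labels) is an illustrative assumption; this is not code from FACT-GPT, Veracity, or Factually.co.

```python
# Minimal sketch of the retrieval-plus-LLM fact-checking pattern described
# above. All names here are illustrative assumptions, not code from
# FACT-GPT, Veracity, or Factually.co.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    label: str      # e.g. "supported", "refuted", "not enough evidence"
    rationale: str  # the model's short justification

def search_evidence(claim: str) -> list[str]:
    """Retrieval step: a real system would query a search index or news
    corpus here; stubbed out for the sketch."""
    return []

def check_claim(claim: str, llm_complete: Callable[[str], str]) -> Verdict:
    """Bundle retrieved passages into a prompt and ask an LLM (any
    text-completion callable) for a labeled verdict plus a rationale."""
    passages = search_evidence(claim)
    prompt = (
        f"Claim: {claim}\n"
        "Evidence:\n" + "\n".join(f"- {p}" for p in passages) + "\n"
        "Reply with SUPPORTED, REFUTED, or NOT ENOUGH EVIDENCE on the "
        "first line, then one sentence of reasoning."
    )
    reply = llm_complete(prompt)
    label, _, rationale = reply.partition("\n")
    return Verdict(label=label.strip().lower(), rationale=rationale.strip())
```

Systems like those cited above reportedly route the retrieval step through tool-using agents; the stub flattens that into a single function for brevity. The point is only that this architecture is standard in the space, not that Factually.co uses it.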
3. Recent direct check: the 2025 fact-check page that finds no evidence
The most recent dated piece in the dataset is a fact-check titled “Fact Check: Factually.co uses chatgpt” published on February 20, 2025; its analysis concluded there is no direct evidence Factually.co is powered by ChatGPT or any other LLM [1]. That direct, recent check carries weight because it specifically targeted the claim and surfaced no corroborating technical disclosures, vendor contracts, or public statements from Factually.co to substantiate LLM use. The absence of confirmation in response to a targeted inquiry is a meaningful negative result and should be treated as the leading factual finding given its recency and focus.
4. Where ambiguity remains and what would resolve it
Ambiguity persists because several related entities and projects in the dataset are AI- or LLM-capable, which invites plausible inference but supplies no evidence. Corporate case studies, vendor pages for “Factually Health,” Troon Technologies’ client work, and open-source LLM fact-checking tools appear in the corpus but do not contain explicit, dated statements that Factually.co itself runs LLMs [6] [8] [3]. To resolve the question definitively, one would need a public technical disclosure from Factually.co, a statement from its engineering team, an audit of its APIs, or a verifiable procurement record indicating an LLM license or cloud LLM integration; none of these are present in the current dataset.
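As a concrete example of the weakest form of such an audit, the sketch below fetches a public page and scans it for mentions of known LLM providers. The URL and keyword list are hypothetical assumptions; a match would only be a hint, and an empty result proves nothing, which is exactly why the formal disclosures listed above are needed.

```python
# Hypothetical, non-authoritative probe: scan a public page for strings
# that hint at LLM providers. This cannot confirm or rule out LLM use;
# only a formal disclosure or audit could do that.
import requests

PROVIDER_HINTS = ["openai", "chatgpt", "anthropic", "claude", "gpt-4", "llama"]

def scan_for_llm_hints(url: str) -> list[str]:
    """Return the provider-related strings found in the page's HTML."""
    html = requests.get(url, timeout=10).text.lower()
    return [hint for hint in PROVIDER_HINTS if hint in html]

# Hypothetical usage:
# print(scan_for_llm_hints("https://factually.co"))
```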
5. How to interpret possible agendas and the reliability of claims
Materials that promote LLM-based fact-checking systems tend to emphasize capability and innovation, which creates a promotional bias toward suggesting widespread LLM adoption; readers should treat such claims as agenda-prone without corroboration [2] [7]. Conversely, the 2025 fact-check page appears investigatory and concludes with a negative finding; that implies a more skeptical posture but does not prove nonuse in absolute terms [1]. The strongest inference from the dataset is that the claim “Factually.co is powered by an LLM” is unproven and presently unsupported by direct evidence, while the presence of nearby AI projects only signals industry context, not confirmation.
6. Bottom line and recommended next steps for verification
Based on the assembled analyses, treat the statement “Factually.co is powered by an LLM” as not verified: no document in the dataset provides a direct declaration, configuration detail, or procurement record to confirm LLM usage, and a targeted 2025 fact-check concluded the same [1]. To move from “unproven” to “verified,” request an explicit technical statement from Factually.co, seek a system architecture whitepaper or API disclosure, or obtain a vendor invoice or contract showing LLM provisioning; absent that, the correct public posture is to report the claim as unsupported by available evidence.