What is Factually? Is it an AI-based system?

Checked on January 9, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Factually presents itself publicly as a consumer-facing verification service: an "AI fact-checker" accessible through messaging apps such as WhatsApp and Telegram, with paid plans starting around €9/month [1]. Similarly named products exist in health information, and other vendors offer AI-driven fact-checking tools, each making distinct technical claims; meanwhile, independent guidance from academic libraries warns that AI-generated content is a composite of unidentifiable sources and requires lateral reading to verify [2] [3] [4].

1. What "Factually" claims to be and how it markets itself

Factually markets itself as "Your personal AI fact-checker" that verifies news, links, and quotes instantly through platforms such as WhatsApp and Telegram and advertises subscription plans from about €9 per month [1]. A separate brand, Factually Health, markets an "AI-Assistant" that it says pulls from "constantly updated, fact-checked datasets" using a proprietary method to deliver credible health information and offers different conversation modes for casual or augmented answers [2]. These are marketing claims about services and datasets made on the companies' websites [1] [2].

2. Is "Factually" an AI-based system — what the sources actually state

The public descriptions explicitly call these products AI-powered: Factually calls itself an "AI Verification Tool" and a "personal AI fact-checker" [1], and Factually Health calls its offering an "AI-Assistant" built on fact-checked datasets [2]. Other vendors in this space describe similar systems: Originality.ai says its Fact Checker is an internally built AI trained to provide factual answers [5], and Full Fact highlights AI-enabled software and a generative AI model used by human fact-checkers [6] [7]. These statements establish that the firms position their offerings as AI-based [1] [2] [5] [6] [7].

3. What is not established by the reporting — technical details and limits

None of the provided sources supply independent verification of the underlying model architectures, training corpora, or auditing procedures for Factually or Factually Health; the claims rest on the companies' own descriptions [1] [2]. Academic guidance from Texas A&M and University of Maryland libraries cautions that "AI content has no identifiers and AI output is a composite of multiple unidentifiable sources," and recommends lateral reading, checking specific factual claims against independent sources, rather than trusting AI branding alone [3] [4]. In short, the reporting does not establish whether Factually uses a large language model, a rules engine, curated databases, or a hybrid system, only that the vendors label their products as AI-driven [1] [2].

4. How accuracy and "factuality" are framed by experts and other vendors

Factuality, defined as how closely an AI's output aligns with established facts, is a recognized metric in the field, and industry and academic voices emphasize that generative models can hallucinate plausible-sounding but incorrect statements [8] [9]. Organizations such as Full Fact have built AI tools to surface misinformation for human fact-checkers and report that AI can help scale monitoring but still requires editorial oversight [6] [7]. Vendors like Originality.ai claim to reduce risk by training internal models and adding real-time context, but those are vendor claims about process rather than external proof of error rates [5].
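
To make the notion of a factuality metric concrete, here is a minimal, hypothetical sketch in Python: it scores an output as the fraction of its extracted claims that match a reference set of verified facts. The function name, the exact-match comparison, and the sample data are illustrative assumptions; none of the vendors cited above document their scoring this way, and production systems rely on far more sophisticated claim extraction and evidence retrieval.

```python
# Toy illustration of a "factuality" score: the share of a model's claims
# that can be matched against a reference set of verified facts.
# HYPOTHETICAL sketch for intuition only; this is NOT how Factually,
# Originality.ai, or Full Fact compute accuracy.

def factuality_score(claims: list[str], verified_facts: set[str]) -> float:
    """Return the fraction of claims found in the verified-fact set."""
    if not claims:
        return 1.0  # no claims made, nothing to contradict
    supported = sum(1 for claim in claims if claim in verified_facts)
    return supported / len(claims)

# Example: two of three extracted claims match the reference set.
facts = {
    "water boils at 100 C at sea level",
    "the earth orbits the sun",
}
output_claims = [
    "water boils at 100 C at sea level",
    "the earth orbits the sun",
    "the moon is made of cheese",  # a plausible-sounding hallucination
]
print(factuality_score(output_claims, facts))  # 0.666...
```

Even this toy version shows why the academic guides stress lateral reading: the score is only as good as the reference set, and a claim absent from it is neither confirmed nor refuted.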

5. How to read commercial "AI fact-checkers" in practice

Given the mix of marketing language and technical opacity, the correct factual conclusion from the available reporting is narrow: companies named Factually and Factually Health present themselves as AI-based fact-checking services [1] [2], and several organizations in the fact-checking ecosystem use AI tools to assist human reviewers [5] [6] [7]. What remains unproven by the supplied sources is the specific technical design, independent accuracy metrics, and the provenance of the datasets those systems rely on—issues academic guides explicitly flag as reasons to apply lateral reading to AI outputs [3] [4] [8].

Want to dive deeper?
What independent evaluations exist of the accuracy of consumer AI fact-checker services like Factually?
How do organizations like Full Fact integrate AI tools with human fact-checking workflows?
What methods do academics recommend for validating a claim produced by an AI fact-checker?