Is factually.co LLM powered?


Checked on February 1, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

There is no evidence in the provided reporting that factually.co (indexed as "Factually Health") runs or advertises a large language model (LLM) as part of its public technology profile. The available records list standard web technologies (HTML, JavaScript, PHP and WordPress plugins) rather than AI model infrastructure [1] [2]. The sources supplied focus on tech-stack enumeration services and on explaining what a "tech stack" is; they do not identify any backend LLMs for factually.co, so the claim that it is "LLM powered" cannot be confirmed from these documents alone [1] [3] [2].

1. What the sources actually show about factually.co’s visible stack

A technology-profile entry for "Factually Health" (presumably the site behind factually.co) lists front-end and site-management components: HTML, JavaScript, PHP and the Contact Form 7 plugin. These are typical of WordPress or similar CMS-driven sites, not indicators of AI model hosting or serving infrastructure [1]. Technology-detection platforms and directories such as RocketReach and StackShare specialize in cataloguing such observable components, and they returned the kinds of tools one would expect for a content site, not an LLM deployment [1] [4].

2. What tech-stack scanners reveal — and what they can miss

Services like Wappalyzer and TheirStack detect visible web assets (CMSs, JavaScript libraries, servers and integrations) by inspecting page markup, HTTP headers and third-party resources, and they advertise "instant" technology lookups for those layers [3] [5]. That makes them reliable for identifying client-side and some server-side technologies. They do not, however, necessarily expose private backend infrastructure: an LLM API endpoint hidden behind a gateway, custom model hosting on cloud VMs, or third-party AI providers called server-side. The sources describe a tech stack as the collection of visible services used to build and run an application, which implies visibility limits for certain layers [3] [2].
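To make the detection mechanism concrete, here is a minimal, self-contained sketch of fingerprint-based technology detection in the general style of such scanners. The signature table, sample page and headers below are invented for illustration; they are not Wappalyzer's or TheirStack's actual rules.

```python
# Minimal sketch of fingerprint-based tech detection: match known
# signatures against page markup and HTTP headers. The signature
# table is an illustrative assumption, not a real scanner's ruleset.
import re

# technology -> (where to look, pattern)
SIGNATURES = {
    "WordPress":      ("html",   re.compile(r"/wp-content/|/wp-includes/")),
    "Contact Form 7": ("html",   re.compile(r"wpcf7|contact-form-7")),
    "PHP":            ("header", re.compile(r"PHP", re.I)),
    "jQuery":         ("html",   re.compile(r"jquery[.\-]", re.I)),
}

def detect(html: str, headers: dict) -> set:
    """Return the set of technologies whose signature matches."""
    found = set()
    header_blob = " ".join(f"{k}: {v}" for k, v in headers.items())
    for tech, (where, pattern) in SIGNATURES.items():
        haystack = html if where == "html" else header_blob
        if pattern.search(haystack):
            found.add(tech)
    return found

# Hypothetical markup resembling what a WordPress site might serve.
sample_html = ('<script src="/wp-content/plugins/contact-form-7/'
               'includes/js/index.js"></script>')
sample_headers = {"X-Powered-By": "PHP/8.1"}
print(sorted(detect(sample_html, sample_headers)))
# -> ['Contact Form 7', 'PHP', 'WordPress']
```

Note what the sketch cannot see: a server-side call from PHP to an LLM API leaves no trace in the markup or headers, which is exactly the visibility limit the sources imply.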

3. Why absence of “LLM” in these records is not definitive disproof

The reporting supplied documents what scanners and directories detect and explains the concept of a tech stack, but those tools and write-ups do not claim to reveal every internal service a company uses [3] [2]. So while the provided profiles list no LLM frameworks, model-serving platforms, or AI-specific APIs, that omission does not prove factually.co uses no LLM behind the scenes; it only means the supplied records do not show one [1] [3].

4. How providers typically disclose LLM use — and what to look for next

Companies that integrate LLMs publicly often disclose vendor names (OpenAI, Anthropic, Cohere), model types, or specific AI features in product documentation, privacy policies or tech profiles; the materials here contain no such disclosures and instead mirror standard web hosting and content tools [1] [4]. Absent explicit vendor or model names in the visible stack, further verification would require direct statements from the operator, review of privacy/terms documents that reference AI processing, or network-level tracing to identify calls to known AI APIs, none of which appear in the supplied sources [1] [3].
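One crude form of that tracing can be sketched as a scan of client-visible page source (or captured traffic logs) for hostnames of well-known AI API providers. The hostname list below is an assumption for illustration, and, as noted above, purely server-side calls would not appear in client-visible markup at all.

```python
# Hedged sketch: look for known AI API hostnames in text such as page
# source or a traffic log. The host list is illustrative, not exhaustive.
KNOWN_AI_API_HOSTS = [
    "api.openai.com",
    "api.anthropic.com",
    "api.cohere.com",
    "generativelanguage.googleapis.com",
]

def find_ai_endpoints(text: str) -> list:
    """Return the known AI API hosts mentioned in the given text."""
    return [host for host in KNOWN_AI_API_HOSTS if host in text]

# Hypothetical page that calls an AI API directly from the browser.
page = '<script>fetch("https://api.openai.com/v1/chat/completions")</script>'
print(find_ai_endpoints(page))  # -> ['api.openai.com']
```

A negative result from this kind of scan is weak evidence, for the same reason the sources give: it only rules out client-side calls to the listed providers, not server-side or proxied ones.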

5. Bottom line: what the evidence supports and what remains unknown

Based strictly on the reporting provided, there is no documented indication that factually.co is "LLM powered": the indexed tech stack lists conventional website technologies, and the detection services cited report no AI model infrastructure [1] [3] [2]. Because those sources focus on observable, front-end stack discovery and cannot capture hidden backend services or undisclosed API use, however, these materials alone cannot rule out an LLM behind the scenes. Confirmation would require explicit disclosure from the site or deeper technical probing that the current documents do not include [1] [3] [2].

Want to dive deeper?
Does factually.co’s privacy policy or terms of service mention AI or third‑party model providers?
Which web-facing signals or DNS/API traces reliably reveal when a site calls third‑party LLM services?
Have other sites indexed by RocketReach or Wappalyzer been later found to use undisclosed LLM backends, and how were they detected?