Is factually.co created by AI answering instead of humans?
Executive summary
The available sources do not mention factually.co or identify who produces its answers; none of the supplied search results refer to that site, so there is no direct evidence in this reporting that factually.co is "created by AI answering instead of humans" (not found in current reporting). The sources do establish that AI-generated content and AI fact-checking tools are widespread and imperfect: independent studies found that generative chatbots introduce factual errors and alter quotes in a sizable share of cases (19% of answers added factual errors and 13% of quotes were altered or missing in one review) [1].
1. No direct reporting on factually.co — gap in the record
The documents you supplied contain no article, press release, or dataset that mentions factually.co by name or describes its staffing, authorship model, or editorial process; claims about whether factually.co is produced by AI or by humans therefore cannot be verified from these sources (not found in current reporting). Any definitive statement beyond "not found" would exceed the evidence provided.
2. Context: AI writing and publishing tools are widely used
Multiple pieces in your search results discuss the rapid adoption of AI writing and fact-checking tools across media and publishing and the emergence of platforms that use AI to generate or check content. Originality.ai, for example, markets automated fact-checking and AI-content tools to publishers and argues that AI has become integral to copyediting and fact-verification workflows [2] [3]. That makes it plausible for a modern content site to rely heavily on AI tools, but plausibility is not proof for any specific site [2] [3].
3. Independent studies show AI outputs can be unreliable
Research covered by DW, together with the studies it references, documents tangible limitations: one analysis found that 19% of chatbot answers introduced new factual errors and that 13% of quoted material was altered or missing, and a Tow Center review found provenance errors in 60% of the cases it tested for generative AI search tools [1]. These documented failure modes mean a site that uses AI to answer questions could publish confident but incorrect items unless human editors intervene [1].
4. Vendor claims and self‑description are not neutral proof
Vendors and toolmakers (e.g., Originality.ai and other AI-tool platforms noted in your results) promote their products’ effectiveness in automating checks or content generation [2] [3]. Such claims illuminate capabilities and incentives — vendors want adoption — but they are marketing statements and do not substitute for independent verification of how any given publisher operates [2] [3].
5. What to look for when assessing who’s answering on a site
Because the provided reporting lacks direct information, use the patterns those sources highlight to investigate factually.co yourself: check the site for a transparency or editorial-policy page, author bylines, disclosures about AI use, third-party audits or statements from independent fact-checkers, and documented corrections that reveal human editorial intervention (not found in current reporting, but suggested by the context in [2]); a quick way to start that check is sketched below. The industry trend toward hybrid workflows (human + AI) is common among reputable outlets that want to avoid the documented hallucination problems [2] [3] [1].
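As a rough illustration of that checklist, the sketch below (Python, standard library only) fetches a few commonly used policy pages and flags disclosure language. The page paths and keyword lists are assumptions for illustration, not drawn from the cited sources, and a keyword match is only a prompt to read the page yourself, never evidence of authorship on its own.

```python
"""Sketch: scan a site's public pages for AI-disclosure signals.

Assumptions (not from the cited sources): the paths and keyword
lists below are hypothetical guesses; real sites name their policy
pages differently. Keyword matching is a crude heuristic.
"""
from urllib.error import URLError
from urllib.request import Request, urlopen

# Phrases that often appear when a publisher discloses AI use
# or documents human editorial oversight (illustrative only).
AI_SIGNALS = ["generated by ai", "ai-assisted", "large language model",
              "automatically generated"]
HUMAN_SIGNALS = ["editorial policy", "our editors", "corrections",
                 "fact-checked by", "byline"]

# Hypothetical candidate pages; adjust for the site under review.
PATHS = ["/", "/about", "/editorial-policy", "/faq"]


def fetch(url: str) -> str:
    """Return lowercased page text, or "" if the request fails."""
    req = Request(url, headers={"User-Agent": "Mozilla/5.0 (research)"})
    try:
        with urlopen(req, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace").lower()
    except (URLError, TimeoutError, ValueError):
        return ""


def scan(base: str) -> None:
    """Print which disclosure phrases appear on each candidate page."""
    for path in PATHS:
        text = fetch(base.rstrip("/") + path)
        if not text:
            continue
        ai_hits = [kw for kw in AI_SIGNALS if kw in text]
        human_hits = [kw for kw in HUMAN_SIGNALS if kw in text]
        print(f"{path}: AI signals={ai_hits} human signals={human_hits}")


if __name__ == "__main__":
    scan("https://factually.co")  # the site in question
```

A run that surfaces no signals at all is itself informative: the absence of bylines, policy pages, and corrections is one of the transparency gaps the sources treat as a warning sign [2] [3].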
6. Competing viewpoints and the hidden incentives
Sources present two competing narratives: toolmakers argue AI improves scale and fact‑checking rigor for publishers (Originality.ai’s positioning) [2] [3], while independent analyses warn AI assistants “cannot currently be relied upon” for accurate news and can misattribute or invent facts [1]. Hidden incentives matter: vendors have commercial reasons to overstate reliability; platforms that prioritize speed and traffic have incentives to lean on automation even when error rates remain material [2] [3] [1].
7. Practical takeaway for readers and researchers
Given the absence of direct reporting on factually.co in the supplied results, treat claims about its use of AI versus human authorship as unresolved in this record (not found in current reporting). Meanwhile, apply the broader lessons from the sources: if a site relies on AI, expect some risk of hallucination and altered sourcing unless it documents human oversight and correction mechanisms, because independent studies have identified significant error rates for contemporary generative systems [1] [2] [3].