
Fact check: Why won't Factually disclose what AI platform they use?

Checked on October 6, 2025

Executive Summary — Clear answers are scarce, but the pattern is evident: many high-profile organizations decline to publicly name the specific AI platforms or models they run because competing priorities—commercial secrecy, legal uncertainty, and operational security—often outweigh transparency pressures. Recent reporting shows this tension playing out across newsrooms, Big Tech, and AI developers between September and October 2025, with regulators pressing for disclosure even as companies cite strategic and safety reasons for limited transparency [1] [2] [3].

1. Why newsrooms keep their AI tools behind the curtain — the newsroom’s strategic playbook! News organizations like The New York Times use AI to process massive datasets for reporting but do not publicly disclose the exact platforms they use, reflecting a choice to prioritize editorial control and competitive advantage over full public disclosure. Reporting from September 22, 2025, describes an editorial AI initiative led by Zach Seward in which internal teams collaborate with reporters and use AI for investigations, yet the article explicitly notes the newsroom’s reticence to name the underlying vendor or model, suggesting calculated operational opacity rather than simple oversight [1].

2. Regulators are asking for more detail — the EU’s push collides with corporate caution! The EU’s AI Act requires developers of general-purpose AI models to publish summaries of their training data, a regulatory push that started to bite in September 2025; however, coverage shows ambiguity about compliance timelines and company responses. Reporting from September 12, 2025, specifically questioned whether OpenAI had met the requirement for its latest model and found the company had not publicly clarified compliance status, highlighting a gap between regulatory intent and real-world disclosure practices [2]. This regulatory pressure raises the stakes but has not yet forced universal transparency.

3. Companies balance disclosure against competitive advantage — money and market dynamics matter! Large technology firms are investing heavily in AI infrastructure while pursuing strategic acquisitions and partnerships that may incentivize secrecy. For example, Meta’s purchase of a significant stake in Scale AI for nearly $15 billion, reported October 6, 2025, illustrates how financial stakes and proprietary capabilities encourage firms to keep implementation details private to protect market position and intellectual property, even as the public calls for clarity about which models handle user data [4].

4. Privacy and product design offer a public-facing rationale for silence — Apple’s different path shows alternatives! Some firms frame non-disclosure as a privacy or safety measure rather than pure secrecy. Reporting on Apple’s Veritas project from September 29, 2025, emphasizes that Apple is designing a privacy-first voice assistant and may intentionally limit disclosures about back-end model specifics to protect user data and competitive differentiation. This suggests companies sometimes justify limited transparency by citing user privacy and product security; the result is selective rather than blanket openness about AI stacks [5].

5. OpenAI and model releases illustrate mixed signaling — performance claims, not always transparency! OpenAI’s October 6, 2025, announcement of the o3-pro model underscores a focus on performance for high-stakes tasks, yet coverage shows that technical upgrades do not automatically translate into disclosures about data provenance or compliance status. The reporting indicates a pattern in which firms announce capabilities but remain circumspect about training data specifics or platform provenance, leaving regulators, journalists, and users to press for follow-up details [3].

6. Advocates demand a middle ground — transparency without exposing trade secrets! Privacy and policy reporting from September 2025 argues for measured transparency: companies should disclose whether user data is used for training and provide summaries of training data types, without necessarily revealing proprietary model architectures or internal prompts. Coverage from mid-September emphasizes the need for balanced disclosure to build trust, for instance by outlining which AI assistants do or do not train on user inputs, while acknowledging firms’ arguments about protecting sensitive implementation details [6] [7].

7. What this means for users and policymakers — the operational implications are immediate! The recent reporting between September and early October 2025 paints a clear operational challenge: regulators are tightening requirements and the public wants clarity, but firms have powerful incentives to limit disclosure for competitive, security, and privacy reasons. The evidence across these pieces shows that no single cause explains nondisclosure; instead, a mix of regulatory uncertainty, strategic business considerations, and varying corporate narratives about privacy and safety shapes why organizations often decline to disclose which AI platforms they use [1] [2] [4].

Want to dive deeper?
What are the potential risks of not disclosing AI platform information?
How do other companies handle transparency about their AI technology usage?
Can Factually be held accountable for not disclosing their AI platform details?
What role does AI platform secrecy play in maintaining competitive advantage?
Are there any regulatory requirements for disclosing AI platform information?