Does factually rely on AI

Checked on December 5, 2025
Disclaimer: Factually can make mistakes. Please verify important information or breaking news.

Executive summary

Available reporting shows rising organisational and societal reliance on AI: corporate moves such as Reliance Industries forming Reliance Intelligence signal large-scale investment and infrastructure build-outs (the unit was incorporated on September 9–10, 2025) [1]. Academic and industry analysis also links greater AI use to increased dependence on uncontrolled data sources and to cognitive offloading that can erode critical thinking [2] [3].

1. Big corporate bets make “reliance” literal

Reliance Industries has created a wholly owned AI arm, Reliance Intelligence, formalised in filings dated September 9–10, 2025, signalling a strategic pivot into AI infrastructure, partnerships and services [1]. Financial and market reporting ties that move to heavy capital plans and partnerships, including a reported tie-up with Meta and talk of gigawatt-scale data centres, which industry analysts and reporters describe as turning reliance on AI into a central business strategy rather than a marginal experiment [4] [5] [6].

2. Infrastructure scale creates systemic dependence

Coverage emphasises scale: analysts project multi-billion-dollar spending to build power-hungry, AI-ready data centre capacity, and brokerages expect multi-year deployment timelines for gigawatt projects [4] [6]. Those same stories note that such concentrated infrastructure investments create systemic dependencies, on power grids, global semiconductor supply chains, and a handful of model and cloud providers, reshaping where economic and regulatory leverage lies [4] [7].

3. The data problem: “reliance on uncontrolled and unverified sources”

Industry commentary warns modern AI’s hunger for data increases reliance on uncontrolled and unverified sources, which drives factual errors and misinformation when models “fill in the blanks” with statistical patterns rather than grounded evidence [2]. That framing comes from think‑tank and industry analysis and explains why many observers now say AI needs mechanisms to connect model outputs to external sources of truth to reduce factual mistakes [2].

4. Human side of reliance: cognitive offloading and the erosion of critical thinking

Academic coverage reports measurable cognitive effects: a 2025 study finds increased reliance on AI tools is linked to diminished critical thinking, attributing the decline to cognitive offloading—people outsourcing memory, analysis or verification to systems instead of exercising those faculties themselves [3]. This is not framed as absolute or universal, but as a documented trend that warrants policy and educational responses [3].

5. Mixed corporate evidence on outcomes and safeguards

While corporations and consultancies tout gains—nine in ten organisations using AI regularly per a 2025 McKinsey survey—reporting also shows uneven embedding of AI into workflows and gaps in safety practices across the industry [8] [9]. Reuters and other outlets highlight that many AI companies’ safety practices currently “fail to meet global standards,” undercutting claims that reliance is matched by adequate safeguards [9].

6. Competing framings: opportunity versus risk

Sources present two durable narratives. One stresses the economic opportunity: embedding AI at scale promises growth, efficiency and new services, and drives national and corporate investment [8] [7]. The other stresses societal risk: misinformation, environmental costs of large models, degraded critical skills, and corporate safety shortfalls [2] [3] [10] [9]. Both narratives appear throughout the sources provided [8] [2] [3].

7. Where reporting is thin or silent

Available sources do not include concrete, peer-reviewed longitudinal measures showing how quickly individual critical-thinking declines translate into population-level harm beyond initial study findings. Nor do they establish definitive causal links between a single company's AI investments and nationwide dependence effects; what exists are plausible analyst projections.

8. What to watch next—practical indicators of escalating reliance

Watch for filings and spending disclosures on gigawatt data centres and power commitments, shifts in share of enterprise workflows run by AI (surveys like McKinsey’s), regulatory moves addressing AI safety gaps flagged by Reuters, and follow‑up studies on cognitive impacts and data‑grounding techniques that the industry touts as countermeasures [4] [8] [9] [3].

Limitations and takeaway: the sources document clear trends—major corporate AI unit launches, large projected infrastructure spending, and studies linking AI use to cognitive offloading—but long‑term societal causation and comprehensive safety remediation remain underreported. Readers should treat “reliance on AI” as a real, accelerating phenomenon with both economic upside and measurable risks, as reflected across the cited reporting [1] [4] [2] [3] [9] [8].
