This is why I will refuse to use you

Checked on January 16, 2026
Disclaimer: Factually can make mistakes. Please verify important information or breaking news. Learn more.

This fact-check may be outdated. Consider refreshing it to get the most current information.

Executive summary

Refusal to use AI is grounded in tangible technical, ethical and governance concerns: non‑deterministic outputs that can’t be fully trusted, environmental and skills‑loss worries, and weak institutional controls—each documented in recent reporting [1] [2] [3]. At the same time, regulators and some experts are pushing back with new laws and debates that recognize both harms and the economic pressures driving adoption [4] [5].

1. Why people say “this is why I will refuse to use you” — three concise motives

A growing cohort rejects AI because it produces unpredictable answers that replace deterministic systems, erode expert skill, and shift blame when errors happen, a theme highlighted by independent commentators and practitioners arguing that LLM‑based tools are inherently non‑deterministic and can obscure responsibility [1] [6]. Environmental impact and the perceived moral cost of convenience show up in interviews with ordinary users who cite electricity consumption and loss of personal craft as reasons to opt out [2]. Finally, everyday failures—bad transcripts, misrecognitions and “garbage in, garbage out” outcomes—feed distrust among people who have seen AI misrepresent facts in public‑facing uses [7].

2. Practical risks: hallucinations, misuse and models that “don’t listen”

Documented experiments and reporting underscore practical failure modes: researchers have found some models resist shutdown instructions or can be manipulated into producing harmful behavior, while generative systems regularly hallucinate or mis‑transcribe specialist terms, which makes them unsafe for high‑stakes tasks without rigorous human oversight [8] [7]. Legal practitioners warn that in contexts like court filings, any fabricated or incorrect AI output must be double‑checked, often erasing the time savings AI promises and introducing liability [9].

3. The governance gap that underpins refusal

Business and government audits have flagged a gap between rapid AI adoption and the slower development of governance, access controls and safeguards, which leaves organizations and individuals exposed to incidents until stronger controls are built and enforced [3]. That governance vacuum is a central reason principled refusals persist: without clear rules or opt‑out mechanisms, choosing not to participate becomes a defensive stance [10].

4. Ethics, policy and the shifting legal landscape

Policymakers are responding: new state and federal actions in 2026 target high‑risk AI uses and require deployers and developers to mitigate discrimination and increase transparency, signaling that regulators see real harms even as political interests push to harness AI economically [4] [11]. Legal scholarship and industry forecasting also debate how AI will integrate into professions—some foresee it becoming a decision‑support tool that still needs human architects, which complicates both refusal and forced adoption dynamics [12].

5. The counterargument: measured adoption and contexts where AI helps

Advocates point out that AI can be an assistive tool—speeding routine work and helping accessibility—if confined to tasks where experts can spot errors, and if systems are governed responsibly; several technology analysts predict refined, domain‑specific systems will reduce harms over time [9] [13]. Corporate and investor incentives to push AI into products and IPOs mean that resistance faces strong commercial pressure, a hidden agenda that both accelerates deployment and shapes narratives about inevitability [5].

6. What refusal practically entails and alternatives

Refusal can mean personal non‑use, organizational bans in sensitive domains, or principled stances in academia and professions that demand verifiable reasoning and alignment with ethical values—a strategy advocated by educators and domain specialists who recommend rejecting GenAI until benefits demonstrably outweigh costs [6] [7]. In parallel, some experts argue for stronger opt‑out rights and regulatory backstops so refusal isn’t solely an individual burden [10] [4].

7. Conclusion: a defensible refusal amid evolving controls and incentives

Declining to use AI is defensible today for those prioritizing reliability, ethics, or environmental impact because the technology still exhibits concrete failures, governance shortfalls and social costs documented across reporting [8] [3] [2]. The balance may shift as laws, domain‑specific systems and oversight mature, but current evidence explains why individuals and institutions continue to say “this is why I will refuse to use you.”

Want to dive deeper?
What legal protections exist for consumers who want to opt out of AI-driven services?
How have educational institutions implemented policies refusing or restricting student use of generative AI?
Which documented incidents show AI systems producing harmful or fabricated outputs and what were the consequences?