Who are the most prominent skeptics of artificial intelligence?
Executive summary
Prominent public skeptics of AI include Gary Marcus, Geoffrey Hinton, Yoshua Bengio and Max Tegmark: Marcus has sustained a critique of LLMs [1] [2], Hinton has shifted from insider to outspoken critic [3], and Bengio and Tegmark have warned about uncontrollable "agent"-style systems [4]. Skepticism extends beyond computer science: economists and commentators such as Martin Wolf doubt that the recent wave of tech investment will boost productivity, and expert surveys show substantial reservations about AI's longer-term promise [5] [6].
1. The technical skeptic: Gary Marcus—arguing current LLMs are “inherently broken”
Gary Marcus is the clearest and most consistent public foil to Silicon Valley's AI optimism, arguing that today's generative models are "too flawed to be transformative" and that LLMs are "inherently broken" as a path to real intelligence; he has pressed this case in venues ranging from Web Summit to his Substack and media interviews throughout 2024–2025 [1] [2] [7]. Marcus frames his critique as technical and methodological: he advocates alternative research directions (notably neurosymbolic approaches) and emphasizes long-standing problems such as distribution shift and safety incidents tied to chatbots [2].
2. The insider turned alarmist: Geoffrey Hinton—risk from displacement and societal harm
Geoffrey Hinton, long regarded as a founding figure of deep learning, has moved from insider to outspoken critic, resigning from Google to speak more freely and becoming "one of the most prominent skeptics" of AI's social and economic consequences, including warnings about mass unemployment and other harms [3]. Hinton's prominence matters because his critiques come from someone who helped build the field now under scrutiny [3].
3. Safety scientists: Yoshua Bengio and Max Tegmark warn about “out of control” agents
Leading AI scientists such as Yoshua Bengio and Max Tegmark have publicly warned that agent-style systems could become dangerous if their creators lose control, explicitly calling attention to the risks of building systems that act autonomously at scale [4]. Their framing shifts the debate from narrow model faults to systemic questions of control, governance and the architecture of future AI systems [4].
4. Economists and commentators: skepticism about productivity and economic impact
Skepticism isn't limited to computer scientists. Financial and economic commentators question whether the AI investment wave will translate into broad productivity gains: Martin Wolf is cited as skeptical that tech investment boosts productivity, while conceding that AI might prove him wrong, and expert polling shows that many analysts doubt long-term AI scenarios [5] [6]. These critics emphasize measurable economic outcomes rather than technological possibility [5] [6].
5. A wider movement: artists, writers, consumer advocates and technologists
Beyond named scientists, a coalition of writers, artists, tech ethicists and consumer groups has mobilized against particular uses of generative AI, arguing that the tools have been "forced onto people by billionaires," threaten privacy and consumers' purchasing decisions, and raise copyright and creative-labor concerns [8] [9]. These constituencies frame their skepticism around social justice, consent and market power rather than purely technical limits [8] [9].
6. Competing perspectives and intra‑field debate
Critics like Marcus, Hinton, Bengio and Tegmark differ in emphasis: Marcus stresses the technical inadequacy of current LLM methods [1] [2], Hinton warns of large-scale economic dislocation [3], and Bengio and Tegmark focus on loss of control over autonomous systems [4]. Proponents within industry push back, calling some skeptics "mediocre" or overly pessimistic, but beyond that general tension the provided sources do not detail rebuttals from named industry leaders; the reporting focuses on the skeptics themselves [7] [1].
7. What these skeptics agree on and what remains contested
Across these voices there is consensus that AI poses real risks: technical failure modes, economic displacement, loss of control and social harms. They disagree on magnitude and timelines: some expect eventual solutions or productivity gains, while others see fundamental limits to current approaches [1] [3] [5]. The evidence cited in reporting ranges from technical critiques and public resignations to expert surveys, but no single source in this set offers a definitive measure of net benefit versus harm [1] [3] [5] [6].
Limitations: this summary draws only on the provided reporting; other prominent skeptics, industry rebuttals and longitudinal empirical studies do not appear in these sources and are therefore not covered here.