Who are the leading figures warning about AI extinction risks?
Executive summary
A diverse, overlapping group of AI researchers, company chiefs and public intellectuals has publicly warned that advanced AI could, in some scenarios, pose an extinction-level threat to humanity, signing or endorsing short, high-profile statements that call for global attention and regulation [1] [2]. The roster includes founding and senior figures from major labs as well as independent academics; their warnings coexist with prominent skepticism, independent analyses that find the pathway to extinction technically challenging, and debates about motives and emphasis [3] [4] [5].
1. Who signed the 2023 “risk of extinction” statement: industry CEOs and leading researchers
The May 2023 one-sentence statement hosted by the Center for AI Safety (“Mitigating the risk of extinction from AI should be a global priority…”) was signed by senior figures including Sam Altman (OpenAI) and Geoffrey Hinton, and endorsed by dozens more across companies and academia, signalling that top executives and leading scientists publicly treat extinction-level risk from AI as plausible [1] [2].
2. The “godfathers” and technical pioneers: Hinton, Bengio, and others
Geoffrey Hinton — a widely cited “godfather of AI” who left Google to speak openly about risks — has given quantified estimates (for example, telling reporters he sees non-negligible odds of extinction in coming decades) and is repeatedly named among those calling for caution [6] [7]. Yoshua Bengio, another Turing Award laureate and deep‑learning pioneer, also signed the public warning and has voiced concern about long-term dangers from advanced models [2].
3. Current CEOs and lab heads: Altman, Hassabis, and corporate science officers
CEOs and chief scientists at the major labs put their names behind the warning: Sam Altman of OpenAI is a signatory who has publicly proposed governance measures, while Demis Hassabis of Google DeepMind and senior Microsoft technology officers were reported to have endorsed similar statements, a sign that some corporate leaders now frame existential risk as a governance problem [1] [3] [2].
4. Entrepreneurs and activists: Musk, Wozniak, and broad coalitions
Longstanding public critics of unchecked AI development such as Elon Musk and Steve Wozniak were signatories of earlier, broader calls for pauses or stronger oversight of advanced model training, reflecting a coalition that spans startup founders, industry veterans and concerned technologists who link existential risk to rapid deployment and competitive pressures [8] [9].
5. Safety researchers and former insiders: advocates pushing for technical and policy fixes
Researchers who study model safety, including some who left large firms and others now at organizations like Hugging Face, figure among the signatories and commentators urging both technical research and policy interventions; Margaret Mitchell is one cited voice urging attention to long‑term consequences while warning that immediate harms also require focus [9].
6. Skeptics and independent analyses: LeCun, Narayanan and RAND’s caution
Leading voices dispute the immediacy or plausibility of extinction scenarios: Yann LeCun has publicly dismissed apocalyptic takes as overblown, and Arvind Narayanan has warned against letting sci‑fi scenarios distract from near‑term harms [4]. Independent studies, notably a RAND scenario analysis, conclude that while extinction cannot be emphatically ruled out, it would be “immensely challenging” for AI to create an extinction threat, and they identify specific capability thresholds and indicators to watch for [5].
7. Motives, messaging and the political economy of alarm
The coalition warning about extinction includes actors with policy incentives (pushing for guardrails) and commercial incentives (companies seeking influence over standards), and critics have alleged that proclamations by industry leaders can simultaneously hype capabilities and divert attention from present harms like bias, surveillance and misinformation, an argument voiced by commentators and some signatories themselves [3] [9].
8. What this roster means for policymaking and public debate
The list of leading figures warning about extinction, a mix of Nobel- and Turing‑level researchers, lab executives and prominent entrepreneurs, has succeeded in reframing the conversation to include long‑term catastrophic scenarios. But the tension between existential alarm and skeptical, capability‑focused research (e.g., RAND) means policy responses are being debated along two axes: near‑term governance of harms from already-deployed systems, and longer‑term research and monitoring to detect indicators of catastrophic capability [5] [10].