Which major U.S. studies track domestic extremist fatalities by ideology and how do their methodologies differ?
Executive summary
Three types of major U.S. studies track domestic-extremist fatalities by ideology: government syntheses such as the NIJ summary, which draw on multiple datasets; academic cross-ideological analyses using curated databases (notably the PNAS/START work built on PIRUS/BIAS); and policy-center compilations such as CSIS's incident-level dataset. Each reaches broadly similar conclusions about the predominance of right-wing lethal violence but differs sharply on definitions, time windows, inclusion rules, and coding procedures [1] [2] [3]. Those methodological choices (what counts as an "attack," whether plots or planned-but-failed incidents are included, how ideology is assigned, and which years are covered) drive the variation between headline counts and explanations in the public debate [4] [5] [6].
1. The NIJ/Justice Department synthesis: government-level totals and the controversy over removal
The National Institute of Justice's public write-up synthesizes scholarship and datasets to conclude that "since 1990, far-right extremists have committed far more ideologically motivated homicides," citing specific tallies (227 events taking more than 520 lives) and contrasting far-left counts (42 attacks, 78 lives). These findings mirror peer-reviewed research but became politically contested after a DOJ webpage containing the synthesis was temporarily removed from the department's site in September 2025 [1] [7].
2. START/PNAS academic analysis: curated datasets, rigorous modeling, and relative lethality
The START team and the PNAS paper compared left-, right-, and Islamist-motivated politically violent acts using coded datasets (including PIRUS and BIAS) and statistical models. They found left-wing attacks substantially less likely to result in fatalities, and Islamist attacks more likely than right-wing attacks to be fatal. Their approach emphasizes case-level coding, inter-coder reliability, and regression models (zero-inflated counts, odds ratios) to estimate differences in lethality [2] [5] [4].
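To make the odds-ratio comparisons concrete, here is a minimal sketch using invented counts in a 2x2 table; the numbers are illustrative only and are not drawn from PIRUS, BIAS, or the PNAS paper.

```python
# Hypothetical toy counts of fatal vs. non-fatal incidents by ideology.
# These figures are invented for illustration, not real dataset values.
counts = {
    "right": {"fatal": 40, "nonfatal": 160},
    "left":  {"fatal": 10, "nonfatal": 190},
}

def odds(fatal: int, nonfatal: int) -> float:
    """Odds that an incident in this group is fatal."""
    return fatal / nonfatal

# Odds ratio: how much higher the odds of a fatal outcome are for
# right-wing incidents than left-wing ones, on the odds scale the
# cited regression models report.
or_right_vs_left = odds(**counts["right"]) / odds(**counts["left"])
print(round(or_right_vs_left, 2))  # → 4.75
```

An odds ratio above 1 means the first group's incidents are more often fatal; the published studies estimate such ratios with regression models that also handle the many zero-fatality events, which simple table arithmetic does not.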
3. University of Maryland / CCJS work: incident‑level probabilities and comparative risk
A UMD-led study co-authored by Gary LaFree examined violence propensity across ideological milieus and reported that left-wing attacks were about 45% less likely to produce fatalities than right-wing attacks. The study focuses on comparative probabilities rather than raw body counts, framing the result as evidence that policymakers should weigh the relative threat posed by right-wing domestic terrorism [8].
4. CSIS and policy‑center datasets: timely counts, broader classification and shifting trends
CSIS's incident dataset compiles attacks and plots (1994–2025 in recent work) and classifies incidents into categories such as "right," "left," "jihadist," and "ethnonationalist," producing counts that have been used to claim both long-term right-wing predominance and recent upticks in left-wing activity. CSIS emphasizes incident-level transparency and narrative trends, but its published counts can change with reclassification of motives and with different start and end dates [6] [3] [9].
5. Aggregators and secondary summaries: Statista, The Conversation, PBS — quick snapshots with different emphases
Aggregators and media summaries frequently report headline shares; Statista, for example, summarized that 76% of domestic extremist killings from 2014 to 2023 were by right-wing actors, while outlets like PBS and The Conversation synthesize government and academic findings to estimate that right-wing violence accounts for roughly 75–80% of domestic-terrorism fatalities since 2001. These pieces are useful for public framing but inherit the methodological choices of the primary datasets [10] [11] [12].
6. Key methodological fault lines that explain divergent tallies
Studies diverge on at least four axes: whether they count only fatal attacks or also include non-fatal plots and planned violence (PIRUS codes planned violent intent differently from other databases) [5]; the ideological coding rules, i.e., how decentralized actors (e.g., "antifa" or lone actors) are classified, which shapes left-vs-right attribution [3] [4]; the time windows and event selection (1990s–present vs. 1994–2025) that amplify or mute trends [1] [6]; and the statistical treatment of zero-fatality events and modeling choices that change conclusions about relative lethality [2] [5].
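The first and third axes, inclusion rules and time windows, can be sketched with invented incident records; every field and value below is hypothetical, and the point is only that identical records yield different headline counts under different rules.

```python
from dataclasses import dataclass

# Illustrative incident records; all values are invented for this sketch.
@dataclass
class Incident:
    year: int
    ideology: str
    fatalities: int
    plot_only: bool  # planned but never carried out

incidents = [
    Incident(1992, "right", 2, False),
    Incident(2001, "left", 0, True),
    Incident(2019, "right", 0, True),
    Incident(2021, "left", 1, False),
]

def tally(data, start, end, include_plots):
    """Count incidents per ideology under one set of inclusion rules."""
    out = {}
    for i in data:
        if start <= i.year <= end and (include_plots or not i.plot_only):
            out[i.ideology] = out.get(i.ideology, 0) + 1
    return out

# The same four records produce different headline counts depending on
# the time window and whether foiled plots are counted.
print(tally(incidents, 1990, 2025, include_plots=True))
print(tally(incidents, 1994, 2025, include_plots=False))
```

Real datasets differ on exactly these switches, plus the harder problem of assigning the `ideology` label itself, which no simple filter captures.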
7. Bottom line and reporting caveats
Across government synopses, START/PNAS academic work, UMD analyses, and CSIS incident compilations, the consistent finding is that right-wing attackers have accounted for the majority of U.S. domestic-extremist fatalities in most multi-decadal counts. Precise shares and recent trend narratives differ, however, because of definitional choices, coding protocols, and time frames; the sources document those differences but do not resolve every classification dispute, and public controversies, such as the DOJ page removal, underscore the political sensitivity around how the data are presented [1] [7] [2] [6].