How do researchers define and code 'left‑wing' and 'right‑wing' terrorism in major databases?
Executive summary
Researchers define and code "left‑wing" and "right‑wing" terrorism by combining an ideological rubric (motives and goals) with operational rules about evidence and case inclusion; major academic databases rely on public records, media reports, court documents, and a codebook that maps stated or inferred goals onto categories such as left, right, religious, or ethnonationalist [1] [2] [3]. These coding choices—not just raw incident counts—shape conclusions about frequency, lethality, and policy priorities and therefore draw scrutiny and competing interpretations from academics, government agencies, and advocacy groups [4] [5].
1. How ideology is defined: motives and political goals
Scholars operationalize "left‑wing" terrorism as violence motivated by opposition to capitalism, imperialism, or colonialism, or by support for causes such as environmentalism, animal rights, LGBTQ+ rights, black nationalism, communism/socialism, anarchism, or anti‑fascist/"anti‑authority" rhetoric. "Right‑wing" terrorism typically covers ultranationalist, racist, white‑supremacist, anti‑immigrant, anti‑government, and other far‑right doctrines. Both definitions appear explicitly in public codebooks and briefs such as CSIS's dataset methodology [6] [3] [7]. These definitional outlines are normative choices: they decide whether a single‑issue violent actor (e.g., one engaged in eco‑sabotage) is classified as left‑wing or "other," based on ideological roots advertised by the actor or inferred from sources [6] [8].
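As a purely illustrative sketch, the rubric that maps stated or inferred goals onto coarse categories can be thought of as a lookup. The category names, keyword markers, and `CODEBOOK` structure below are invented for this example; no real database uses this schema, and actual codebooks apply far richer rules.

```python
# Hypothetical codebook: marker lists are invented for illustration only.
CODEBOOK = {
    "left_wing": {"anti-capitalist", "anarchist", "environmentalist",
                  "animal rights", "anti-fascist", "communist"},
    "right_wing": {"white supremacist", "ultranationalist",
                   "anti-immigrant", "anti-government militia"},
}

def code_ideology(stated_goals: set[str]) -> str:
    """Map an actor's stated or inferred goals onto one coarse category."""
    for category, markers in CODEBOOK.items():
        if stated_goals & markers:  # any overlap with this category's markers
            return category
    return "other"  # single-issue or unclassifiable actors fall through
```

Under this toy rubric, `code_ideology({"environmentalist"})` returns `"left_wing"`, while a goal set matching neither marker list falls into `"other"`, mirroring how single‑issue actors end up classified by their advertised ideological roots.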
2. Evidence standards: what counts as an incident or actor
Major datasets require demonstrable links between violence and ideology, using publicly available court documents, newspaper accounts, and published sources to code attributes; common inclusion rules are arrest or indictment for ideologically motivated offenses, deaths linked to ideological activity, affiliation with designated organizations, or association with leaders indicted for political violence [2] [1]. The Global Terrorism Database (maintained by START) and peer‑reviewed studies published in PNAS emphasize multi‑stage collection, with automated article sweeps followed by human analyst review against a published codebook, to reduce arbitrary classifications; even so, reliance on media and legal records creates selection effects toward incidents that gain public or prosecutorial attention [1] [9].
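A minimal sketch of how such disjunctive inclusion rules could be expressed, assuming hypothetical field names: the `Incident` class and its attributes are invented for illustration and do not correspond to any database's actual variables.

```python
from dataclasses import dataclass

# Hypothetical schema: field names are invented for illustration.
@dataclass
class Incident:
    indicted_for_ideological_offense: bool
    deaths_linked_to_ideology: int
    affiliated_with_designated_group: bool

def include(incident: Incident) -> bool:
    """An incident enters the dataset if any one evidence rule is met."""
    return (incident.indicted_for_ideological_offense
            or incident.deaths_linked_to_ideology > 0
            or incident.affiliated_with_designated_group)
```

The disjunction is the point: widening any single rule widens the whole dataset, which is why inclusion criteria draw so much scrutiny.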
3. Categorization rules and mutually exclusive coding
Datasets typically force incidents into one primary motivational bucket (left, right, religious, ethnonationalist, or other), so mixed‑motive events are resolved by codebook tie‑breaking rules or the dominant expressed goal, which can obscure cross‑cutting drivers or tactical mimicry [3] [1]. Scholars note that coding schemes sometimes collapse diverse groups (for example, anarchists, environmentalists, and animal‑rights militants) into "left‑wing," and likewise group militias, white supremacists, and anti‑immigrant actors as "right‑wing"; these choices affect comparative analyses of lethality and frequency [8] [6].
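The "one primary bucket" rule can be sketched as follows. The `PRIORITY` order and the idea of per‑category scores are hypothetical stand‑ins for whatever resolution logic a real codebook specifies.

```python
# Hypothetical tie-breaking order, invented for this sketch.
PRIORITY = ["religious", "ethnonationalist", "right_wing", "left_wing", "other"]

def primary_category(goal_scores: dict[str, int]) -> str:
    """Pick the highest-scoring motive; break ties by fixed codebook order."""
    best = max(goal_scores.values(), default=0)
    if best == 0:
        return "other"
    # The first category in PRIORITY attaining the top score wins.
    return next(c for c in PRIORITY if goal_scores.get(c, 0) == best)
```

A mixed‑motive event scoring equally on two ideologies is silently resolved to one bucket (here, `{"right_wing": 2, "religious": 2}` codes as `"religious"`), which is exactly how cross‑cutting drivers get obscured in mutually exclusive schemes.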
4. Measurement consequences: frequency, lethality, and narratives
Because right‑wing actors have been responsible for a majority of incidents and fatalities in many U.S. datasets, analysts conclude right‑wing violence is more frequent and deadlier in recent decades; peer‑reviewed work and syntheses report that right‑wing attacks account for a disproportionate share of deaths, though left‑wing incidents have risen from low levels in some years [4] [5] [6]. These empirical patterns, however, depend on coding thresholds (what qualifies as terrorism versus hate crime or protest violence), and federal labels influence resource allocation and which incidents are cataloged in which datasets [4] [5].
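A toy demonstration of that threshold sensitivity, using entirely invented events: the same four incidents tallied under a narrow rule (terrorism charges only) and a broader one (terrorism or hate‑crime charges) yield different left/right shares.

```python
from collections import Counter

# Invented events; "category" and "charged_as" are hypothetical fields.
events = [
    {"category": "right_wing", "charged_as": "terrorism"},
    {"category": "right_wing", "charged_as": "hate_crime"},
    {"category": "left_wing",  "charged_as": "vandalism"},
    {"category": "left_wing",  "charged_as": "terrorism"},
]

# Narrow rule: only incidents prosecuted as terrorism are counted.
narrow = Counter(e["category"] for e in events
                 if e["charged_as"] == "terrorism")
# Broader rule: hate-crime prosecutions count too.
broad = Counter(e["category"] for e in events
                if e["charged_as"] in ("terrorism", "hate_crime"))
```

Under the narrow rule the tally is 1:1; under the broader rule it is 2:1 right‑wing, so the apparent left/right balance is an artifact of the coding threshold rather than of the underlying events.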
5. Sources of disagreement and methodological blind spots
Debates center on inclusion rules, source biases (media attention, prosecutorial discretion), handling of mixed motives, and political definitions; critics argue that broader definitions can over‑inflate one side or undercount covert groups, while defenders emphasize transparent codebooks and reproducible protocols such as those archived with PNAS and START [1] [9]. Researchers acknowledge limits: datasets are shaped by what is reported and legally recorded, and no source alone captures the full universe of politically motivated violence—an admitted constraint in the methodology sections and codebooks cited [2] [1].
6. Hidden agendas and practical implications
Coding practices carry political weight: policymakers, advocacy groups, and media can use selective framings to prioritize threats, and academic teams disclose their categorization rules precisely to allow scrutiny; institutions like CSIS and START publish methodologies so users can judge how definitional choices produce particular threat pictures [6] [9]. Where datasets diverge, readers should look to the codebook rules and inclusion criteria—those are the levers that turn qualitative judgments into counts that shape counterterrorism priorities [1] [3].