Table from the 2010 ISCD study ranking various drugs (legal and illegal) based on statements by drug-harm experts. How legitimate is this study?

Checked on January 16, 2026

Executive summary

The 2010 Independent Scientific Committee on Drugs (ISCD) multicriteria decision analysis (MCDA) that ranked 20 drugs and put alcohol at the top is a legitimate, peer‑reviewed academic exercise with transparent methods and meaningful replication. It is not, however, an undisputed “final” measure of drug harm, and it has important methodological limits that are widely discussed in the literature [1] [2] [3].

1. What the study actually did and what it claimed

The ISCD convened experts, specified 16 harm criteria (nine covering harms to users, seven covering harms to others), and scored 20 substances on each criterion using an MCDA framework. The results, published in The Lancet, showed alcohol, heroin and crack among the most harmful substances, and MDMA, LSD and psilocybin mushrooms among the least harmful on the composite score [4] [5] [1]. The paper explicitly reported that its rankings correlated poorly with existing UK legal classifications, underscoring that policy categories do not mirror the study’s evidence‑based harm scores [1].
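To make the MCDA mechanics concrete, the sketch below shows the kind of weighted‑sum aggregation this class of analysis rests on: each substance receives a 0–100 score on each criterion, criterion weights are normalized to sum to 1, and the composite is the weighted sum of the scores. The criteria names, weights and scores here are hypothetical placeholders, not the study’s actual values; the real study also elicited its weights through a facilitated swing‑weighting process rather than assigning them directly.

```python
# Minimal sketch of MCDA weighted-sum scoring.
# All criteria, weights and scores below are ILLUSTRATIVE ONLY,
# not the values from Nutt et al. (2010).

# Hypothetical subset of criteria with normalized weights (sum to 1).
weights = {
    "drug_specific_mortality": 0.25,  # harm to users
    "dependence": 0.25,               # harm to users
    "injury_to_others": 0.30,         # harm to others
    "economic_cost": 0.20,            # harm to others
}
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Hypothetical 0-100 expert scores per substance per criterion.
scores = {
    "substance_A": {"drug_specific_mortality": 40, "dependence": 60,
                    "injury_to_others": 80, "economic_cost": 70},
    "substance_B": {"drug_specific_mortality": 90, "dependence": 85,
                    "injury_to_others": 30, "economic_cost": 40},
}

def composite(drug_scores, weights):
    """Weighted-sum composite: sum over criteria of weight * score."""
    return sum(weights[c] * drug_scores[c] for c in weights)

for drug, s in scores.items():
    print(f"{drug}: {composite(s, weights):.1f}")
```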

2. Strengths that underpin its legitimacy

The study was peer‑reviewed and appeared in a leading medical journal, and it used a documented, reproducible MCDA process that made its criteria, scoring and weighting explicit. It also prompted at least one independent replication in the Netherlands and a later EU exercise, both of which produced very similar overall rankings, strengthening the study’s external reliability [1] [3] [6]. Subsequent research has even adapted the ISCD scores for clinical research tools that associate composite harm scores with health outcomes, indicating practical utility of the framework [7].
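A standard way to quantify how “similar” two panels’ rankings are is a rank correlation such as Spearman’s rho. The sketch below computes it by hand for two hypothetical panels; the rankings are invented for illustration and are not the published UK, Dutch or EU results.

```python
# Quantifying agreement between two expert panels' rankings with
# Spearman's rank correlation (hypothetical rankings, not published data).

def spearman_rho(rank_a, rank_b):
    """Spearman's rho for two rankings without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    assert set(rank_a) == set(rank_b)
    n = len(rank_a)
    d_sq = sum((rank_a[k] - rank_b[k]) ** 2 for k in rank_a)
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

# Hypothetical ranks (1 = most harmful) from two independent panels.
panel_1 = {"alcohol": 1, "heroin": 2, "crack": 3, "tobacco": 4, "cannabis": 5}
panel_2 = {"alcohol": 1, "crack": 2, "heroin": 3, "tobacco": 4, "cannabis": 5}

print(f"rho = {spearman_rho(panel_1, panel_2):.3f}")  # near 1 => strong agreement
```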

3. Key methodological limits critics highlight

Critics and methodologists point out several nontrivial caveats: MCDA depends on which criteria and weights are chosen (value judgments), experts’ personal experience can bias scores (particularly for less familiar substances), and the original study did not model polydrug use or situational/contextual factors that change real‑world harm [8] [9] [10]. Academic critiques argue that ranking is useful but can give a false sense of precision when complex social, cultural and policy variables interact with drug use harms [10] [8].
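The weight‑dependence critique is easy to demonstrate: holding the criterion scores fixed, two defensible weighting schemes can reverse a ranking. The toy sketch below uses two invented substances and two criteria to show the flip; none of the numbers come from the study.

```python
# Toy demonstration that MCDA rankings depend on the chosen weights.
# All numbers are invented for illustration.

# Fixed criterion scores for two hypothetical substances.
scores = {
    "substance_X": {"harm_to_users": 80, "harm_to_others": 20},
    "substance_Y": {"harm_to_users": 30, "harm_to_others": 60},
}

def rank(weights):
    """Composite scores and the resulting most-harmful-first ordering."""
    comp = {d: sum(weights[c] * s[c] for c in weights) for d, s in scores.items()}
    return sorted(comp, key=comp.get, reverse=True), comp

# Weighting A: harms to users dominate.
order_a, comp_a = rank({"harm_to_users": 0.8, "harm_to_others": 0.2})
# Weighting B: harms to others dominate.
order_b, comp_b = rank({"harm_to_users": 0.2, "harm_to_others": 0.8})

print("Weights favouring user harms:  ", order_a, comp_a)
print("Weights favouring other harms: ", order_b, comp_b)
# The "most harmful" substance flips purely because the weights changed.
```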

4. Institutional context and possible agendas to note

The ISCD (now DrugScience) was formed to produce independent scientific assessments outside government bodies. While the 2010 paper lists funding from the Centre for Crime and Justice Studies, other histories note initial private support for the group; these details matter because independence and funding sources affect perceived impartiality [2] [11]. The study’s publication and headline claim that alcohol is “most harmful” had obvious policy and public‑message implications, which attracted both welcome debate and pushback from bodies that link classification to criminal penalties [1] [10].

5. Practical verdict: how legitimate is the study?

The ISCD 2010 MCDA is legitimate as a transparent, replicable, peer‑reviewed attempt to systematize expert judgment about multifaceted harms, but it should be treated as an influential evidence tool rather than an infallible metric. It reliably showed that legal substances like alcohol and tobacco impose large aggregate harms, yet it is limited by value‑laden weight choices, expert selection, the absence of polydrug modelling, and unexamined situational and contextual factors; the authors and subsequent reviewers openly acknowledge these caveats [1] [8] [10]. Policymakers and communicators should therefore use the study to inform, not dictate, drug policy and public‑health messaging, pairing MCDA outputs with epidemiological data, stakeholder input and attention to contexts of use.

Want to dive deeper?
How has the ISCD/DrugScience methodology been updated since 2010 and what do later rankings show?
What are the main methodological alternatives to MCDA for assessing drug harms, and how do their conclusions differ?
How does polydrug use change harm rankings in comparative studies of substances?