What methodologies do studies use to estimate excess mortality linked to sanctions in Venezuela, and where do they disagree?
Executive summary
Studies estimating excess mortality linked to sanctions on Venezuela deploy a mix of survey-based excess-death calculations, macroeconomic counterfactual modeling, and import/health-service flow analysis; they diverge sharply on data sources, causal attribution, and the validity of counterfactual assumptions (for example, CEPR’s 40,000‑death estimate versus critics who argue the methodology is unfit or that the deterioration predated sanctions) [1] [2] [3] [4]. The dispute centers less on whether Venezuelans suffered a public‑health catastrophe—a point most agree on—and more on whether and how sanctions can be causally isolated from long‑running economic mismanagement and suppressed official data [5] [6].
1. Survey-based excess‑mortality: counting deaths from Encovi and similar household studies
A leading strand of work takes Venezuelan household surveys—most prominently the National Survey of Living Conditions (Encovi), administered by three universities—and translates observed year‑to‑year rises in mortality into “excess deaths.” CEPR used this approach to estimate roughly 40,000 excess deaths for 2017–18, applying observed mortality increases to population counts and attributing a large share to sanction effects [2] [1]. Proponents argue that household surveys capture mortality when official reporting is suppressed and can reveal sudden spikes tied temporally to intensified financial sanctions [2] [7]. Critics counter that survey coverage, recall bias, sampling design, and the absence of a rigorous counterfactual—what would have happened absent sanctions—limit the strength of causal claims drawn from this method [4] [3].
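The arithmetic behind the survey-based approach can be sketched in a few lines. The mortality rates and population below are illustrative placeholders, not Encovi or CEPR figures:

```python
# Illustrative sketch of the survey-based excess-death arithmetic.
# All inputs are hypothetical placeholders, not Encovi/CEPR data.

def excess_deaths(baseline_rate_per_1000: float,
                  observed_rate_per_1000: float,
                  population: int) -> float:
    """Excess deaths = (observed - baseline) mortality rate x population."""
    return (observed_rate_per_1000 - baseline_rate_per_1000) / 1000.0 * population

# Hypothetical example: baseline crude mortality of 7.0 per 1,000,
# survey-observed rate of 8.4 per 1,000, population of 28.5 million.
print(round(excess_deaths(7.0, 8.4, 28_500_000)))  # -> 39900
```

The inputs here were chosen so the output lands near the disputed ~40,000 figure; critics object not to this arithmetic but to the baseline choice, the survey's sampling properties, and the causal attribution of the rate increase to sanctions.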
2. Macroeconomic counterfactual modeling: from lost oil revenue to lives lost
Another methodology constructs counterfactual macroeconomic trajectories: it estimates lost oil revenues and trade and banking disruptions, then links those economic shortfalls to health inputs (food imports, medicines, hospital functioning) and to mortality using elasticity assumptions or historical relationships between income and health outcomes [8] [9]. Papers and briefings using this approach attribute a substantial portion of mortality to frozen PDVSA funds, disrupted imports, and banking blockages, and then translate those shocks into excess deaths [8] [9]. Methodological objections focus on model sensitivity: different assumptions about how quickly incomes or imports would have recovered, or about the scale of pre‑existing economic decline, produce very different death totals [4] [3].
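A minimal sketch of the elasticity step shows why sensitivity objections bite: under a constant-elasticity link between income and mortality, the same income shortfall implies very different death tolls. All values here, including the elasticities, are hypothetical assumptions, not estimates drawn from the studies:

```python
# Hypothetical sketch of an elasticity-based counterfactual: translate a
# modeled income shortfall into excess mortality. The elasticity values
# and all inputs are illustrative assumptions, not study estimates.

def counterfactual_excess_deaths(baseline_deaths: float,
                                 income_shortfall_pct: float,
                                 income_mortality_elasticity: float) -> float:
    """Extra deaths implied by an income shortfall, assuming a constant
    elasticity (a 1% income drop raises mortality by `elasticity` %)."""
    mortality_increase_pct = income_mortality_elasticity * income_shortfall_pct
    return baseline_deaths * mortality_increase_pct / 100.0

# The same hypothetical 30% income shortfall against a baseline of
# 200,000 annual deaths yields very different totals as the assumed
# elasticity varies.
for elasticity in (0.1, 0.2, 0.4):
    extra = counterfactual_excess_deaths(200_000, 30.0, elasticity)
    print(f"elasticity {elasticity}: ~{extra:,.0f} excess deaths")
```

Doubling one unobservable parameter doubles the estimated toll, which is exactly the fragility critics point to when they question how quickly incomes would have recovered absent sanctions.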
3. Import‑flow and health‑system access analyses: tracing medicine and service bottlenecks
Some studies examine changes in imports of medicines, food, and medical inputs or documented interruptions to health programs (vaccination, HIV antiretrovirals, dialysis) and then infer mortality impacts from observed shortages and disrupted care, supported by clinical reporting and UN or NGO field findings [8] [7]. This methodology is persuasive on mechanism—sanctions can choke payments and supplies—but it relies on linking shortages to mortality rates, which requires additional epidemiological or modeling steps where assumptions about dose–response matter and are contested [8] [5].
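The extra epidemiological step this approach requires can be sketched as a dose–response assumption linking a documented care interruption to attributable deaths. The patient counts and risk figures below are illustrative, not drawn from the clinical or UN/NGO reporting the studies cite:

```python
# Hypothetical sketch of the contested dose-response step: converting an
# observed care interruption into attributable deaths via an assumed
# per-patient increase in fatality risk. All numbers are illustrative.

def attributable_deaths(patients_losing_treatment: int,
                        excess_fatality_risk: float) -> float:
    """Deaths attributed to a care interruption, given an assumed
    per-patient excess fatality risk (the contested assumption)."""
    return patients_losing_treatment * excess_fatality_risk

# If a hypothetical 10,000 dialysis or antiretroviral patients lose
# access, and the assumed annual excess risk ranges from 2% to 10%,
# the implied death toll spans a wide interval.
low = attributable_deaths(10_000, 0.02)
high = attributable_deaths(10_000, 0.10)
print(f"~{low:.0f} to ~{high:.0f} deaths under contested risk assumptions")
```

The shortage-to-mortality link is where the method's persuasiveness on mechanism gives way to contested modeling: the observed interruption is documented, but the risk parameter is not.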
4. Where methodologies disagree: data, counterfactuals, and political priors
Disagreements fall along three fault lines. First, data provenance: survey estimates (Encovi) versus scant official health statistics amid reported suppression create diverging baselines and uncertainty about trends [2] [5]. Second, counterfactual construction: some models assume sanctions reversed prior gains and caused a sharp shortfall, while critics argue mortality trends began earlier under mismanagement and would likely have worsened regardless, invalidating some causal attributions [1] [6] [4]. Third, methodological fitness: peer reviewers and critiques (Brookings, an SSRN paper) argue that specific statistical choices or coding in some influential studies are flawed or unsuited to isolating sanction effects, while proponents (CEPR, supporters of the Lancet commentary) insist that multiple lines of evidence point to a substantial sanction contribution [3] [4] [8].
5. Reading the debates: competing agendas and what remains unknowable
Analyses are entangled with political stakes—advocates of lifting sanctions emphasize mortality attribution (CEPR, Lancet letters), while analysts critical of that framing highlight pre‑2017 deterioration and methodological weaknesses (Brookings, SSRN, Caracas Chronicles revisions) [1] [4] [10]. Independent observers note the practical constraint that suppressed official data and a polarized research environment make definitive causal attribution impossible with current public evidence; researchers therefore differ on plausibility, not on the basic humanitarian facts [5] [3]. Where certainty is absent, the most robust inference across methods is that sanctions plausibly aggravated shortages and impaired health‑system functioning, but quantifying an exact death toll depends heavily on contested data and counterfactual assumptions [8] [9].