How reliable are studies comparing COVID death percentages by vaccination status given reporting biases?

Checked on January 18, 2026

Executive summary

Studies that report COVID-19 death percentages by vaccination status can be informative, but they are frequently vulnerable to a cluster of reporting and design biases that complicate straightforward interpretation: the observational vaccine-effectiveness (VE) literature warns that confounding, selection bias, misclassification of outcomes, and unmeasured prior infection can all skew results [1] [2] [3]. Careful study design, transparent case definitions, and adjustment for testing behavior and prior infection are required before raw death-percentage comparisons can be equated with causal vaccine effects [4] [5] [6].

1. Why raw percentages mislead: the problem of low-specificity severe-outcome measurement

Many VE analyses of severe outcomes include “false” cases—hospitalisations or deaths labelled as COVID-19 that were not caused by SARS‑CoV‑2—which inflates outcome counts in vaccinated and unvaccinated groups alike and biases comparisons based on crude percentages [4]. When the event definition has low specificity (for example, routinely counting any death with a positive test as COVID-related), the vaccinated fraction among deaths can be distorted independently of true vaccine protection [4].
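
A short back-of-the-envelope sketch (all rates hypothetical, in Python) makes the dilution concrete: adding the same count of incidental "with-COVID" deaths to both groups pulls the apparent VE toward zero.

```python
# Illustrative sketch with hypothetical rates: how a low-specificity death
# definition dilutes apparent vaccine effectiveness. Equal person-time is
# assumed in both groups, so rate ratios reduce to count ratios.

TRUE_VE_DEATH = 0.90        # assumed true protection against COVID-caused death
UNVAX_COVID_DEATHS = 100    # true COVID-caused deaths per 100k unvaccinated
INCIDENTAL_DEATHS = 20      # deaths *with* (not from) SARS-CoV-2 per 100k, both groups

vax_covid_deaths = UNVAX_COVID_DEATHS * (1 - TRUE_VE_DEATH)  # 10 per 100k

# A definition of "any death with a positive test" adds the incidental
# deaths to both numerators.
measured_unvax = UNVAX_COVID_DEATHS + INCIDENTAL_DEATHS      # 120
measured_vax = vax_covid_deaths + INCIDENTAL_DEATHS          # 30

apparent_ve = 1 - measured_vax / measured_unvax
print(f"True VE against death:   {TRUE_VE_DEATH:.0%}")       # 90%
print(f"Apparent VE (low spec.): {apparent_ve:.0%}")         # 75%, biased toward null
```

Under these assumptions a true 90% protection is measured as roughly 75%, purely because of the non-specific outcome definition.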

2. Confounding and unmeasured prior infection: an invisible epidemiologic force

Observational designs cannot randomize prior infection or exposure risk; unmeasured infection-induced immunity, which is often higher among the unvaccinated after waves of transmission, can make vaccinated groups appear more or less protected depending on whether and how prior infection is accounted for, a source of bias quantified in simulation and empirical work [3] [1]. Reviews of cohort and test‑negative study limitations document that failing to reliably adjust for prior infection or differential exposure produces biased vaccine‑effectiveness estimates and thereby distorts death‑percentage comparisons [2] [3].
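
The size and direction of this bias can be explored with a toy cohort model. Every parameter below (the death risk, the protection from prior infection, and its prevalence in each group) is an assumed illustrative value, not an empirical estimate.

```python
# Toy cohort with hypothetical parameters: unmeasured infection-acquired
# immunity distorts a naive death-rate comparison between groups.

BASE_DEATH_RISK = 0.002      # death risk per period with no immunity at all
VE_DEATH = 0.90              # assumed vaccine protection against death
PRIOR_INF_PROTECTION = 0.80  # assumed protection conferred by prior infection
PRIOR_INF_VAX = 0.20         # assumed prior-infection prevalence, vaccinated
PRIOR_INF_UNVAX = 0.60       # assumed prior-infection prevalence, unvaccinated

def group_risk(ve, prior_prevalence):
    """Average death risk, mixing previously infected and infection-naive strata."""
    risk_naive = BASE_DEATH_RISK * (1 - ve)
    risk_prior = risk_naive * (1 - PRIOR_INF_PROTECTION)
    return prior_prevalence * risk_prior + (1 - prior_prevalence) * risk_naive

risk_vax = group_risk(VE_DEATH, PRIOR_INF_VAX)
risk_unvax = group_risk(0.0, PRIOR_INF_UNVAX)

apparent_ve = 1 - risk_vax / risk_unvax
print(f"True VE: {VE_DEATH:.0%}, apparent VE: {apparent_ve:.0%}")  # 90% vs ~84%
```

With these assumptions the naive comparison understates the true 90% protection as roughly 84%, because the unvaccinated group is partly shielded by unrecorded infection-acquired immunity; different assumed prevalences can push the bias in the opposite direction.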

3. Selection and testing behavior: who shows up in the data matters

Changes in who gets tested create selection bias: rising at‑home testing, different care‑seeking by vaccinated and unvaccinated people, and surveillance anchored to health‑care utilisation all mean that the cases and deaths captured by surveillance are not a random sample of infections or severe outcomes [5] [7] [8]. National test‑negative and cohort studies have investigated these behavioural shifts and shown that differential testing or survey nonresponse can materially affect the VE estimates used to compare death rates by vaccination status [6] [7].
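
The effect of differential ascertainment on naive case-fatality comparisons can be sketched under assumed testing probabilities (all parameters hypothetical): deaths are captured almost completely, but the case denominator is filtered by who obtains a recorded test.

```python
# Sketch of ascertainment bias with hypothetical rates: deaths are assumed
# fully ascertained, while reported cases depend on testing behaviour that
# differs by vaccination status.

INFECTIONS = 100_000   # infections per group (equal exposure assumed)
IFR_UNVAX = 0.005      # assumed infection-fatality risk, unvaccinated
IFR_VAX = 0.0005       # assumed 90% lower with vaccination

P_TEST_UNVAX = 0.70    # probability an infection yields a recorded test
P_TEST_VAX = 0.40      # vaccinated assumed to rely more on unreported home tests

for label, ifr, p_test in [("unvaccinated", IFR_UNVAX, P_TEST_UNVAX),
                           ("vaccinated  ", IFR_VAX, P_TEST_VAX)]:
    deaths = INFECTIONS * ifr             # numerator: fully captured
    reported_cases = INFECTIONS * p_test  # denominator: filtered by testing
    print(f"{label}: apparent CFR = {deaths / reported_cases:.2%} "
          f"(true IFR = {ifr:.2%})")
```

Both apparent case-fatality rates are inflated relative to the true infection-fatality risks, and unequally so, which skews any death-percentage comparison built on reported cases.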

4. Study design matters: TNDs, cohorts, and the danger of naive comparisons

Test‑negative designs (TNDs) and retrospective cohorts are the workhorses of VE assessment, but each is vulnerable to bias when its assumptions aren’t met; TNDs, for example, rely on similar care‑seeking across groups and can produce biased or even negative VE estimates if prior infection and other confounders are not controlled [3] [1]. Systematic appraisals find thousands of observational VE studies with heterogeneous methods, and WHO guidance stresses standardised approaches to reduce, but not eliminate, these biases [2] [9].
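
For reference, the basic TND arithmetic is simple; the difficulty lies in the assumptions behind it. The sketch below uses hypothetical counts and the standard identity VE ≈ 1 − OR, where the odds ratio compares the odds of vaccination among test‑positives and test‑negatives; the estimate is unbiased only if care‑seeking and testing are comparable across groups.

```python
# Minimal test-negative design (TND) estimate from a hypothetical 2x2 table.
# Among people tested for COVID-like illness, VE is estimated as 1 - OR,
# comparing vaccination odds in test-positives vs test-negatives.

pos_vax, pos_unvax = 200, 800  # test-positive cases by vaccination status
neg_vax, neg_unvax = 600, 400  # test-negative controls by vaccination status

odds_ratio = (pos_vax / pos_unvax) / (neg_vax / neg_unvax)
estimated_ve = 1 - odds_ratio
print(f"OR = {odds_ratio:.2f}, estimated VE = {estimated_ve:.0%}")  # OR 0.17, VE ~83%
```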

5. Reporting, publication, and outcome‑selection biases: the politics of results

Outcome reporting bias—selective presentation of favorable endpoints or analyses—remains a documented problem in vaccine literature, meaning some published comparisons emphasize particular metrics or stratifications that support prevailing narratives or sponsor interests [10]. Meta‑analyses and randomized trial syntheses can be robust, but they also show heterogeneity across settings and time that complicates applying pooled efficacy directly to observational death percentages [11] [12].
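
Outcome-selection effects can be sizeable even with honest data. A minimal simulation under entirely hypothetical parameters: if each study measures several noisy endpoints sharing the same true effect but emphasises only the most favorable one, the reported literature drifts upward.

```python
import random

# Toy simulation of outcome-reporting bias (hypothetical parameters):
# each study examines several endpoints with the same true effect but
# reports only the most favorable estimate.

random.seed(0)
TRUE_EFFECT = 0.70   # true effect, identical across endpoints
NOISE_SD = 0.10      # sampling noise on each endpoint estimate
N_ENDPOINTS = 8      # endpoints or stratifications examined per study
N_STUDIES = 10_000

reported = []
for _ in range(N_STUDIES):
    estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_ENDPOINTS)]
    reported.append(max(estimates))  # only the best-looking result is emphasised

print(f"True effect:          {TRUE_EFFECT:.0%}")
print(f"Mean reported effect: {sum(reported) / len(reported):.0%}")  # inflated, ~84%
```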

6. How reliable are cross‑tabulations of deaths by vaccination status in practice?

Such cross‑tabulations can be a starting point, but they are not, by themselves, reliable evidence of causal vaccine effects: they rarely account fully for misclassified outcomes, prior infection, differential testing and health‑care use, or selective reporting, each documented in the literature as a source of bias [4] [3] [7] [10]. Reliability improves when studies use well‑defined cause‑of‑death criteria, collect prior‑infection history, adjust for care‑seeking and testing patterns, and transparently report sensitivity analyses; absent those safeguards, death‑percentage comparisons should be treated as hypothesis‑generating rather than definitive [1] [6] [2].
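
A worked base-rate example (all figures hypothetical) shows why the raw "share of deaths that were vaccinated" is uninformative on its own: once coverage is high, vaccinated people can account for nearly half of all deaths even under strong protection.

```python
# Base-rate arithmetic with hypothetical figures: high coverage means many
# deaths occur among the vaccinated even when the vaccine works well.

POPULATION = 1_000_000
COVERAGE = 0.90          # 90% of the population vaccinated
BASE_DEATH_RISK = 0.001  # assumed death risk if unvaccinated
VE_DEATH = 0.90          # assumed protection against death

deaths_vax = POPULATION * COVERAGE * BASE_DEATH_RISK * (1 - VE_DEATH)  # 90
deaths_unvax = POPULATION * (1 - COVERAGE) * BASE_DEATH_RISK           # 100

share_vax = deaths_vax / (deaths_vax + deaths_unvax)
print(f"{share_vax:.0%} of deaths are among the vaccinated")  # ~47%, despite 90% VE
```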

Want to dive deeper?
How does prior SARS‑CoV‑2 infection bias vaccine effectiveness estimates in test‑negative studies?
What definitions and protocols best distinguish deaths caused by COVID‑19 from incidental SARS‑CoV‑2 positivity in hospital data?
Which observational study designs and adjustments produce the most robust vaccine‑effectiveness estimates against death?