What methodological biases should be examined in vaccinated vs. unvaccinated observational studies?

Checked on January 9, 2026

Executive summary

Observational comparisons of vaccinated and unvaccinated populations are fertile ground for misleading conclusions unless a range of methodological biases is actively sought and addressed; the literature repeatedly flags the healthy-vaccinee effect and confounding by indication as central problems [1] [2]. Other recurrent distortions, including immortal-time and case-counting-window biases, detection and misclassification errors, selection and depletion-of-susceptibles effects, and outcome-reporting bias, can push estimates of vaccine effectiveness (VE) in either direction and have shaped critiques of high-profile COVID-19 and influenza VE studies [3] [4] [5].

1. Confounding by indication and the healthy vaccinee effect: who chooses vaccination matters

When sicker or more exposed people preferentially receive—or avoid—vaccination, observed mortality or disease differences reflect baseline risk, not vaccine effect; systematic reviews of influenza VE and cohort studies of SARS‑CoV‑2 show these biases frequently emerge and can exaggerate or attenuate VE estimates depending on which way the selection runs [1] [6] [2]. Peer reviewers have noted that matching on comorbidities can mask complex selection patterns—sometimes vaccinated groups appear healthier by measured indicators while other analyses find the reverse—so simple covariate adjustment is often insufficient [7].
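To make the mechanism concrete, the toy simulation below (a minimal sketch; the frailty prevalence, uptake rates, and risks are all invented, not taken from the cited studies) shows a vaccine with zero true effect appearing protective simply because healthier people vaccinate more often, with the null reappearing once the analysis stratifies on the confounder.

```python
# Minimal sketch (illustrative only): a truly null vaccine appears protective
# when healthier people are more likely to vaccinate. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical baseline frailty: 20% of people are "frail" with higher event risk.
frail = rng.random(n) < 0.20

# Healthy-vaccinee selection: frail people vaccinate less often.
vaccinated = rng.random(n) < np.where(frail, 0.40, 0.80)

# Outcome risk depends ONLY on frailty, not on vaccination (true VE = 0%).
event = rng.random(n) < np.where(frail, 0.05, 0.01)

def risk(mask):
    return event[mask].mean()

crude_rr = risk(vaccinated) / risk(~vaccinated)
print(f"Crude VE: {1 - crude_rr:.1%}")           # spuriously well above 0%

# Stratifying on the confounder recovers the null within each stratum.
for label, stratum in [("not frail", ~frail), ("frail", frail)]:
    rr = risk(vaccinated & stratum) / risk(~vaccinated & stratum)
    print(f"VE among {label}: {1 - rr:.1%}")      # approximately 0%
```

In real data the relevant confounders are rarely captured this cleanly, which is why simple covariate adjustment is often insufficient [7].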

2. Immortal time and asymmetric case‑counting windows: a technical trap that inflates effectiveness

Cohort studies that start counting cases only after a post‑vaccination window for the vaccinated arm—but cannot apply a symmetric window to unvaccinated people—create “immortal time” that removes early events from the vaccinated numerator and can make an ineffective vaccine look substantially protective [3] [4]. Methodological critiques have demonstrated this counting bias can change apparent VE by large margins and even reverse conclusions when not corrected [3] [4].
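As a rough illustration (arbitrary parameters, not a re-analysis of any published study), the sketch below applies a 14-day counting window to the vaccinated arm only; a vaccine with no effect at all then shows a sizeable apparent VE, and applying the same window to both arms restores the null.

```python
# Minimal sketch (arbitrary parameters, purely illustrative): an asymmetric
# case-counting window makes a vaccine with zero true effect look protective.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
follow_up = 60        # days of follow-up in both arms
daily_risk = 0.002    # identical daily infection risk in both arms (true VE = 0%)
window = 14           # vaccinated cases in the first 14 days are discarded

# Day of first infection for each person, drawn from the same process in both arms.
vax_day = rng.geometric(daily_risk, n)
unvax_day = rng.geometric(daily_risk, n)

# Biased analysis: unvaccinated counted from day 1, vaccinated only after the window,
# while both arms keep all n people in the denominator.
vax_cases = np.sum((vax_day > window) & (vax_day <= follow_up))
unvax_cases = np.sum(unvax_day <= follow_up)
rr_biased = (vax_cases / n) / (unvax_cases / n)
print(f"Apparent VE, asymmetric window: {1 - rr_biased:.1%}")   # roughly 20-25%

# Symmetric analysis: the same window applied to both arms restores the null.
unvax_cases_sym = np.sum((unvax_day > window) & (unvax_day <= follow_up))
rr_sym = (vax_cases / n) / (unvax_cases_sym / n)
print(f"Apparent VE, symmetric window:  {1 - rr_sym:.1%}")      # approximately 0%
```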

3. Detection bias, healthcare‑seeking behavior and misclassification: what is recorded is not always what happened

If vaccinated people use health services more (or less) than unvaccinated people, conditions will be diagnosed at different rates; test-negative designs attempt to control for this by enrolling only care-seekers, but they can still be biased unless enrollment and testing criteria are stringent [5] [8]. Vaccination-status misclassification and differential testing patterns also distort odds ratios and cohort incidence rates, a concern repeatedly raised in methodological reviews of COVID-19 VE studies [3] [5].
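For orientation, the arithmetic below uses invented counts to show how a test-negative VE estimate is formed (VE = 1 - OR, where the odds ratio compares vaccination odds in test-positives with test-negatives) and how testing that differs by vaccination status conditional on true infection can inflate it.

```python
# Minimal sketch (made-up counts): test-negative design arithmetic.
# Hypothetical counts among care-seekers who were tested:
#                 vaccinated   unvaccinated
# test-positive        200           400
# test-negative        800           800
or_ = (200 / 400) / (800 / 800)
print(f"Odds ratio: {or_:.2f},  VE = 1 - OR = {1 - or_:.0%}")        # 0.50 -> 50%

# If vaccinated infections are milder and only half of them get tested, the
# vaccinated test-positive cell shrinks while the other cells do not, and the
# apparent VE is inflated even though the true effect is unchanged.
or_biased = (100 / 400) / (800 / 800)
print(f"Biased OR: {or_biased:.2f},  apparent VE = {1 - or_biased:.0%}")  # 0.25 -> 75%
```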

4. Selection bias, depletion of susceptibles, and changing coverage over time

Early in rollouts or in pockets of low uptake, vaccinated and unvaccinated groups systematically differ—by age, comorbidity, occupation, or access—which shifts VE estimates; as coverage rises, the composition of the unvaccinated pool can skew toward those who are healthier or alternatively toward those with contraindications, producing over‑ or under‑estimation [2] [9]. The “depletion of susceptibles” phenomenon—where high‑risk unvaccinated people are removed by early events—can make later comparisons misleading [7].
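A minimal sketch of the depletion mechanism (made-up risk groups and a constant "leaky" vaccine effect) is shown below: period-specific VE appears to wane even though the true per-day effect never changes.

```python
# Minimal sketch (illustrative parameters): a leaky vaccine with a constant true
# effect can appear to wane because high-risk unvaccinated people are infected,
# and removed from the at-risk pool, earlier than their vaccinated counterparts.

days_per_period = 60
risk_high, risk_low = 0.010, 0.001   # hypothetical daily infection risk by risk group
frac_high = 0.30                     # 30% of each arm is high-risk at baseline
true_rr = 0.5                        # constant leaky effect: vaccine halves daily risk

def run_arm(rr, periods=2):
    """Attack rate per period, tracking who remains uninfected in each risk group."""
    n_high, n_low = frac_high, 1 - frac_high
    rates = []
    for _ in range(periods):
        p_high = 1 - (1 - risk_high * rr) ** days_per_period
        p_low = 1 - (1 - risk_low * rr) ** days_per_period
        cases = n_high * p_high + n_low * p_low
        rates.append(cases / (n_high + n_low))
        n_high *= 1 - p_high         # depletion: the infected leave the at-risk pool
        n_low *= 1 - p_low
    return rates

unvax = run_arm(1.0)
vax = run_arm(true_rr)
for period, (v, u) in enumerate(zip(vax, unvax), start=1):
    print(f"Apparent VE in period {period}: {1 - v / u:.1%}")
# VE falls from period 1 to period 2 even though the true per-day effect is constant.
```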

5. Design‑specific pitfalls: cohort, case‑control, test‑negative and their trade‑offs

No observational design is bias-free: cohort designs are vulnerable to immortal-time and selection biases, case-control studies depend on representative controls, and the popular test-negative design can still mislead when asymptomatic infections are enrolled or testing patterns differ by vaccination status [3] [5] [9]. Systematic reviewers recommend triangulating evidence across designs and seeking lab-confirmed endpoints or quasi-randomized approaches when feasible [1].

6. Outcome reporting, publication bias and ideological agendas

Selective reporting and outcome‑presentation choices can magnify apparent effects or erase inconvenient signals; critiques of both pro‑ and anti‑vaccine studies have documented outcome reporting biases that align with authors’ or sponsors’ narratives, underscoring the need to inspect protocol departures and full datasets [10] [11]. Media and advocacy outlets also highlight methodological holes selectively—some articles emphasize flaws to undermine findings, others downplay limitations—so explicit appraisal of incentives and conflicts is necessary [12] [10].

7. Practical checklist for appraisal and a final word

Critical appraisal should ask [3] [5] [1]:
- Were the vaccinated and unvaccinated groups comparable at baseline and over time?
- Were counting windows and case definitions symmetric across arms?
- Were testing and healthcare-seeking behaviors accounted for?
- How was vaccination status ascertained, and how was misclassification handled?
- Were sensitivity analyses presented for depletion-of-susceptibles and immortal-time effects?
- Is there transparency about the protocol and outcome selection?

The literature makes clear that addressing a single bias is rarely enough; robust inference comes from multiple designs, transparent reporting, and sensitivity analyses that quantify how much the remaining biases could change the conclusions [1] [2] [4], as sketched below.
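One way to put numbers on that question is an E-value-style calculation (VanderWeele and Ding's formula for unmeasured confounding; using it here is an illustrative suggestion, not something prescribed by the cited reviews).

```python
# Minimal sketch: the E-value asks how strong an unmeasured confounder would
# have to be, on the risk-ratio scale, to fully explain an observed association.
import math

def e_value(rr: float) -> float:
    """E-value for a point estimate on the risk-ratio scale."""
    rr_star = rr if rr >= 1 else 1 / rr      # flip protective estimates
    return rr_star + math.sqrt(rr_star * (rr_star - 1))

# Example: an observed VE of 60% corresponds to RR = 0.40.
print(f"E-value: {e_value(0.40):.2f}")
# A confounder (e.g., healthy-vaccinee behavior) associated with both vaccination
# and the outcome by at least this factor could account for the entire observed VE.
```

A large E-value means only an implausibly strong unmeasured confounder could fully explain the observed association; a small one means modest healthy-vaccinee selection might.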

Want to dive deeper?
How does the test‑negative design try to control for healthcare‑seeking behavior and what are its known limitations?
What statistical methods and sensitivity analyses can correct for immortal time and depletion‑of‑susceptibles biases in cohort VE studies?
Which real‑world VE studies have been re‑analyzed to account for healthy vaccinee bias, and how did results change?