How many peer-reviewed cohort studies compare completely unvaccinated children to fully vaccinated children for long-term health outcomes, and what are their sample sizes?

Checked on February 6, 2026

Executive summary

Peer‑reviewed cohort research directly comparing completely unvaccinated children with fully vaccinated children on long‑term health outcomes is extremely limited. The published literature contains only a handful of studies that include an entirely unvaccinated cohort, and those that exist are small, observational, and carry notable methodological caveats (selection bias, retrospective design, inconsistent definitions of "vaccinated") [1] [2] [3] [4]. Large, population‑level cohort studies typically compare undervaccinated children with age‑appropriately vaccinated children rather than truly unvaccinated groups [5], and several larger claimed "unvaccinated cohorts" remain unpublished or methodologically disputed [6].

1. The clearest peer‑reviewed examples and their sample sizes

A commonly cited peer‑reviewed analysis by Mawson and colleagues surveyed homeschooled children; its study population is often referenced as 666 children, of whom only a modest number were entirely unvaccinated, and its convenience sampling and reliance on parental report limit causal inference [1] [7]. A separate paper by Hooker and Miller in SAGE Open Medicine analyzed data from three U.S. pediatric practices but explicitly noted that low overall vaccine uptake in those practices "obviates our ability to do a comparison between fully vaccinated and unvaccinated children within this cohort"; parts of that analysis likewise used a cohort of 666 patients [2] [1]. Another analysis attributed to Hooker and collaborators reports 1,929 completed surveys, reduced to 1,565 children after exclusions, and stratifies findings by "any vaccine received" versus none; that manuscript appears in an open‑access outlet (OAText) and treats "vaccinated" as any vaccine exposure rather than completion of the full recommended schedule [3] [8].

2. What “cohort” means in these studies — and why that matters

Several of these studies are retrospective convenience cohorts or survey‑based cohorts rather than prospective, population‑representative birth cohorts; they often group "any vaccine" versus "none" and therefore do not cleanly isolate children who completed the full recommended schedule from those with zero or only partial vaccination [2] [3]. Government and scientific panels have repeatedly noted that rigorous evaluation of the complete immunization schedule versus no vaccination requires carefully constructed cohorts and remains largely unaddressed by high‑quality, longitudinal surveillance [4] [6].

3. Large administrative cohorts compare undervaccination, not total non‑exposure

Population‑scale cohort work does exist: Glanz and colleagues, for example, analyzed 323,247 children to study undervaccination and healthcare utilization. But such projects compare varying degrees of undervaccination with on‑schedule vaccination rather than following a completely unvaccinated cohort long term [5]. The distinction matters: undervaccinated groups include children who received some vaccines and are therefore not comparable to a truly all‑unvaccinated cohort [5].

4. Unpublished or contested claims of larger “unvaccinated” cohorts

Submissions and white papers claim larger cohorts of totally unexposed children (including one described as the "largest cohort of children completely unexposed to vaccination," observed up to age 18), but those reports are not peer‑reviewed journal publications and carry caveats about retrospective confounding and incomplete covariate adjustment [6]. Advisory bodies such as the National Academy of Medicine have explicitly called for well‑designed studies to fill the gap because existing evidence is insufficient to draw definitive conclusions [9] [4].

5. Bottom line and research implications

In sum, the peer‑reviewed literature contains only a very small number of cohort‑style analyses that include wholly unvaccinated children. Their effective sample sizes are small and variably reported (for example, cohorts of roughly 666 in some published analyses and a separate survey cohort reduced to 1,565 after exclusions), and they frequently do not permit a clean, adjusted long‑term comparison of "fully vaccinated" versus "completely unvaccinated" children [1] [2] [3]. Major population cohorts are much larger but compare degrees of undervaccination, so they do not directly answer the binary question; authoritative bodies continue to call for rigorous, prospective studies designed to do so [5] [4].

Want to dive deeper?
What large population cohort studies have compared undervaccinated to fully vaccinated children, and what outcomes did they measure?
Which methodological designs are required to reliably compare fully vaccinated versus completely unvaccinated children for long‑term outcomes?
What have the National Academy of Medicine and the CDC recommended or planned regarding studies of the complete childhood immunization schedule?