What methodological flaws did critics identify in the unpublished vaccinated vs. unvaccinated study featured in An Inconvenient Study?

Checked on January 6, 2026

Executive summary

The unpublished Henry Ford vaccinated vs. unvaccinated study showcased in the film An Inconvenient Study was widely criticized for multiple, interlocking methodological flaws that reviewers said made its core conclusion (that vaccinated children had higher rates of chronic conditions) unreliable and biased [1] [2]. Independent statisticians and media fact-checks pointed to unequal follow-up times, mismatched cohorts, a small and non-representative unvaccinated sample, unvalidated metrics, and likely ascertainment bias driven by differences in healthcare utilization [3] [4] [5] [6].

1. Sample size and cohort matching were inadequate and uneven

Critics highlighted that the unvaccinated arm was much smaller than the vaccinated arm and that the two groups were not matched on key baseline characteristics, a deficiency that undermines causal inference; Henry Ford Health itself cited the small unvaccinated sample and poor matching as grounds for abandoning the draft [7] [1]. Science Feedback and other evaluators recommended cohorts matched on birth year and other confounders, because a raw comparison of roughly 16,500 vaccinated to roughly 2,000 unvaccinated children can amplify bias if important covariates differ between the groups [3] [5].
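To make the recommendation concrete, here is a minimal sketch of the kind of 1:1 exact matching reviewers described, pairing each unvaccinated child with a vaccinated child of the same birth year and sex. The data, the field names, and the choice of sex as a second matching variable are assumptions for illustration only; a real analysis would match on many more confounders or use propensity scores.

```python
import random
from collections import defaultdict

random.seed(1)

# Synthetic records; field names are assumptions, not the study's schema.
def make_children(n, vaccinated):
    return [{"id": f"{'v' if vaccinated else 'u'}{i}",
             "birth_year": random.randint(2010, 2016),
             "sex": random.choice("MF")}
            for i in range(n)]

vaccinated = make_children(16_500, True)
unvaccinated = make_children(2_000, False)

# Index the large vaccinated pool by the matching key (birth year, sex):
pool = defaultdict(list)
for child in vaccinated:
    pool[(child["birth_year"], child["sex"])].append(child)

# Pair each unvaccinated child with one vaccinated child, without replacement:
matched_pairs = []
for child in unvaccinated:
    candidates = pool[(child["birth_year"], child["sex"])]
    if candidates:
        matched_pairs.append((child, candidates.pop()))

print(f"matched {len(matched_pairs)} of {len(unvaccinated)} unvaccinated children")
```

The point of the exercise is that downstream comparisons are then made within matched pairs, so a covariate like birth year cannot masquerade as a vaccine effect.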

2. Differential follow‑up time produced misleading exposure windows

Multiple reviewers said the vaccinated children were observed for roughly twice as long as the unvaccinated children, creating far more opportunity to detect diagnoses that often emerge after age four, an issue Dr. Jake Scott and others called "fundamental" and "fatal" to the study's conclusions [4] [2] [3]. The Conversation's biostatistician analysis underlined that up to 25% of unvaccinated children were followed for less than six months, and half for less than 15 months, windows too short to capture many chronic or neurodevelopmental conditions [3].
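The distortion this creates can be reproduced with simple arithmetic. The sketch below uses entirely hypothetical numbers (the shared true rate and the follow-up durations are assumptions, not figures from the draft) to show how comparing the raw share of children ever diagnosed manufactures a twofold difference between two groups with identical incidence, while rates per person-year do not.

```python
# Suppose both groups develop a chronic condition at the SAME true rate:
true_rate = 0.02  # diagnoses per child-year (assumed, illustrative)

# Vaccinated children followed roughly twice as long, per reviewers:
vax_children, vax_years = 16_500, 4.0      # assumed mean follow-up
unvax_children, unvax_years = 2_000, 2.0   # assumed mean follow-up

vax_cases = true_rate * vax_children * vax_years       # expected diagnoses
unvax_cases = true_rate * unvax_children * unvax_years

# Naive "ever diagnosed" comparison ignores follow-up time:
naive_vax = vax_cases / vax_children        # 0.08, i.e. 8% ever diagnosed
naive_unvax = unvax_cases / unvax_children  # 0.04, i.e. 4% ever diagnosed
print(f"naive risk ratio: {naive_vax / naive_unvax:.1f}")  # 2.0 -- spurious

# Person-time incidence rates recover the truth:
rate_vax = vax_cases / (vax_children * vax_years)
rate_unvax = unvax_cases / (unvax_children * unvax_years)
print(f"rate ratio: {rate_vax / rate_unvax:.1f}")  # 1.0 -- no difference
```

Person-time denominators are standard in cohort epidemiology precisely because they neutralize this artifact, which is why reviewers treated its absence as disqualifying.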

3. Healthcare utilization differences created ascertainment bias

A central methodological criticism was that vaccinated children had substantially more clinic visits, meaning conditions were more likely to be diagnosed in that group while being missed among children who rarely accessed care—classic ascertainment bias that inflates associations without establishing causation [5] [2]. Science Feedback and other reviewers have repeatedly flagged this same problem in prior vaccinated/unvaccinated comparisons and warned that failure to control for healthcare‑seeking behavior renders the observed associations unreliable [6] [8].
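A toy simulation illustrates the mechanism. In the sketch below, both groups carry the same true prevalence, but a condition only enters the record if a clinic visit occurs and the clinician detects it; the prevalence, visit counts, and per-visit detection probability are all assumed values chosen only to make the effect visible.

```python
import random

random.seed(0)

TRUE_PREVALENCE = 0.10    # identical in both groups (assumed)
DETECT_PER_VISIT = 0.30   # chance one visit yields a diagnosis (assumed)

def observed_prevalence(n_children: int, visits_per_child: int) -> float:
    """Fraction of children with a *recorded* diagnosis."""
    recorded = 0
    for _ in range(n_children):
        if random.random() < TRUE_PREVALENCE:  # child truly has the condition
            # Probability at least one visit detects it:
            p_detect = 1 - (1 - DETECT_PER_VISIT) ** visits_per_child
            if random.random() < p_detect:
                recorded += 1
    return recorded / n_children

high_util = observed_prevalence(50_000, visits_per_child=10)  # frequent attenders
low_util = observed_prevalence(50_000, visits_per_child=1)    # rare attenders
print(f"high utilization: {high_util:.3f}")  # ~0.097 (nearly all detected)
print(f"low utilization:  {low_util:.3f}")   # ~0.030 (most go unrecorded)
```

Despite identical true prevalence, the frequently attending group appears roughly three times more affected, the pattern reviewers said could arise purely from differences in healthcare-seeking behavior.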

4. Use of unvalidated metrics and potential analytical artifacts

Observers also pointed out that the study relied on a novel, unvalidated metric to compare disease incidence across groups, and did not convincingly demonstrate that this measure reflected true disease burden rather than differences in detection, charting, or coding practices [6]. Prior controversies in the literature, cited by Science Feedback, show how new metrics or inadequate controls produce misleading odds ratios in unadjusted observational analyses [9] [8].

5. Broader sampling and representativeness problems limit generalizability

Beyond internal biases, reviewers noted the sample was not representative of the wider population of U.S. children; historic critiques of similar studies emphasize non‑representative convenience sampling as a recurring flaw that weakens external validity [8] [9]. Independent, large‑scale studies such as national registries in Denmark and representative surveys have not replicated the claimed harms, strengthening the critique that this unpublished draft was an outlier driven by methodological shortcomings rather than new epidemiological truth [5] [2].

6. Institutional response, alternative viewpoints and the limits of public reporting

Henry Ford Health publicly stated that the draft was abandoned after internal review found serious data and methodological flaws and was therefore never advanced for publication, an institutional account echoed by fact-checks [1]. Filmmakers and some advocates, by contrast, argue the paper was suppressed for political reasons and stand by the data; critics counter that publishing deeply flawed analyses would be irresponsible and that the methodological failures (unequal follow-up, small unmatched controls, detection bias, and unvalidated metrics) justify rejection pending rigorous redesign [7] [1] [5]. While the available reporting cannot resolve questions of motive, it converges on these technical deficits as the objective basis for scientific rejection [4] [3].

Want to dive deeper?
What specific statistical methods would correct for unequal follow‑up and ascertainment bias in vaccinated vs. unvaccinated cohort studies?
How have large, peer‑reviewed registry studies (e.g., Denmark) evaluated long‑term health outcomes after routine childhood vaccination?
What criteria do academic health systems use to determine when a draft study is unsuitable for submission or publication?