What peer‑reviewed studies have successfully obtained and reanalyzed industry IPD for blockbuster drug trials, and what did they find?
Executive summary
A small, identifiable set of peer‑reviewed reanalyses have obtained industry individual‑participant data (IPD) from trials — notably the Restoring Study 329 project, a reanalysis of the TORCH trial, and a reanalysis of the SMART‑AF trial — and their conclusions ranged from overturning original claims to largely confirming them [1] [2]. Systematic surveys of EMA‑assessed trials found that despite public policies, only a handful of independent IPD reanalyses of pivotal studies have been published, and access delays and company refusals remain common [1] [3].
1. The blunt list: which peer‑reviewed reanalyses actually obtained industry IPD
Peer‑reviewed reanalyses that explicitly obtained and reanalysed industry IPD for high‑profile trials include “Restoring Study 329” (Le Noury et al.), a reanalysis of the TORCH trial, and a reanalysis of the SMART‑AF trial; these are the three reanalyses identified in a cross‑sectional study of EMA‑assessed trials and outputs from data‑sharing platforms [1]. The Restoring Study 329 effort relied on highly granular, case report form (CRF)‑level IPD to recategorize adverse events in Study 329, a trial of Paxil (paroxetine) in adolescents, and was discussed in PLOS Medicine as a notable example where access to original CRFs changed the interpretation of a trial [2].
2. What those reanalyses found: overturned claims, attenuated effects, or affirmation
The Restoring Study 329 reanalysis contradicted the original publication by recoding adverse events (for example, reclassifying some events as suicidality rather than as milder categories), thereby altering the safety narrative for Paxil in children [2] [1]. The TORCH reanalysis suggested the original publication overestimated the treatment effect, while the SMART‑AF reanalysis reached conclusions similar to the original trial’s; together they show that access to IPD can both challenge and corroborate industry results, depending on the case [1].
3. How rare these successes are, and the practical barriers researchers face
A cross‑sectional study of EMA‑assessed main trials found that, among 88 published outputs derived from data‑sharing platforms, only three were full reanalyses, highlighting how rare independent IPD reexamination is even where sharing mechanisms exist [1]. Investigators attempting to obtain IPD routinely face long request times and are sometimes denied on grounds such as “lack of scientific merit” under company procedures, a pattern documented in a BMC Medicine study that examined requests for EMA‑assessed trials [3]. PLOS Medicine and methodological reviews also stress that the availability and quality of IPD vary, and that deep reanalyses sometimes require original CRFs and coding materials that are not always released [4] [2].
4. Why IPD reanalysis matters for blockbuster drug trials and the limits of current evidence
IPD enables reclassification of adverse events, subgroup‑level checks, and alternative analytic approaches that aggregate published summaries cannot replicate, which is why the Paxil case required CRF‑level IPD to reveal serious safety concerns missed or downplayed in the original reporting [2] [4]. Yet the empirical landscape shows that most pivotal trials for high‑revenue (blockbuster) drugs have not been independently reanalysed from IPD in peer‑reviewed literature, leaving open empirical questions about how often original inferences are robust across the full portfolio of blockbuster trials [1] [3].
5. The takeaway: cautious optimism, structural bottlenecks, and research priorities
The handful of peer‑reviewed reanalyses that did obtain industry IPD demonstrate both the power of reanalysis to change clinical interpretation and the reality that such work remains exceptional rather than routine. Transparency initiatives have widened access channels but have not yet produced a steady stream of independent IPD reanalyses of blockbuster trials, owing to logistical, procedural, and gatekeeping barriers [1] [3] [2]. Methodological guides for IPD meta‑analysis set out best practices for the reanalyses that do occur, but the literature makes clear that more consistent access to CRFs, protocols, and analysis code is needed before reproducibility can be judged at scale for high‑stakes drugs [4] [5].