Clinical trials are major undertakings: time-consuming, expensive, and with no guarantee of the desired results. With the average clinical trial costing tens of millions of dollars, over two-thirds of US clinical trials are funded by industry sponsors (1,2). But does that involvement influence the outcomes of drug trials? New research suggests that it does, with drugs appearing significantly more effective in trials with manufacturer involvement than in those without (3).
Drug déjà vu
Ohio State University health economist and study author Tamar Oostrom first began to notice discrepancies while reviewing psychiatric drug trials. “I was reading papers on antidepressants and started to get déjà vu,” said Oostrom. “One day, I’d read a paper comparing two drugs, and the next, I’d read another that compared the same drugs with a different funder — and different results.”
To investigate this phenomenon, Oostrom took a closer look at clinical trials for major depressive disorder. She gathered data on every available double-blind, randomized controlled trial of antidepressant or antipsychotic drugs in adults between 1979 and 2015, obtaining the original publications where possible and relying on meta-analyses and other reports otherwise. The results were clear: In trials sponsored by a drug’s manufacturer or marketer, researchers reported that drug to be 49 percent more effective than in trials with the same comparators but no manufacturer or marketer involvement. Sponsor-funded trials were also 43 percent more likely to report statistically significant improvements and 73 percent more likely to find their drug more effective than any other.
Uncovering publication bias
Initially, Oostrom suspected that this bias arose from the trial design, but after examining the characteristics of the trials and their patient populations, she found little evidence that these factors were influencing outcomes. Instead, most of the effect came from publication bias.
“By publication bias, I mean that trials in which the manufacturers are involved are more likely to be published when they find positive results than when they find neutral or negative results,” Oostrom explained. “The relationship between effect size and publication probability is much weaker for trials that have other funding.” In fact, when Oostrom included the results of unpublished trials in her analyses, most of the sponsorship effect disappeared.
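The mechanism Oostrom describes can be illustrated with a toy simulation. All numbers below (true effect size, noise level, publication probabilities) are hypothetical choices for illustration, not figures from the study: when null or negative results are less likely to be published, the average published effect drifts above the true one.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # hypothetical true drug-vs-comparator effect
N_TRIALS = 10_000   # simulated trials per funding type

def mean_published_effect(publish_negative_prob):
    """Simulate trials whose observed effect is the true effect plus noise.
    Positive results are always published; null or negative results are
    published only with the given probability."""
    published = []
    for _ in range(N_TRIALS):
        observed = random.gauss(TRUE_EFFECT, 0.3)  # noisy trial estimate
        if observed > 0 or random.random() < publish_negative_prob:
            published.append(observed)
    return statistics.mean(published)

# Independently funded trials: most results reach publication either way.
independent = mean_published_effect(publish_negative_prob=0.9)
# Sponsor-funded trials: negative results are far less likely to appear.
sponsored = mean_published_effect(publish_negative_prob=0.3)

print(f"mean published effect, independent funding: {independent:.3f}")
print(f"mean published effect, sponsor funding:     {sponsored:.3f}")
```

In this sketch, both averages sit above the true effect, but the gap is much larger when negative results are suppressed more aggressively, mirroring the weaker effect-size–publication relationship Oostrom found in independently funded trials.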
Although the analyses controlled for trial length, total enrollment, drug dosage, patient age, gender, and condition severity, a small portion of the bias effect remained unexplained. “There are lots of things I didn’t observe about these trials,” Oostrom said. “I don’t have reliable information on side effects or other attributes that are not well-recorded in these papers. I also can’t rule out any manipulation of results. Most of the effect arises from publication bias, but the rest could potentially be explained by these unobserved factors.”
Mitigating the sponsorship effect
Limiting the analyses to psychiatric drug trials offered Oostrom a unique opportunity to compare trials that differed only in their funding. Because treatments for major depressive disorder have faced controversy over their effectiveness, many researchers have run similar trials with the same drugs and patient populations, something that rarely happens for other conditions.
“The study’s focus on psychiatric drugs may limit its findings’ generalizability to other drug classes or conditions,” warned Kush Dhody, the President of the contract research and consulting organization Amarex Clinical Research, who was not involved in Oostrom’s work. “Additionally, although the study provides strong evidence of publication bias, it does not fully capture the extent of unpublished trials, leaving questions about the true efficacy of drugs.”
For now, policy changes are the most likely route to reducing the sponsorship effect. Industry funding is crucial for these trials, so instead of removing sponsors, Oostrom and Dhody recommended mandating the publication of all results, whether positive or negative, alongside ensuring that trials use careful design and strong ethical oversight to preserve scientific integrity. “By adopting these measures,” Dhody said, “the research community can enhance clinical trials’ reliability and foster more trustworthy results, benefiting both regulatory decisions and patient care.”
References
- Moore, T.J. et al. Estimated Costs of Pivotal Trials for Novel Therapeutic Agents Approved by the US Food and Drug Administration, 2015-2016. JAMA Intern Med 178, 1451–1457 (2018).
- Siena, L.M. et al. Industry involvement and transparency in the most cited clinical trials, 2019–2022. JAMA Netw Open 6, e2343425 (2023).
- Oostrom, T. Funding of clinical trials and reported drug efficacy. J Political Econ 132, 3298–3333 (2024).