
John Oliver Has It Right – Not All “Science” Is Created Equal

Post by: Peter Vitiello, PhD 

Last week, HBO's satirical comedy newsman, John Oliver, addressed how scientific studies can be misconstrued by the media, transforming their original findings into entertainment suitable only for morning show banter. In addressing how junk science becomes overhyped, the episode did a great job of exposing the data replication issues that undermine the value of research investments.

(Click Here to View HBO's John Oliver Video Clip. Warning: Language may be inappropriate for some.)

Reproducibility became an important topic of discussion in 2012 when Amgen reported that it was unable to reproduce the findings of 47 of 53 “landmark cancer papers” published in journals with an impact factor greater than five. Even when accounting for the failure rates typical of pre-clinical studies, this 89% irreproducibility rate was shocking and served as the motivation for new standards. The US National Institute of Neurological Disorders and Stroke called for data reporting standards covering animal randomization, blinded assessment, sample-size estimation, and data handling; biological sex and reagent authentication have since been added to this list. However, there was also a clear delineation between how this rigor should be applied to exploratory studies (early-stage observational tests) and to more robust hypothesis-testing experiments.

In light of such issues, direct requests for access to the raw data and protocol details of published studies have increased. However, the editors of the New England Journal of Medicine, which coincidentally has the highest retraction rate of any journal in the world, recently referred to such scientists as “research parasites”. Casey Greene at the University of Pennsylvania Perelman School of Medicine used this opportunity to create the Research Parasite Award, which recognizes outstanding contributions to the rigorous secondary analysis of data (nominations are due by October 14). More transparent approaches for handling confirmatory and replicative studies performed by such “parasites” are being developed. Although Amgen’s 47 irreproducible studies remain anonymous, the company recently released data on three failed studies through a new “Preclinical Reproducibility and Robustness” channel created by Faculty of 1000. Other publishers are following suit: Elsevier recently announced the “Invited Reproducibility Paper” as a new article type in the data science journal Information Systems.

In summary, I’m very excited to see greater data sharing and transparency through guidelines set by both funding agencies and publishers, along with opportunities and respect for parasitic confirmatory studies. I hope that scientists embrace such expectations and do not use them as the sole justification for dismissing creativity and novelty during peer review. My only fear is knowing that my harshest critics watch John Oliver and that I’ll be expected to answer these cynics at our next family gathering.

  1. Season 3, Episode 11. Last Week Tonight with John Oliver (HBO, 2016).
  2. Begley, C.G. & Ellis, L.M. Drug development: Raise standards for preclinical cancer research. Nature 483, 531-533 (2012).
  3. Landis, S.C., et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature 490, 187-191 (2012).
  4. Oransky, I. & Marcus, A. For science to improve, let's put the right prizes on offer. (STAT, 2016).
  5. Baker, M. Biotech giant publishes failures to confirm high-profile science. Nature 530, 141 (2016).
  6. Chirigati, F., Capone, R., Rampin, R., Freire, J. & Shasha, D. A collaborative approach to computational reproducibility. Information Systems (2016).

Category: Redox Biology