Ever Heard of the Reproducibility Crisis?
Why You Should Care That Most Published Studies Can't Be Replicated.
Imagine you get a prescription for a new medicine from your doctor.
Or you go to the store to purchase an over-the-counter drug.
We’ve all been there at some point in our lives.
We trust that those drugs are safe because they are backed by years of scientific research and have been through a rigorous regulatory approval process.
Now, what if it turns out that the data used to justify the clinical trials that led to the approval of those drugs cannot be reproduced? What would that mean?
Let’s go for a walk down one of the darkest halls in the history of science.
What do I mean by replication?
First things first, just what is meant by replication and how does it differ from reproducibility?
Replication is in many ways the cornerstone of true science. Confirming results through repetition is essential. Environmental health scientist Stefan Schmidt summed it up as follows:
“It is the proof that the experiment reflects knowledge that can be separated from the specific circumstances (such as time, place, or persons) under which it was gained.” (Schmidt, 2009)
Without replication, how do we know that the results aren't simply artifacts of the biases of the scientists conducting them and of the specific experimental systems in which they were run?
To have relevance it must be demonstrated that an experiment can be independently replicated outside of the original lab.
Simply put, replication is “the action of copying or reproducing something” (New Oxford American Dictionary). In this case the thing being reproduced is the experimental results.
At least that is the goal. Unfortunately, what follows in this article will show that for the majority of the time that is anything but the case.
In contrast, reproducibility refers to obtaining the same results from the same data set, not conducting the same experiment anew to see whether you end up with the same data.
This is an important distinction to make before proceeding.
The CDC Said What?
In 2005, PLoS Medicine published a landmark essay by John Ioannidis in which he made the shocking claim that "There is increasing concern that most current published research findings are false" (Ioannidis, 2005).
Say what? I thought we were supposed to trust the science, that it was settled. You might not want to jump on that bandwagon until you've read the rest of this article.
While many, including a group from the CDC, agreed with Ioannidis' assessment, they tried to come up with ways to salvage the smoldering wreckage of science that he presented.
The claim from the CDC was that replication was one of the best ways to increase the probability that a given research finding was true. Here is what they said:
“As part of the scientific enterprise, we know that replication—the performance of another study statistically confirming the same hypothesis—is the cornerstone of science and replication of findings is very important before any causal inference can be drawn.” (Moonesinghe et al., 2007)
You’re going to want to remember that last part about it being “the cornerstone of science”.
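To see why the CDC group put so much weight on replication, it helps to look at the arithmetic behind Ioannidis' and Moonesinghe's argument. The sketch below is my own illustration of that framework, not code from either paper: the probability that a positive finding is actually true depends on the pre-study odds that the hypothesis is true, the statistical power of the study, and the false-positive threshold, and requiring independent replications drives that probability up sharply.

```python
def ppv(prior_odds, alpha=0.05, power=0.8, n_replications=1):
    """Positive predictive value of a finding confirmed by n independent
    positive studies, in the spirit of Ioannidis (2005) and
    Moonesinghe et al. (2007).

    prior_odds -- R, the ratio of true to false hypotheses being tested
    alpha      -- false-positive rate of each study
    power      -- probability each study detects a true effect
    """
    # A true hypothesis yields n positives with probability power**n;
    # a false one does so with probability alpha**n.
    true_positive = prior_odds * power ** n_replications
    false_positive = alpha ** n_replications
    return true_positive / (true_positive + false_positive)

# Illustrative prior: 1 true hypothesis for every 10 false ones (R = 0.1).
print(round(ppv(0.1, n_replications=1), 2))  # single positive study: 0.62
print(round(ppv(0.1, n_replications=2), 2))  # one added replication: 0.96
```

Under these (assumed) numbers, a single statistically significant result has only about a 62% chance of being true, while one successful independent replication pushes that past 96%. That is the quantitative sense in which "a little replication goes a long way."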
Several years later one of the greatest “hold my beer” moments in scientific history ensued when not one, but two biotech companies published their takes on what soon became known as “The Replication Crisis”.
What is The Replication Crisis?
A series of events in the early 2010s kicked off what came to be known as The Replication Crisis. Largely, it was the realization that the inability to replicate published studies was not only widespread, but way beyond what most scientists would have ever expected.
Outside of my own personal experience as a scientist, there are three main pieces that I believe characterize this crisis and boy are they doozies.
It's one thing if a hole-in-the-wall academic lab somewhere in the middle of nowhere reported that its experiments couldn't be replicated. I think we could all probably shrug that off.
What if it happened in the center of the biotech industry?
In the early 2010s two companies, Bayer Healthcare and Amgen, reported on a stunning lack of replication that occurred within the walls of their own labs.
Let me back up real quick because you might be wondering why in the world they undertook such an endeavor.
Well, most drug companies (pharma or biotech) strive to build and keep robust pipelines. In order to do this, they must either acquire new drugs or find ways to identify new targets to develop drugs against.
Development of pharmaceutical drugs is extremely costly in time, resources and perhaps most of all, the monetary investment. To mitigate that risk, many companies have put in place in-house new target validation teams.
Many new ideas for drug targets come from published studies, so it is important to validate them before making a further significant investment. No sense in moving forward if a target isn't valid, right?
Well, it was apparent that a significant number of published studies could not be replicated internally at Bayer, so they decided to put a number to it.
They conducted a survey of the scientists within their target discovery group. The results were quite revealing.
Bottom line: in only 20-25% of projects were the published data confirmed by in-house replication experiments.
That means roughly 75-80% couldn't be replicated. (Prinz et al., 2011)
The prestige or impact factor of the journal had no bearing on the reproducibility of the data. This confirmed what scientists at Bayer had been seeing, but the findings were shocking nonetheless.
Not to be one-upped, Amgen came along and decided to conduct a study of their own to evaluate whether or not they were able to confirm the results of published findings in-house.
Unlike Bayer, Amgen actually conducted the replication experiments, and they stacked the deck in favor of replication by selecting 53 "landmark" oncology studies with novel findings.
The results were absolutely shocking, devastating even.
Of the 53 studies attempted, the scientific findings were replicated in only 6 (11%). That means nearly 90% of the findings couldn't be replicated. (Begley & Ellis, 2012)
They knew it was bad, but nowhere near this bad.
Please kindly pick your jaws up off the floor so we can proceed. I’ll wait.
Ok, moving right along.
Finally, the Reproducibility Initiative, co-founded by former geneticist Elizabeth Iorns, raised enough money to do a study of their own. They set out to repeat selected experiments from 53 high-impact papers. Largely, their data confirmed Bayer's and Amgen's findings (Errington et al., 2021).
Together, all of these serve as strong confirmation of Ioannidis' thesis.
Suffice it to say, we have a big problem. Some might even conclude that the system is in fact flawed.
I hope you can understand the gravity of the problem, but just to drive it home a little more, I'll do a quick riff on some of my own experience.
It’s not just theoretical for me, I’ve walked it
In addition to these reported replication efforts, over my 20 years as a scientist I generated a lot of ideas from published studies. I have always tried to verify any original findings prior to expanding upon them myself.
This involved exercises similar to those described above: trying to replicate papers using the methods described in the paper itself, as well as devising other ways to verify the original hypothesis.
Essentially, I spent a good chunk of my career looking face-to-face at this very issue and I’m sure the results will shock you.
Yup, you guessed it. Overwhelmingly, the published data was not reproducible.
And, this was across multiple scientific disciplines including virology and oncology. No small potatoes with limited implications.
Over my career, a number of my colleagues reported similar findings, which demonstrates that this isn't isolated to me or to some specific inability on my part to replicate data.
The surprising thing I've found is that although many of my former lab mates are aware of this very issue and have run directly into it, they don't see why it means the data is invalid or that the system is flawed.
But, how can it not be a huge problem if replication is the very cornerstone of science?
Let’s now explore some of the implications.
So what if most scientific papers cannot be replicated?
Let’s now consider why the lack of validity and reproducibility of scientific data can have far reaching ramifications.
First off, it's not just one or two obscure papers we are talking about here. These were landmark papers in high-impact journals that failed to replicate.
Every paper is underpinned by other peer-reviewed, published papers that provide context and rationale. They are what helped to frame the hypothesis, assuming there even is one, and tell the reader why they should care about a particular new claim.
If the great majority of papers are false or invalid, then when you pick up a given paper, there is a high probability not only that the claims in that paper aren't real, but that the papers cited to back up the work aren't based in reality either.
In fact, Serra-Garcia and Gneezy demonstrated with hard data that papers published in top journals that fail to replicate are cited more than those that do replicate (Serra-Garcia & Gneezy, 2021).
Wow! Take a second and let that sink in.
For this reason alone, you shouldn’t trust “the science”. It’s a house of cards just waiting to collapse and that is only the start.
But why would unreplicable papers even be accepted in the first place? If that seems like a ridiculous state of affairs, it's because it is.
The fact is that in most cases the reviewers don’t actually see the raw data that backs up what is submitted for publication.
So, in effect, they are taking the submitting authors' word that the data has been sufficiently replicated and is thus valid. Big mistake!
It is also possible that they make an exception for a topic that they find particularly interesting and thus lower their already low standards.
More on peer review in an upcoming article, so stay tuned.
I’ll leave you with one more thing to think about on this topic.
Consider the fact that many landmark papers are then used as rationale to develop new drugs or to design human clinical trials. That means that many new drugs and the studies used to test them in animals and humans are flawed from the start.
In fact, it turns out that clinical trials for new oncology drugs have the lowest success rate of any therapeutic area. This is in large part because such non-reproducible studies were used as part of the preclinical validation.
We are thus treating extremely sick people with drugs that are very likely to not do anything to help but will actually inflict harm upon them.
The only thing we are in effect giving them is false hope. What a travesty.
I hope you can appreciate why this needs to be fixed. Yet, the reality is that nothing much has been done since these initial replication studies were published.
10 years have passed and the situation has not improved. Why not?
The lack of reproducibility is a well-known issue, so why hasn't it been addressed?
Well, here’s the kicker…
I mentioned this before but, it bears repeating.
Most scientists acknowledge that a lot of data cannot be reproduced, but they don't see why that invalidates the claims being made, nor do they think it means the system is flawed! Say what?
Here’s how Monya Baker put it:
“Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.” (Baker, 2016)
If most of the peer-reviewed published data can’t be replicated, then as John Ioannidis suggested, there’s a high probability that any associated findings are false.
Houston, we have a problem!
Although that’s not saying much for a scientific community that simply assumes claims in preclinical studies can be taken at face value.
But, why hasn’t anything been done?
It is a matter of incentive and in this case there is none. Richard Horton, the editor-in-chief of the renowned medical journal, The Lancet, put it this way:
“Part of the problem is that no-one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative.” (Horton, 2015)
In a way, you could say they are actually incentivized not to fix the problem, because fixing it would greatly stunt productivity.
Still, we must find a path forward if we are to regain any trust in scientific data whatsoever.
What could we do now to try and fix it?
Any path towards rectification of this and the many other issues that plague science would have to start with providing scientists with incentives to do the necessary work.
Some of the biggest reasons for the lack of replication are insufficient detail in the published methods and incomplete lists of the reagents that were actually used.
Therefore, any solution would require the implementation of various checkpoints to ensure experimental procedures and required reagents are properly documented.
Again, it’s sad to say but any effort to meaningfully fix this has to start and end with significant incentives to not only drive but sustain it.
Perhaps the biggest question is whether this is even possible inside a system that has become so bloated and corrupted.
A system that already controls much of the information that is put out by the media, Hollywood, television, radio, books and more.
It is my belief that it will have to start with communities of individuals like us that are willing to take a stand.
We believe that fostering open discussion is critical in these times.
So, moving forward each new Still in the Storm post will end with an inspiring prompt to get you started. Submit a comment below to join the conversation.
Is this the first time you’ve heard of the Reproducibility Crisis?
Were you aware that the problems plaguing scientific research today were this bad?
How does this impact your life and the decisions you will make moving forward?
Do you know someone that could benefit from this?
If you find the information in these articles valuable, we would be grateful for your help to get it to those who could most benefit.
Just click the button below to share and restack today’s post!
Get notified when new posts go live!
Lastly, to be notified as soon as a new post is live go ahead and subscribe to Still in the Storm by clicking the button below.
Schmidt S (2009). "Shall we Really do it Again? The Powerful Concept of Replication is Neglected in the Social Sciences". Review of General Psychology. SAGE Publications. 13 (2): 90–100. doi:10.1037/a0015108
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Moonesinghe, R., Khoury, M. J., & Janssens, A. C. J. W. (2007). Most Published Research Findings Are False—But a Little Replication Goes a Long Way. PLoS Medicine, 4(2), e28. https://doi.org/10.1371/journal.pmed.0040028
Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: how much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery, 10(9), 712. https://doi.org/10.1038/nrd3439-c1
Begley, C. G., & Ellis, L. M. (2012). Raise standards for preclinical cancer research. Nature, 483(7391), 531–533. https://doi.org/10.1038/483531a
Errington, T. M., Mathur, M., Soderberg, C. K., Denis, A., Perfito, N., Iorns, E., & Nosek, B. A. (2021). Investigating the replicability of preclinical cancer biology. eLife, 10. https://doi.org/10.7554/eLife.71601
Serra-Garcia, M., & Gneezy, U. (2021). Nonreplicable publications are cited more than replicable ones. Science Advances, 7(21). https://doi.org/10.1126/sciadv.abd1705
Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature, 533(7604), 452–454. https://www.nature.com/articles/533452a
Horton, R. (2015). Offline: What is medicine’s 5 sigma? The Lancet, 385(9976), 1380. https://doi.org/10.1016/S0140-6736(15)60696-1