
Opinion

The Fallibility of the Scientific Method

Science — How do we react if it turns out that the scientific method itself is fallible? Using three examples, I will argue that this is, in fact, the case, although it is seldom spoken of.

This week we celebrate science. The Danish Science Festival is taking place all over the country, and on Sunday we joined the March for Science in both Copenhagen and Aarhus, together with the rest of the world. Science is indeed an amazing endeavor which deserves celebration.



Science is, in essence, the scientific method. It is the process of collecting data in a structured manner. It is applying critical thinking to the matter in front of you. It is the principle of fallibility and reproducibility. But how do we react if it turns out that the scientific method itself is fallible? Using three examples, I will argue that this is, in fact, the case, although it is seldom spoken of.

‘The Reproducibility Crisis’

One of the pillars of the scientific method is the replication of previous findings, which lets us gauge how likely they are to reflect Nature correctly. However, the number of published replication studies is terrifyingly low, and most scientists would agree that science is facing a ‘reproducibility crisis’. An article on a survey in Nature sheds light on the ‘crisis’ rocking research.

That this is in fact a problem has been demonstrated over and over again in various fields of research, e.g. psychology and biomedical research (2,3).

Possible solutions are being developed. For example, it is now possible to apply for funding for replication studies from the Netherlands Organisation for Scientific Research (NWO) and the US National Science Foundation’s Directorate of Social, Behavioral and Economic Sciences. See a manifesto for reproducible science.

In our own Danish setting, on the other hand, we have no such initiatives. Concerns are not even audible. Where are the voices concerned with ensuring the reproducibility of Danish research? If hypothesis-testing research is not reproducible, it is not falsifiable, and thus cannot be regarded as knowledge – it is a wasted effort.

The significance threshold

Another example is the significance threshold – the renowned p < 0.05 in many fields. How can we apply critical thinking to our subject of study and remain wary of conflicting research findings, yet blindly accept an arbitrary line that distinguishes true from false?


Truth is not binary, but rather lies along a continuum of truthfulness – a continuum of the probability that findings actually reflect the state of Nature. Other important measures include statistical power and the false-positive risk. Statisticians are already aware of the inadequacy of condensing all this complexity into a single binary measure. See five ways to fix statistics. Every student and every researcher should be just as wary.
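
For the statistically minded reader, here is a minimal back-of-the-envelope sketch of why p < 0.05 alone says little about how likely a finding is to be true. The prior probability and power figures are assumptions chosen purely for illustration, not numbers from any particular study.

```python
# A minimal sketch (assumed, illustrative numbers only): how often is a
# 'significant' finding actually a false positive?

alpha = 0.05   # significance threshold
power = 0.80   # assumed probability of detecting a real effect when it exists
prior = 0.10   # assumed share of tested hypotheses that are actually true

# Among results that cross the p < 0.05 line, how many come from false hypotheses?
false_positives = alpha * (1 - prior)   # false hypotheses that slip through
true_positives = power * prior          # true hypotheses that are detected
false_positive_risk = false_positives / (false_positives + true_positives)

print(f"False-positive risk at p < {alpha}: {false_positive_risk:.0%}")
# Roughly 36% under these assumptions - far from the 5% many readers expect.
```

The exact percentage depends entirely on the assumed prior and power; the point is simply that a single binary threshold hides this dependence.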

Publication bias

Finally, due to the current structure of research publication, there are flawed scientific practices. Not fraudulent practices, just flawed ones. Science is a human endeavor, and we as human beings are not perfect. We are biased, both consciously and unconsciously.

We like getting significant results, which is reasonable enough: in order to continue receiving funding, we need to have our research published, and in order to get our research published, we rely on positive results. This has been dubbed ‘publication bias’. It is basically a structure with flaws embedded, but it is not something that cannot be overcome.
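
To make the mechanism concrete, here is a small simulation sketch, under assumed numbers chosen only for illustration: many underpowered studies of the same modest effect are run, but only the statistically significant ones are ‘published’.

```python
# A minimal simulation sketch of publication bias (all numbers are assumptions
# for illustration): many small studies of the same modest effect are run,
# but only the 'significant' ones reach the literature.
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2    # assumed true effect size (in standard deviations)
SAMPLE_SIZE = 20     # per group; deliberately underpowered
N_STUDIES = 2000

published = []
for _ in range(N_STUDIES):
    treatment = [random.gauss(TRUE_EFFECT, 1) for _ in range(SAMPLE_SIZE)]
    control = [random.gauss(0, 1) for _ in range(SAMPLE_SIZE)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (statistics.variance(treatment) / SAMPLE_SIZE
          + statistics.variance(control) / SAMPLE_SIZE) ** 0.5
    if abs(diff) / se > 1.96:   # crude 'p < 0.05' filter decides publication
        published.append(diff)

print(f"True effect: {TRUE_EFFECT}")
print(f"Published: {len(published)} of {N_STUDIES} studies")
print(f"Mean published effect: {statistics.mean(published):.2f}")
# The published average ends up several times the true effect:
# the selection filter, not fraud, does the distorting.
```

Under these assumptions only a small fraction of the studies reach the literature, and their average effect is several times larger than the true one – no fraud required, just selection on significance.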

One way of resolving this issue for hypothesis-testing research (as opposed to hypothesis-generating research) is pre-registration. In short, it is an arrangement in which the publisher commits to publishing your research regardless of the outcome. It entails disclosing your research design up front and, to varying degrees, having that design peer reviewed.


To sum up, there is a discrepancy between the current and the optimal structure, and solutions exist to minimize this discrepancy. However, most researchers and policymakers are unaware of the discrepancy in the first place.

Conclusion

If we cannot accept that there is room for improvement regarding the scientific method, our legitimacy will slowly fade away. We encourage politicians to make science-based decisions, yet we remain static in our own ways. If we really care about science – possibly the greatest human endeavor in all history – we will have to accept its shortcomings and be prepared to make changes.

Enacting change is a collective effort: it takes policymakers, funders, journals, scientists and students. I wholeheartedly encourage my fellow students to take a stand to improve the state of science. Change can be brought about in a bottom-up manner.

We, as students, have the unique opportunity of being unconditionally critical. In the spirit of science, we should be critical of science. Challenge our supervisors. Have the discussion with our peers. Be vocal. Together we can make better science.
