Measuring scientists by volume of citations skews research outcomes toward the positive, new study suggests
Competition for research funding and academic positions, judged by volume of citations, produces overly positive research results. This is according to the British education news site Times Higher Education.
A study, conducted by Daniele Fanelli of the University of Edinburgh, finds that researchers report more positive results for their experiments in US states with the highest academic productivity.
Competition for research positions and funding, »combined with an increasing use of bibliometric parameters to evaluate careers … pressures scientists into producing ‘publishable’ results,« explains Daniele Fanelli, Marie Curie research fellow at Edinburgh’s Institute for the Study of Science, Technology and Innovation.
»There is quite a longstanding discussion about whether this growing culture of ‘publish or perish’ in academia is actually distorting the scientific process itself,« explains Fanelli.
Read the University Post article about the newly implemented Danish bibliometric system here.
Fanelli’s study is the first to attempt to verify the distortion effect in scientific literature across all fields.
»In a random sample of 1,316 papers that declared to have tested a hypothesis in all disciplines, outcomes could be significantly predicted by knowing the addresses of the corresponding authors,« writes Fanelli in his paper Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data, published this week in the open-access journal PLoS ONE.
»Those based in US states where researchers publish more papers per capita were significantly more likely to report positive results,« he continues.
Positive results accounted for under half of the total in Nevada, North Dakota and Mississippi.
In states such as Michigan, Ohio, the District of Columbia and Nebraska, however, between 95 and 100 per cent of reported results were positive.
According to Fanelli, scientific papers are more likely to be accepted by journals if they report positive results that support a hypothesis.
He argues that negative results most likely »either went completely unpublished or were somehow turned into positive through selective reporting, post-hoc reinterpretation, and alteration of methods, analyses and data«.