Science is considered a source of truth and the importance of its role in shaping modern society cannot be overstated. But in recent years science has entered a crisis of trust.
The results of many scientific experiments have proved surprisingly hard to reproduce, while high-profile mistakes have exposed flaws in the peer review system. This has damaged scientific credibility and prompted researchers to propose new measures to safeguard the quality of academic research and its findings.
This is particularly relevant in the UK, whose government prides itself on science-driven policy making. Policies are often drawn from behavioral research, traditionally considered a “soft science”. The head of the UK’s behavioral insights team – the “nudge unit” – argues that these days research economists can “change the world for the better”. But social scientists have debated the reliability and reproducibility of some behavioral research, prompting some to wonder whether science-driven policy has its limitations – and whether over-reliance on it can even backfire.
So leading scientists have suggested a variety of proposals to change the way that science produces knowledge. These include promoting transparency about research designs, creating incentives to repeat experiments, and requiring researchers to submit a full plan of the design and analysis before the study begins – known as pre-registration.
It is remarkable, however, that economists have so far remained largely silent on this credibility crisis. Economics is, after all, the science that specializes in analyzing strategic behavior and designing incentives to promote desirable outcomes.
Our research takes up this challenge and provides a first step in examining the theoretical effects of the proposed policies of increased transparency and monitoring on the reliability of scientific results.
Although the image of altruistic researchers working hard to discover the truth is strong in the minds of the general public, the actual process by which academic research is conducted is different. Economic theory models the various incentives scientists face, prominent among them the desire to climb the academic ladder.
We focus on proposals to impose transparency – which would stop researchers from engaging in the questionable practices that make scientific evidence difficult to interpret.
The main result of our model is that discouraging slight transgressions, such as failing to report important details of the analysis, will also reduce more severe questionable research practices such as outright data manipulation. This is because questionable research practices serve as the “steroids” of the scientific race, where the abundance of a given form of misconduct increases the incentives to engage in more extreme misconduct.
Accordingly, a policy that eradicates mild forms of misconduct also discourages the use of stronger “performance enhancers”.
We examine a setting where researchers are motivated to conduct research ethically and to maintain a good reputation, but are also concerned with publishing in a limited number of top journals. The latter is crucial, as it introduces an “economic externality”.
Easing the pressure
The likelihood that an individual researcher will commit a questionable research practice depends on the behavior of other researchers: more light transgressions result in a higher frequency of outright manipulation, as researchers strive to guarantee a unique result and the acclaim it brings.
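This “steroids” externality can be sketched in a stylized contest – a toy model of our own making, not the paper’s actual specification, with entirely made-up numbers. Suppose 100 researchers compete for 10 journal slots, and a result’s apparent quality is random noise plus a boost from misconduct (clean adds 0, a mild questionable practice adds 1, outright manipulation adds 2). We can then estimate how much a single researcher gains by escalating from mild to severe misconduct, as a function of how many rivals already use the mild practice:

```python
import random

random.seed(1)

# Toy contest, for illustration only: N researchers, K journal slots.
# Apparent quality = misconduct boost + uniform noise on [0, 1).
N, K, TRIALS = 100, 10, 1000

def publish_prob(my_boost, frac_mild):
    """Monte Carlo chance of landing in the top K against N-1 rivals,
    of whom a share frac_mild use the mild practice (boost 1) and the
    rest stay clean (boost 0)."""
    n_mild = round(frac_mild * (N - 1))
    wins = 0
    for _ in range(TRIALS):
        rivals = [1 + random.random() for _ in range(n_mild)]
        rivals += [random.random() for _ in range(N - 1 - n_mild)]
        rivals.sort()
        my_score = my_boost + random.random()
        if my_score > rivals[-K]:  # beat the K-th best rival
            wins += 1
    return wins / TRIALS

def escalation_gain(frac_mild):
    """Extra publication probability from escalating to severe
    manipulation (boost 2) instead of the mild practice (boost 1)."""
    return publish_prob(2, frac_mild) - publish_prob(1, frac_mild)

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"share of rivals using mild practices {f:.2f}: "
          f"gain from escalating {escalation_gain(f):+.2f}")
```

When rivals are clean, the mild practice alone already secures a slot and escalation adds nothing; as the mild practice spreads, only outright manipulation guarantees standing out, so the payoff to escalating rises – the mechanism described above.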
Therefore a transparency policy that reduces lighter transgressions does not, as might be expected at first glance, lead to more severe misbehavior. On the contrary, reducing the incidence of lighter misdemeanors will reduce the competitiveness of the race to publication and thus ease the pressure to engage in questionable practices.
Other possible policies could aim to reduce more severe transgressions – such as data fabrication – by using statistical techniques to detect them. But this could increase the rewards and frequency of lighter transgressions, making the overall effect on the reliability of scientific results unclear.
Mathematical models are especially useful when they address policy changes that are not amenable to direct experimentation. This is because it is the theory that bridges the gap between the status quo and the proposed alternative. Performing direct experiments on researcher misconduct is costly and difficult, but the potential effects of proposed reforms can still be evaluated using economic theory.
Our model gives us confidence that implementing the transparency proposals will help science fulfill its purpose of discovering the truth.