Empirical economics often claims to provide rigorous estimates of behavioral parameters that can be judged against the gold standard of experimental evidence. Yet a key feature of this method is typically neglected: the role of replications in assessing the reliability of estimates and, importantly, in detecting fraudulent or erroneous research.
Given the large impact of economic research on policy-making, errors or fraud can have far-reaching consequences. An infamous recent example is the study "Growth in a Time of Debt" by Harvard economists Carmen Reinhart and Kenneth Rogoff. Its results, invoked in U.S. Republican budget proposals to justify strict austerity as a way to stimulate economic growth, were later shown to be driven by a simple mistake in Excel.
Making replications mandatory in curricula would be useful
German researchers tend to agree that replication studies are important and worthwhile. Nonetheless, they have so far been unwilling to devote significant time to conducting such studies, often because the chances of publication are low. This is one of the key findings from a survey of 300 German researchers from various disciplines conducted by economists Benedikt Fecher (DIW), Mathis Fräßdorf (DIW) and Gert G. Wagner (DIW & IZA), which was recently published as IZA Discussion Paper No. 9896.
The authors characterize this situation as a typical "tragedy of the commons": every researcher knows that replications are useful, but most count on others to conduct them. If replications are done at all, they are typically used only for teaching purposes and doctoral theses. This points to the need for new incentives that make such studies more attractive to researchers, e.g., better publication opportunities or dedicated funding for this type of research.
Research parasites are beneficial for the organism as a whole
Replications generally depend on the availability of data from the original studies. The view that data should be universally available for this purpose is not undisputed, though. In a second contribution, which appeared as IZA Discussion Paper No. 9895, Wagner and Fecher point to a recent debate in medical science about "research parasites," i.e., researchers who primarily work with secondary data instead of engaging in original data collection.
Against this background, the authors highlight the need for new instruments of credit that allow researchers who engage in original data collection to boost their reputation when their work is used in secondary analyses. A culture of citing data sets, complemented by awards for the best data sets and data collection efforts, is developing into an important factor in the overall assessment of scientific research.
IZA’s CEO Hilmar Schneider underscores these arguments, explaining:
Up to now, the effort put into doing a replication has been rewarded unequally. While the chances of getting a replication published that uncovers serious errors in a previous study are quite high, the chances for a replication that simply reproduces existing findings are close to zero. Researchers doing replications therefore face a substantial risk of wasting their time, which effectively prevents scientific quality control. It's high time to give equal credit to any replication, no matter whether it falsifies or reproduces existing findings.