In empirical research, studies on the same topic often produce vastly different results. Take, for example, the debate on minimum wage policies: while many studies conclude that raising the minimum wage reduces employment among affected workers, others find no effect on employment, or even a positive one. Similar discrepancies appear in research on how immigration affects the economic opportunities of native workers.
These differences are not purely random but are often driven by the methodological choices researchers make. These include the selection of data, the definition of key variables, and the statistical methods employed. A less visible but equally important factor is the personal stance of researchers on the issue they are studying. Ideological beliefs can subtly influence how studies are designed, analyzed, and interpreted.
71 teams, 158 researchers, 1,253 regression models
A recent IZA Discussion Paper by George J. Borjas and Nate Breznau sheds light on this phenomenon by exploring how researcher ideology shapes empirical findings. The study utilized a unique experimental setup involving 71 research teams and 158 researchers who analyzed the same dataset. Their task was to determine whether immigration influences public attitudes toward welfare programs.
Each team independently decided how to approach the analysis, including how to select data samples, define key variables, and specify statistical models. These choices produced 1,253 different regression models, each yielding distinct results. The findings revealed significant ideological bias: researchers with pro-immigration views reported more favorable effects of immigration on public support for welfare programs, while those with anti-immigration views leaned toward more negative conclusions. These differences were largely driven by methodological choices.
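The sheer number of models follows from simple combinatorics: each independent analytic choice multiplies the space of possible specifications. A minimal sketch of this "researcher degrees of freedom" effect (the dimension names and options below are illustrative, not the study's actual coding scheme):

```python
from itertools import product

# Hypothetical analyst choices: each team picks one option per dimension.
# The number of possible specifications is the product of the option counts.
choices = {
    "sample": ["all countries", "OECD only", "drop outliers"],
    "immigration_measure": ["stock", "flow", "change in stock"],
    "outcome": ["support index", "single survey item"],
    "model": ["OLS", "multilevel", "fixed effects"],
}

# Enumerate every combination of choices as one candidate specification.
specs = [dict(zip(choices, combo)) for combo in product(*choices.values())]

print(len(specs))  # 3 * 3 * 2 * 3 = 54 distinct specifications
```

Even this toy grid of four choices yields 54 specifications; with the dozens of decisions real teams face, reaching over a thousand distinct models across 71 teams is unsurprising.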
Interestingly, teams with extreme ideological stances, whether pro- or anti-immigration, tended to produce lower-quality models, as judged by peer referee scores. Moderate teams, by contrast, were more likely to adopt higher-quality research designs and received higher peer evaluations.
Policy-relevant research particularly prone to bias
The study also highlights broader challenges in empirical research. Researchers often face time and resource constraints, which may lead them to focus on results that align with their internal narratives rather than thoroughly exploring alternative models. This is particularly relevant in policy-driven research, where researchers who are deeply invested in specific policy outcomes may inadvertently amplify ideological biases.
While the ongoing revolution in Artificial Intelligence (AI) reduces the cost of conducting research, it also introduces new risks of embedding biases in algorithms. At the same time, AI offers opportunities to detect ideological bias, potentially improving the objectivity and credibility of future research.