
IZA Newsroom

IZA – Institute of Labor Economics


A data tax for a digital economy

October 23, 2018 by Mark Fallak

The discussion of how to tax the digital economy is in full swing, as established principles such as "economic allegiance", value-added, and sales taxes prove inadequate for taxing companies headquartered in one country whose profits come from offering online services to users across the globe.

Where does a company consume publicly provided goods, and hence where should taxes on it be applied? What is a suitable definition of sales tax when the users of a data-driven internet platform pay no monetary fees? And what is a suitable definition of value-added when users are not paying customers of a platform, yet their transaction data on the platform is harvested for profit?

Over the past year, German Chancellor Angela Merkel and Andrea Nahles, leader of the German Social Democratic Party, have mentioned planned reforms that would "tax big data". While Merkel's remarks appear to imply monetary taxes, Nahles' remarks come closer to a data tax in the form of opening up data as a common good, very much like a proposal in a 2015 IZA World of Labor article describing the properties of Google Trends data:

“… governments will have to encourage or even legislate for some kind of corporate good practice (for example, in the form of a data tax) to motivate firms with large amounts of data in their proprietary silos to open up the data in aggregate form for the benefit of society, while also protecting their legitimate corporate interests and privacy concerns”.

Nahles proposes "data-for-all legislation" to force a digital firm to open "an anonymized, representative sample of their data" as soon as "the firm's market share exceeds a certain limit for a certain amount of time". The idea is intriguing, and it is long overdue for policy makers to face up to the data revolution. While the data part of Nahles' proposal comes close to a "data tax", the trigger mechanism is reminiscent of the "NBA draft lottery", which strengthens weaker competitors.

Digitization creates online markets because ICT optimizes the core function of markets: the matching of supply and demand. So-called "network effects" are often essential in optimizing this matching. For example, in order to match passengers to taxis, Uber analyzes Big Data to predict when and where demand will occur. This is associated with winner-take-all phenomena, such as Google dominating the information market, Facebook the private social media market, LinkedIn the professional social media market, and so on.

Digital firms in these cases can be seen as "data refineries" that turn raw data drawn from a country's human capital (which could be seen as a public good, very much like roads and bridges) into profitable services. It is therefore natural to think about taxing them, but it may be more important that they pay a "data tax" in the form of opened data, on top of which others might build competing, complementary, new or public-good services. Google's Trends data might serve as a prototype.

While the search and click microdata remain proprietary (as one might argue they should), aggregate data derived from those sources (e.g., Google Trends) can help analyze, understand and predict socioeconomic phenomena. This is clearly a valuable public good, which in a further twist shows why Google's self-imposed data-taxing is a clever move: the knowledge derived from academic research with Google Trends data can be used by the proprietor to improve its search algorithm. In a data-dominated economy we need more such intelligence, and a properly designed data tax is a strong candidate as a mechanism to encourage it.

Filed Under: Opinion Tagged With: data tax, digitalization, globalization, Internet

The Tyranny of the Top Five

October 20, 2018 by Mark Fallak

Too often in economics, where you publish can be more important than what you publish.

That’s the theory explored in a new study co-authored by Nobel-winning IZA fellow James J. Heckman and Sidharth Moktan. The University of Chicago scholars found that tenure and prize committees often base decisions on how often candidates publish in “top five” journals in the field. That practice not only concentrates career advancement into the hands of a select set of editors—many of whom are long-serving—but does so at the expense of innovative economic research.

“Relying on publication counts in ‘top-ranked’ journals encourages crass careerism among young economists,” Heckman said. “It diverts their attention away from basic research toward blatant strategizing about lines of research and favored topics of journal editors with long tenures.

“Relying on rankings rather than reading to promote and reward young economists subverts the essential process of assessing and rewarding pathbreaking original research.”

The “top five” refers to the leading economic journals most crucial to the academic and professional success of young scholars: The American Economic Review, Econometrica, the Journal of Political Economy, The Quarterly Journal of Economics and The Review of Economic Studies. These journals are chosen by a process that weighs citation counts to all papers in the journal; in other words, it judges a paper by the company it keeps.

The IZA discussion paper by Heckman and Moktan found that scholars who have written one “top five” article are 90 percent more likely to receive tenure in a given year. Those figures balloon to 260 percent for two such articles and 370 percent for three.
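To see what those relative figures imply in practice, consider a small back-of-the-envelope sketch in Python. The relative increases come from the paper as quoted above; the 5 percent baseline annual tenure probability is a hypothetical assumption for illustration only, not a figure from the study.

```python
# Reported relative increases in the annual probability of receiving tenure,
# keyed by the number of "top five" articles (from the quoted figures).
relative_increase = {1: 0.90, 2: 2.60, 3: 3.70}

def tenure_probability(baseline, n_articles):
    """Annual tenure probability scaled by the reported relative increase."""
    factor = 1.0 + relative_increase.get(n_articles, 0.0)
    return min(baseline * factor, 1.0)

baseline = 0.05  # assumed 5% annual baseline probability (hypothetical)
for n in range(4):
    print(n, "articles ->", round(tenure_probability(baseline, n), 3))
```

Under that assumed baseline, a single "top five" article nearly doubles the annual chance of tenure, and three such articles almost quintuple it.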

“Both junior and senior scholars often bring up the top five when they’re evaluating somebody,” Moktan said. “Even if it’s in a casual conversation, they’ll say, ‘Oh, how many ‘top fives’ do they have?'”

Gauging the role of bias

Writing for a “top five” journal involves more than just producing the best possible piece of research, according to Heckman and Moktan. Their study argues that to optimize chances for placement, scholars are incentivized to tailor their work for individual editors—who, consciously or not, are guided by their own biases.

In addition to tracking tenure rates, Heckman and Moktan's study tracks author affiliations in "top five" journals from 2000 to 2016. For example, Heckman sits on the editorial board of the Journal of Political Economy, which is published by the University of Chicago Press. Perhaps unsurprisingly, it drew 14.3 percent of its articles over that period from authors connected to the University of Chicago. The Quarterly Journal of Economics, which is edited at Harvard University's Department of Economics, drew nearly a quarter of its articles (24.7 percent) from its own affiliates, plus another 13.9 percent from Massachusetts Institute of Technology affiliates.

In contrast, the Review of Economic Studies, which has a higher rate of turnover on its editorial board, has much weaker ties to individual universities. From 2000 to 2016, the publication was most strongly connected to New York University and Northwestern University affiliates, from whom it drew 7.3 percent and 7.0 percent of articles, respectively.

The scope of the problem expands when universities use “top five” journals as a proxy for determining tenure. No longer is this simply an issue of who gets published in certain journals, the authors claim; rather, flaws in the editorial process are amplified into career hurdles—ones that can be difficult for outsiders to surmount without connections to “top five” editors and the referees they select.

The “top five” also don’t hold a monopoly on high-quality work. According to Heckman and Moktan, some of the most influential work in economics is published by other journals. Although the “top five” articles produce more citations on average, those numbers are skewed by outliers. Moreover, the senior scholars who rely on the “top five” to judge their colleagues often do not publish in those journals themselves once they are tenured. Relying on the journals, Heckman said, instills caution rather than creativity in young scholars anxious to gain tenure.

The “top five” journals incentivize scholars to focus on follow-up and replication work—research that is easy to assess for immediate publication, but which does not advance the frontiers of economic science. Often, the sorts of innovative projects that would challenge accepted ideas are too long or data-intensive to fit into the format of “top five” journals.

“Research is inherently risky, because you’re trying to find answers to questions that have not been solved,” Moktan said. “Sometimes the answers aren’t exciting. But serious assessments require senior scholars to read papers and understand them, and why that line of research is important.”

Potential solutions

Heckman and Moktan suggest that tenure committees devote more resources to closely reading published and unpublished papers, rather than relying on journal reputation as a substitute for careful reading. That method could prompt each individual institution to pursue more unconventional research instead of leaning on papers funneled through “top five” publications. Expanding the pool of influential publications beyond five journals could ameliorate the problem too.

They also suggest a more radical solution: shifting away from conventional journals in favor of open-source formats such as arXiv and PLOS ONE, which are used in the hard sciences. Such a change would offer scholars an opportunity to share their ideas earlier and get peer review in real time—an approach that might be more welcoming to out-of-the-box ideas.

“The current system of publication and reward does not encourage creativity,” Heckman said. “It delays the publication and dissemination of new ideas. It centralizes power in the hands of a small group of editors, prevents open discussion and stifles dissent and debate. It needs to be changed.”

Editor’s note: This is a slightly edited version of a post by University of Chicago News.

Filed Under: Research Tagged With: academic career, economic research, economics journals, editors, open access, tenure

Education as a source of inequality

October 17, 2018 by Mark Fallak

How should we organize our educational systems in the face of a rapidly changing labor market? Parents and students make educational decisions against a backdrop of changing skill requirements, and new labor market policies have potentially unintended consequences for those decisions. School choice, teacher efficiency, and school financing were among the topics discussed at the 3rd IZA Workshop on the Economics of Education, which brought together 21 international scholars to present their research at IZA in Bonn.

A keynote speech by Susan Dynarski highlighted the current knowledge on how inequality and educational outcomes are interrelated. While much has changed for the better over the past decades, socioeconomic background remains an important determinant of student success – at every stage of education. Informational and financial constraints are only slowly being overcome, although recent research points to promising solutions and interventions.

For example, Andres Barrios Fernandez presented work demonstrating that when older neighbors or siblings receive student loans, the probability that younger neighbors or siblings go to university increases. The effect appears to work through reduced informational disadvantages about loan eligibility and the application process, which highlights the scope for spillovers from supporting individuals in poor neighborhoods through the college application process.

Timothy N. Bond analyzed how teacher performance pay, which links salaries to measurable increases in student performance, improves the longer-term labor market outcomes of students exposed to such programs. Cohorts with more students taught by performance-paid teachers are more likely to graduate from high school and earn higher wages as adults. The effect appears to be driven especially by primary schools with a higher fraction of disadvantaged students, providing a direct link between teacher performance pay and inequality.

View the full conference program.

Filed Under: Research

Challenging the use of the twin instrument in the social sciences

October 11, 2018 by Mark Fallak

By Sonia Bhalotra and Damian Clarke

Twins have intrigued humankind for more than a century. Twins are not as rare as we may think: 1 in 80 live births and hence 1 in 40 newborns is a twin, and the trend is upward. In behavioral genetics, demography and psychology, monozygotic twins are studied to assess the importance of nurture relative to nature. In the social sciences, twin births are also used to denote an unexpected increase in family size which assists causal identification of the impact of fertility on investments in children and on women’s labor supply. A premise of studies that use twin differences or the twin instrument is that twin births are quasi-random and have no direct impact (except through fertility) on the outcome under study.
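The step from "1 in 80 live births" to "1 in 40 newborns" follows because each twin birth produces two babies. A quick sketch makes the arithmetic explicit (the cohort size is arbitrary; only the 1-in-80 rate comes from the text):

```python
# A twin birth yields two newborns, so the share of newborns who are twins
# is roughly double the twin-birth rate among births.
births = 80_000
twin_births = births // 80          # 1 in 80 live births is a twin birth
singleton_births = births - twin_births

twins = 2 * twin_births             # each twin birth produces two babies
newborns = singleton_births + twins

print(twins / newborns)             # close to 1/40
```

Strictly, the share is 2/81 rather than exactly 1/40, since twin births also add to the denominator, but "roughly 1 in 40 newborns" is the right order of magnitude.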

Our recent IZA discussion paper (forthcoming in the Review of Economics and Statistics) presents new population-level evidence that challenges this premise. Using almost 17 million births in 72 countries, we show that the likelihood of a twin birth varies systematically with maternal condition. In particular, our estimates establish that mothers of twins are selectively healthy. We document that this association is meaningfully large and widespread – it is evident in richer and poorer countries, and it holds for sixteen different markers of maternal condition, including health stocks and health conditions prior to pregnancy (height, obesity, diabetes, hypertension, asthma, kidney disease, smoking), exposure to unexpected stress in pregnancy, and measures of the availability of medical professionals and prenatal care.

We also show a positive association between the chances of having twins and health-related behaviors in pregnancy (healthy diet, smoking, alcohol, drug consumption), although we do not rely upon this, because behaviors in pregnancy may reflect a response to the mother's knowledge that she is carrying twins.

Differences between mothers of twins and singletons

Previous research has documented that twins have different endowments from singletons, for example, twins are more likely to have low birth weight and congenital anomalies. We focus not on differences between twins and singletons but rather on differences between mothers of twins and singletons, which indicate whether occurrence of twin births is quasi-random. It is known that twin births are not strictly random, occurring more frequently among older mothers, at higher parity and in certain races and ethnicities, but as these variables are typically observable, they can be adjusted for. Similarly, it is well-documented that women using artificial reproductive technologies (ART) are more likely to give birth to twins but ART-use is recorded in many birth registries, and so it can be controlled for and a conditional randomness assumption upheld.

The reason that our finding is potentially a major challenge is that maternal condition is multi-dimensional and almost impossible to fully measure and adjust for. To take a few examples, fetal health is potentially a function of whether pregnant women skip breakfast, whether they suffer bereavement during pregnancy, or whether they are exposed to air pollution.

Our underlying hypothesis is that twins are more demanding of maternal resources than singletons and, as a result, conditions that challenge maternal health are more likely to result in miscarriage of twins than of singletons. We discuss the role of alternative mechanisms including non-random conception and maternal survival selection. We provide evidence in favor of the selective miscarriage mechanism using US Vital Statistics data for 14 to 16 million births.

Selective miscarriage is similarly the mechanism behind the stylized fact that weaker maternal condition is associated with a lower probability of male birth. We confirm this in our data, showing that twin births are more likely to be female.

Controlling for maternal health conditions

Our findings add a novel twist to a recent literature documenting that a mother’s health and her environmental exposure to nutritional or other stresses during pregnancy influence birth outcomes, with many studies documenting lower birth weight. If birth weight is the intensive margin, we may think of miscarriage as an extensive margin response, or the limiting case of low birth weight.

Our findings have implications for research that has exploited the assumed randomness of twin births. No previous study has attempted to control for maternal health conditions or behaviors. Studies using twins to isolate exogenous variation in fertility will tend to under-estimate the impact of fertility on parental investments in children, and on women’s labor supply if selectively healthy mothers invest more in children post-birth, and are more likely to participate in the labor market.

This is pertinent as it could resolve the ambiguity of the available evidence on the impacts of fertility. In particular, recent studies using the twin instrument challenge a long-standing theoretical prior in rejecting the presence of a quantity-quality (QQ) fertility trade-off in developed countries, but our estimates suggest that this rejection could in principle arise from ignoring the positive selection of women into twin birth. Similarly, research using the twin instrument tends to find that additional children have relatively little influence on women’s labor force participation. But, again, these estimates are likely to be downward biased.

The results of studies in Economics, Psychology, Education and Biology that instead exploit the genetic similarity of twins will not be biased but will tend to have more restricted external validity than previously assumed.

Filed Under: Opinion, Research Tagged With: fertility, maternal health, natural experiment, randomness, social sciences, twin instrument, twins, validity

Did the Internet displace social capital?

October 4, 2018 by Mark Fallak

Starting with Adam Smith, economists have long pondered the role of networks, values, civic engagement and trust – often grouped together under the umbrella concept of social capital – in economic activity. Many studies show that countries and regions with low levels of social capital tend to lag in development and growth. Indicators of social capital, however, have reportedly been declining in high-income countries in recent decades, especially with regard to civic engagement and political participation.

In his bestseller Bowling Alone, Robert Putnam suggested that television and other forms of domestic entertainment such as video games probably replaced relational activities in individuals’ leisure time. If television, a unidirectional mass medium, can displace social capital, it stands to reason that the Internet, which provides on-demand content and allows for interactive communication, might induce an even more powerful substitution effect.

Despite the economic relevance of social capital and the pervasiveness of the Internet, only a few studies in economics have empirically analysed the impact of Internet use on social capital. Does the time we spend online displace civic engagement and political participation? Is the Internet weakening our social ties, making us less connected than before? How does an economy's social capital react to the development and spread of new information and communication technologies?

A new IZA discussion paper by Andrea Geraci, Mattia Nardotto, Tommaso Reggiani and Fabio Sabatini answers these questions using new data from the UK. The authors study how the introduction of high-speed Internet affected the social capital of Britons. This is a tricky issue to deal with because endogenous sample selection and treatment assignment make it difficult to establish whether broadband penetration and social capital are connected by a causal relationship or just spuriously correlated.

For example, the purchase of a fast Internet connection and aspects of social capital such as civic engagement may be codetermined by unobservable personality traits. Reverse causality is also at stake, as more socially active individuals may have a stronger propensity for using the Internet as a tool to preserve and extend their offline relationships.

The broadband infrastructure

To overcome these problems, the authors match information about the topology of the UK telephone network – including the geolocation of its nodes and of the blocks served by each of them – with geocoded longitudinal data from the British Household Panel Survey (BHPS). The resulting dataset allows them to calculate the distance between each BHPS respondent's telephone line and the node of the voice network serving it. This distance was a key determinant of access to fast Internet in the early years of broadband penetration.

Until the second half of the 2000s, broadband Internet was mostly based on the digital subscriber line (DSL) infrastructure, which transmits data over the old copper telephone wires. However, connection speed decays rapidly with the distance between a user's telephone line and the node of the network serving the area, also called the "local exchange" (LE).

When the network was designed in the 1930s, the length of the copper wire connecting houses to the LE (the "local loop") did not affect the quality of voice communications. The introduction of DSL technology in the 1990s, however, unpredictably turned distance from the LE into a key determinant of the availability of fast Internet, thereby creating exogenous discontinuities in broadband penetration. Proximity to the respective node of the network thus translated into access to fast Internet, while more distant dwellings were de facto excluded from broadband.
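The mechanism can be sketched in a few lines of Python. The numbers below are stylized illustrations, not figures from the paper or from any DSL standard: real attenuation depends on wire gauge, frequency and noise, but the qualitative point is simply that speed falls with local-loop length until service becomes unusable.

```python
# Stylized sketch of why local-loop length mattered: DSL speed decays with
# the length of the copper wire, so distant households were effectively
# cut off. Parameters are illustrative assumptions.
def dsl_speed_mbps(loop_km, max_speed=8.0, cutoff_km=5.0):
    """Rough linear attenuation; real DSL decay is more complex."""
    if loop_km >= cutoff_km:
        return 0.0
    return max_speed * (1.0 - loop_km / cutoff_km)

for d in (0.5, 2.0, 4.0, 6.0):
    print(f"{d} km -> {dsl_speed_mbps(d):.1f} Mbps")
```

Two otherwise identical households on either side of such a cutoff receive very different Internet service for reasons unrelated to their social behavior, which is exactly the exogenous variation the authors exploit.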

Broadband Internet and social capital

The authors’ results paint a complex picture. They find that, after the advent of the broadband in the area, several indicators of social capital started to decrease with proximity to the node of the network, suggesting that the exposure to fast Internet displaced some dimensions of social capital, but not all of them. There is no evidence that broadband access displaced routine interactions such as meetings with friends.

However, fast Internet crowded out forms of cultural consumption that are usually enjoyed in company, such as watching movies at the cinema and attending concerts and theatre shows. In addition, broadband penetration significantly displaced civic engagement and political participation, i.e., time-consuming activities that usually take place during leisure time, are not pursued in order to reach particularistic goals, and generally relate to a non-self-interested involvement in public affairs.

Associational activities have often been mentioned as a form of bridging social capital that creates positive societal and economic externalities, and the finding in this paper suggests an explanation for their reportedly declining trend.

The evolving nature of fast Internet use, however, certainly calls for further investigation, as social media have dramatically changed how the Internet is used. A more recent wave of Internet studies suggests that social media may also support collective action and political mobilization, especially in young democracies and authoritarian regimes, thereby providing a potentially positive contribution to the strengthening of political participation.

Other studies, on the other hand, highlight how the increasing importance of social media in the public discourse entails new systemic risks, connected to the propagation of misinformation, the extreme polarization of the political debate and the spread of online incivility. Future research should deal with these conflicting effects, also in light of the prominent role that a limited number of platforms, such as Facebook and Twitter, assumed in shaping the results of the 2016 US presidential election and of the Brexit referendum.

Filed Under: Research Tagged With: broadband internet, civic engagement, networks, political participation, social capital, Trust

Matching workers and jobs online

September 28, 2018 by Mark Fallak

Market transactions, including those in the labor market, increasingly take place online because information and communications technology naturally optimizes the main purpose of markets: the matching of supply and demand. At the same time, it seamlessly documents these transactions, so that studying and understanding markets depends heavily on access to such transaction data.

How to leverage the internet as a data source for social science, and labor economics in particular, is the main research mission of IDSC, IZA's research data center.

Organized by Nikos Askitas and Peter Kuhn, a two-day workshop brought together economists and computer scientists from academia and practice to showcase research with data from internet job boards, one of the main modes of matching facilitation in labor markets worldwide today.

Experimenting with job boards

Online job boards can be used to perform randomized controlled trials (RCTs) in a cost-efficient manner. RCTs are among the most rigorous methods to measure the effect of an intervention in a labor market setting, using treatment and control groups to improve measurement accuracy and reliability. Keynote speaker Michèle Belot and Robert Mahlstedt presented papers with RCTs involving the UK Universal Jobmatch website and Jobnet, the public website for all jobseekers and employers in Denmark, respectively. The first paper redesigned the standard job-search web interface by providing tailored advice and measuring the effect of the intervention, while the latter designed online tools aimed at improving the unemployed's understanding of the 2017 unemployment benefits reform in Denmark.

Signaling in the hiring process

When firms hire, they search for workers along a horizontal and a vertical dimension. The horizontal dimension involves the various skills required, while the vertical dimension involves the quality of the worker they are seeking. For hard skills the horizontal dimension is straightforward; the vertical dimension is harder to get a handle on. John Horton worked with data from the job board oDesk (now part of Upwork) to investigate whether matching between workers and firms improves, both in efficiency (number of applications until a match occurs) and in quality (hours worked after a match occurs), when employers signal along the vertical dimension the level they are willing to hire at (i.e., by revealing whether they seek Entry Level, Intermediate or Expert quality). The paper finds this to be the case, particularly at the lower end of the spectrum.

Corporate culture and firm performance

Stefan Pasch web scraped 550,000 employee reviews of a number of firms from glassdoor.com and using text analysis techniques constructed a measure of corporate culture for each firm. He then showed that “firms that differ strongly from the average culture of their industry show worse firm performance, supporting the hypothesis that a culture should fit to its business environment.” Moreover, he finds that “suboptimal culture choices can be partly explained by CEO characteristics, while regional culture only plays a minor role.”
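The general idea behind such a culture-distance measure can be illustrated with a small sketch. This is not Pasch's actual method, and all numbers are hypothetical: the sketch simply shows one common way to quantify how far a firm's text-derived culture profile lies from its industry's average, using cosine distance between topic-share vectors.

```python
import math

# Hedged illustration: represent each firm's reviews as a vector of
# culture-topic shares and measure its distance from the industry average.
def cosine_distance(a, b):
    """1 minus cosine similarity; 0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

firm = [0.2, 0.5, 0.3]          # hypothetical topic shares for one firm
industry_avg = [0.4, 0.4, 0.2]  # hypothetical industry average

print(round(cosine_distance(firm, industry_avg), 3))
```

A firm with a large distance from its industry average would, under the paper's hypothesis, be expected to show weaker performance.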

Finally, besides a number of other interesting presentations and consistent with the workshop’s aim to bring academics and practitioners together, noteworthy research and data were presented by Bledi Taska (Burning Glass Technologies), Kristin Keveloh (LinkedIn) and Martha Gimbel (Indeed Hiring Lab).

For a list of all presented papers, see the workshop program. The second edition of this workshop will take place on September 21-22, 2019, in Bochum, Germany, in cooperation with the Center for Advanced Internet Studies.

Filed Under: IZA News, Research Tagged With: Internet, job search, matching, online job boards

