Do we produce too much research?
A week ago, I embarked on a project to explore why, even though there are more researchers and scientists in the world publishing more findings than ever, productivity growth in the economy is not accelerating. In a sense, more and more resources are devoted to research, yet the productivity of research is declining. One reason for this development could be that we simply produce too much research.
Academics today have much less freedom than fifty years ago. In an effort to hold academics accountable for their work, universities and other research institutions now measure both the number of papers a researcher publishes and the quality of those papers, where quality is usually proxied by the impact factor of the peer-reviewed journal a paper appears in. As a result, young academics who strive for a professorship aim to publish as many papers as possible in the most highly rated peer-reviewed journals. Publishing less, or publishing in lower-rated or non-peer-reviewed journals, becomes a waste of time and possibly a hindrance to career progression.
This has two unintended consequences. First, academics are human and thus have limited time and bandwidth to digest new research findings. The more papers are published in a field, the fiercer the competition for the attention of that field’s leading lights. On average, every new idea therefore has a lower probability of being vetted and reflected on by researchers in the field, and hence a lower probability of influencing the future direction of research.
The second unintended consequence is that as more papers are published, the quality of the vetting process declines. Peer reviewers at academic journals are usually professional researchers in the field who are not paid for their review work. This in itself is a major problem, and there is increasing evidence that the peer review process does not increase the quality of research. As the number of papers submitted for peer review grows, the reviewers become overloaded and have to resort to simple heuristics to assess the novelty and quality of new findings. The result is that papers building on existing canonical papers in a field are more likely to be published, while papers that criticise or even refute the consensus are more likely to be rejected. Over time, the field calcifies, rehashing the same ideas over and over again.
Johan Chu and James Evans analysed 90 million papers and 1.8 billion citations in major fields ranging from engineering and physics to computer science, chemistry and medicine, and found that both unintended consequences influence the progress of research. The chart below shows the trendline for the Gini coefficient of citations and the year-on-year correlation of citations as the number of papers in a field grows.
Changes in citation dynamics by the size of a research field
Source: Chu and Evans (2021)
In essence, the chart on the left (Gini coefficient) measures how unequal citations are in a field. The higher the Gini coefficient, the more unequal the attention to specific papers: a few highly influential ‘canonical’ papers get cited all the time while the rest are ignored. The chart on the right, on the other hand, measures how persistent citations are. Higher numbers mean that authors cite the same papers year after year, and new papers with new ideas have less and less influence.
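For concreteness, the left-hand measure can be sketched in a few lines of code. This is a minimal illustration, not Chu and Evans’ actual method, and the citation counts below are invented; it simply shows how attention concentrating on one canonical paper drives the Gini coefficient up:

```python
def gini(values):
    """Gini coefficient of non-negative counts (0 = perfectly equal, near 1 = maximally unequal)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula on the sorted values: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical citation counts for ten papers in two fields.
small_field = [12, 9, 11, 8, 10, 9, 12, 10, 11, 8]  # attention spread fairly evenly
large_field = [95, 2, 1, 0, 1, 0, 2, 1, 0, 0]       # one canonical paper dominates

print(round(gini(small_field), 2))  # → 0.08, low inequality
print(round(gini(large_field), 2))  # → 0.86, high inequality
```

The right-hand measure would then be the correlation of such per-paper citation counts between one year and the next: the closer it stays to 1 as a field grows, the more the same papers keep absorbing the attention.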
The result of these two trends is that research becomes less and less productive. There may be plenty of new ideas that could move science and technology forward, but they drown in the flood of ideas, good and bad, and are ignored by an establishment of high-profile academics in the field.
This is something I and many of my colleagues have experienced in finance and economics as well. Try to argue that modern portfolio theory or the CAPM is flawed and you will be rebuked by die-hard believers in these theories. Try to argue that monetarist theories of inflation are a dead horse not worth riding and you will meet resistance. Indeed, the current trend towards explaining inflation with a fiscal theory of the price level was, in my view, only possible because that theory uses the same techniques and tools as existing theories. But as I have argued here, these fundamental assumptions are flawed from the get-go, and as long as we use them, no theory will be able to explain inflation.
Unfortunately, while we can diagnose the problem with modern research efforts, finding a solution is much harder. We obviously can’t ask academics to produce less research, because that would reduce their chances of career progression. One alternative would be journals for failed experiments, where researchers could publish results that showed no significant effect where theory predicted one. One could also promote more journals that publish results outside the mainstream. Or one could do away with the peer review process altogether, since it has proven ineffective. After all, Albert Einstein and Isaac Newton never had any of their papers peer-reviewed, because peer review didn’t exist back then. Maybe open discussion in academies and on the internet is a better and more productive way to vet new ideas?
Who knows if we will be able to solve this challenge at all? For now, all we know is that research productivity seems to decline partially because we produce too much research.