Some evidence the scientific incentive system works
Citation is the currency of science. New scientific discoveries build on the work that came before, and scientists “purchase” the support of that earlier work by citing it. Scientists, in turn, hope their own work will prove useful to others and earn citations of its own. The number of citations received over a lifetime becomes a proxy for how much the scientific community values your work, and that recognition is what scientists prize.
At least, that’s a simplified version of the story David Hull tells in Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science. It’s an elegant story. The pursuit of citations directs effort towards problems whose solutions will be most useful to the production of new knowledge, and incentivizes scientists to promptly disclose their results and make them accessible. When it works, the system accelerates discovery.
But it can also go wrong. It’s a self-referential system: what’s valuable is what gets cited, and what gets cited is what helps you generate citations. If your discipline has a lot of people debating how many angels can dance on the head of a pin, then debating that topic becomes the way you get citations. In short, if citations are the currency of science, then the citation market is one where bubbles are theoretically possible.
That’s essentially the charge leveled against theoretical fundamental physics in Sabine Hossenfelder’s Lost in Math: How Beauty Leads Physics Astray. Hossenfelder argues that theoretical fundamental physics is stuck in a quagmire because the field has become so far removed from experimental testing that aesthetic criteria now guide theory development. Beautiful work is what gets cited, and so people work hard to develop beautiful theories, whether or not they’re true.
Similar charges have been leveled against the field of economics. In a common version of the argument, it’s an insular field obsessed with writing complicated mathematical models based on a caricature of human psychology, with the purpose of “proving” the free market is best. But if this is how academic economics really works, then to get citations in economics you need to play along, whether or not this style of work generates true knowledge.
Angrist et al. (2020) test this story by looking at how economics research is cited by other fields. If economics research doesn’t point towards any universal truths, it’s less likely to be useful to other fields - psychology, sociology, political science, computer science, statistics, etc. - which do not share economics’ supposed ideological slant. To check this idea, they allocate journals to different disciplines based on how frequently the articles they publish are cited by flagship field journals. They then look at the share of citations those articles make to economics and other social sciences.
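As a toy illustration of this allocation step, one could assign each journal to the discipline whose flagship journals cite it most often. The journal names and citation counts below are made up for illustration; this is a sketch of the idea, not the authors’ actual procedure.

```python
# Hypothetical counts: how often each journal's articles are cited by
# the flagship journals of each field (made-up numbers).
citations_from_flagships = {
    "Journal X": {"economics": 120, "sociology": 15, "psychology": 5},
    "Journal Y": {"economics": 10, "sociology": 90, "psychology": 40},
}

# Assign each journal to the discipline whose flagship journals cite it most.
allocation = {
    journal: max(counts, key=counts.get)
    for journal, counts in citations_from_flagships.items()
}
print(allocation)  # {'Journal X': 'economics', 'Journal Y': 'sociology'}
```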
(Here’s an example: the weighted share of citations from different social sciences, to economics (in black), sociology (in red), and other fields)
Fortunately, they find the utility of economics research to other fields has been improving and in many cases looks pretty favorable relative to other social sciences (for more on why this might be, see this article). For example, political science and sociology articles are more likely to cite economics research than other social sciences, as are articles in finance, accounting, statistics, and mathematics. In psychology and computer science, economics in recent years is more or less tied for first with another social science field. In other fields, citations to economics are lower than to other social sciences, but it would be surprising if that weren’t the case at least some of the time.
Even more encouragingly, it looks like chasing citations in economics is at least partially aligned with producing work that is useful outside the field. Angrist and coauthors run a regression predicting citations from outside economics to a particular economics article as a function of article characteristics, including the number of citations an article receives from other economics papers. A 10% increase in citations from economics papers is associated with a 6% increase in citations from outside economics. Papers that get cited a lot in economics are also more likely to get cited by non-economics fields.
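The 10%-to-6% relationship is an elasticity, i.e. the slope of a regression of log external citations on log within-economics citations. A minimal sketch with simulated data (not the authors’ data), where the true elasticity is set to 0.6 by construction:

```python
import random

random.seed(0)

# Simulated data (NOT the authors' data): log citation counts generated
# with a true elasticity of 0.6, so a 10% rise in within-economics
# citations comes with roughly a 6% rise in outside-economics citations.
n = 5_000
log_internal = [random.gauss(3.0, 1.0) for _ in range(n)]
log_external = [0.6 * x + random.gauss(0.0, 0.5) for x in log_internal]

# The OLS slope in a log-log regression is the elasticity.
mean_x = sum(log_internal) / n
mean_y = sum(log_external) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(log_internal, log_external))
var = sum((x - mean_x) ** 2 for x in log_internal)
elasticity = cov / var
print(f"estimated elasticity is roughly {elasticity:.2f}")
```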
So the citation market in economics is not purely a bubble. But what about science more generally?
Patents, though frustratingly imperfect, provide one way of assessing the value of science to non-scientists. Marx and Fuegi (2020) use text processing algorithms to match scientific references in US and EU patents to data on scientific journal articles in the Microsoft Academic Graph. The average number of citations per patent to scientific journal articles has grown rapidly, from essentially zero in 1980 to four today.
This is a bit of an encouraging vote of confidence in science. But what do these citations really mean?
Watzinger and Schnitzer (2019) have a cool paper that suggests scientific research is a wellspring of new ideas that get transformed into technology, and that these connections are well proxied by citations. They build directly on Marx and Fuegi, but characterize patents’ dependence on science in a slightly more nuanced way.
They begin by assuming patents that directly cite scientific research depend on science the most. These patents are called “D = 1” patents, meaning the “distance” to science is just one citation. Patents that cite “D = 1” patents, but not science directly, are called “D = 2” patents, indicating their distance to science is two citations (one citation to a patent that, in turn, cites a scientific article). Patents citing “D = 2” patents, but no science or “D = 1” patents, are called “D = 3” patents, and so on. The idea is that the higher “D” is, the “farther” the patent is from relying on science: it’s a measure of how many links there are in the shortest citation chain between the patent and a cited scientific article. (This measure is based on another cool paper by Ahmadpoor and Jones 2017, discussed a bit more here.)
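Since “D” is just the length of the shortest citation chain from a patent to any scientific article, it can be computed with a breadth-first search over the citation graph. A sketch on a toy, entirely hypothetical graph:

```python
from collections import deque

# Toy citation graph (hypothetical IDs, not real patents): each patent
# maps to the items it cites; entries in SCIENCE are scientific articles.
SCIENCE = {"paper_A"}
cites = {
    "patent_1": ["paper_A"],               # cites science directly -> D = 1
    "patent_2": ["patent_1"],              # cites a D=1 patent     -> D = 2
    "patent_3": ["patent_2", "patent_4"],  # shortest chain via patent_2 -> D = 3
    "patent_4": [],                        # no chain to science    -> no D
}

def distance_to_science(patent):
    """Length of the shortest citation chain from `patent` to any science article."""
    seen, queue = {patent}, deque([(patent, 0)])
    while queue:
        node, d = queue.popleft()
        for ref in cites.get(node, []):
            if ref in SCIENCE:
                return d + 1
            if ref not in seen:
                seen.add(ref)
                queue.append((ref, d + 1))
    return None  # no citation chain reaches science

for p in sorted(cites):
    print(p, distance_to_science(p))
```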
Watzinger and Schnitzer then show patents with lower “D” tend to be higher value: the closer a patent is to science, the more valuable it tends to be.
To do this, they need a way to measure the value of patents. There are many approaches, but the one they use is based on a paper by Kogan et al. (2017). Essentially, the idea is to see what happens to a company’s stock price in a narrow three-day window around the day it is granted a patent. Under some assumptions, you can translate this into the market’s estimated value of the patent grant. Kogan et al. (2017) show this measure of patent value is correlated with a lot of other indicators of patent importance, and it has become a standard way to measure the value of patents in dollar terms.
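A deliberately simplified sketch of the idea: treat the firm’s abnormal stock return in the window around the grant as the market’s revaluation, and convert it to dollars via market capitalization. The actual paper also adjusts for anticipation and background return noise, and all numbers below are hypothetical.

```python
# Simplified sketch of the Kogan et al. (2017) idea (hypothetical numbers;
# the paper itself makes further adjustments for anticipation and noise).
def patent_value(market_cap, firm_return, market_return):
    abnormal_return = firm_return - market_return  # strip out market-wide moves
    return market_cap * abnormal_return            # dollar revaluation

# Hypothetical: a $10bn firm gains 0.5% vs. the market in the grant window.
value = patent_value(10_000_000_000, 0.012, 0.007)
print(f"${value:,.0f}")  # $50,000,000
```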
Watzinger and Schnitzer (2019) find patents with D = 1 are nearly $3mn (in 1982 dollars!) more valuable than similar patents in the same year and tech field with no connection to science! Patents with D = 2-3 are also more valuable, but the science premium declines in the way you would expect.
What is it about science that makes these patents so valuable? Watzinger and Schnitzer (2019) also scan the text of patent abstracts and look for new and unusual words - those that have not previously appeared in patent abstracts. They show these text-based measures of novelty are also associated with more value. Finally, they find patents closer to science are indeed more likely to introduce new and unusual words. Their interpretation is that science discovers new concepts, and that these concepts get spun into valuable new technologies.
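The novelty measure can be sketched as follows: walk through patent abstracts in chronological order and, for each, count the words that have never appeared in any earlier abstract. The abstracts below are made up for illustration.

```python
def count_new_words(abstracts):
    """For abstracts in chronological order, count first-appearance words."""
    seen = set()
    counts = []
    for text in abstracts:
        words = set(text.lower().split())
        counts.append(len(words - seen))  # words never seen in earlier abstracts
        seen |= words
    return counts

# Hypothetical abstracts, oldest first.
abstracts = [
    "a method for vulcanizing rubber",
    "a method for molding vulcanized rubber",  # only "molding", "vulcanized" are new
    "a transistor amplifier circuit",          # all three content words are new
]
print(count_new_words(abstracts))  # [5, 2, 3]
```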
So far, this is consistent with the notion that science generates knowledge that is useful for the purposes of invention, and that patents’ use of science is reasonably well proxied by citations. But it doesn’t yet resolve the question of whether the scientific incentive system encourages the creation of knowledge that is broadly useful, or merely knowledge useful to other scientists.
Poege et al. (2019) provide a useful check here. They look at the value of patents that cite scientific work and compare the value of the patents to the number of citations the cited article received from other articles (i.e., not from patents).
Patents are not applied for to earn citations, but rather to turn a profit. So patent-holders aren’t playing the citation game. If the citation market in science is a bubble, then ivory tower academics chase citations by arguing how many angels can dance on the head of a pin (or whatever the current fad is). Sometimes that generates useful knowledge, but the number of citations an article receives is basically unrelated to how useful the ideas are to those outside the ivory tower. In contrast, if the citation market is functioning well, it directs scientists to discover universal truths that can be generally applicable. In that case, highly cited work is more likely to be useful to those trying to invent new technologies.
Fortunately, Poege and coauthors find the most highly cited scientific papers are indeed more likely to be cited by patents.
(In the figure above, science quality is an article’s percentile rank for citations received in the three years after publication)
Moreover, patents that cite highly cited papers are themselves more valuable by various measures (such as how often the patent is cited, or how the stock price of the patent-holder changes when the patent is granted).
And patents that do not directly cite highly cited research, but do cite patents that cite such research, are also more valuable than patents with no citation chain connecting them to highly cited scientific research.
Together these studies appear to largely rule out the worst case scenario: the citation system seems to direct research towards useful ends, where usefulness is defined by utility to those outside the immediate research community.
Hull, David L. 1988. Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science. University of Chicago Press.
Hossenfelder, Sabine. 2018. Lost in Math: How Beauty Leads Physics Astray. Basic Books.
Angrist, Josh, Pierre Azoulay, Glenn Ellison, Ryan Hill, and Susan Feng Lu. 2020. Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship. Journal of Economic Literature 58(1): 3-52. https://doi.org/10.1257/jel.20181508
Marx, Matt, and Aaron Fuegi. 2020. Reliance on science: Worldwide front-page patent citations to scientific articles. Strategic Management Journal 41(9): 1572-1594. https://doi.org/10.1002/smj.3145
Watzinger, Martin, and Monika Schnitzer. 2019. Standing on the Shoulders of Science. CEPR Discussion Paper No. DP13766. https://ssrn.com/abstract=3401853
Kogan, Leonid, Dimitris Papanikolaou, Amit Seru, and Noah Stoffman. 2017. Technological Innovation, Resource Allocation, and Growth. The Quarterly Journal of Economics 132(2): 665-712. https://doi.org/10.1093/qje/qjw040
Poege, Felix, Dietmar Harhoff, Fabian Gaessler, and Stefano Baruffaldi. 2019. Science quality and the value of inventions. Science Advances 5(12) eaay7323. https://doi.org/10.1126/sciadv.aay7323