Solving "More Research Needed"
Suppose we think there should be more research on some topic: asteroid deflection, the efficacy of social distancing, building safe artificial intelligence, etc. How do we get scientists to work more on the topic?
One approach is to just pay people to work on the topic. Capitalism!
The trouble is, this kind of approach can be expensive. To estimate just how expensive, Myers (2020) looks at the cost of inducing life scientists to apply for grants they would not normally apply for. His research context is the NIH, the biggest funder of biomedical science in the US. Normally, scientists seek NIH funding by proposing their own research ideas. But sometimes the NIH wants researchers to work on a specific kind of project, and in those cases it uses a “request for applications” grant. Myers wants to see how big those grants need to be to induce people to change their research topics to fit the NIH’s preferences.
Myers has data on all NIH “request for applications” (RFA) grant applications from 2002 to 2009, as well as the publication history of every applicant. He measures how much of a stretch it is for a scientist to do research related to an RFA by computing the textual similarity between the RFA description and the abstract of each scientist’s most similar previously published article (more similar texts contain more of the same uncommon words). When we line up scientists from least to most similar to a given RFA, the probability they apply for the grant rises with similarity (figure below). No surprise there.
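The intuition behind that kind of text-similarity measure can be sketched with a toy TF-IDF calculation: weight each word by how uncommon it is, then take the cosine similarity between the RFA text and the scientist's closest past abstract. This is an illustrative sketch of the general technique, not Myers's actual implementation:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Weight each word by how uncommon it is across the documents, so that
    shared rare words drive similarity more than shared common ones."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(word for doc in tokenized for word in set(doc))
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}  # +1 keeps ubiquitous words from zeroing out
    return [{w: c * idf[w] for w, c in Counter(doc).items()} for doc in tokenized]

def cosine(u, v):
    dot = sum(weight * v.get(word, 0.0) for word, weight in u.items())
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def rfa_similarity(rfa_text, abstracts):
    """A scientist's similarity to an RFA: the similarity of their *most
    similar* previously published abstract to the RFA description."""
    vecs = tfidf_vectors([rfa_text] + abstracts)
    rfa_vec, abstract_vecs = vecs[0], vecs[1:]
    return max(cosine(rfa_vec, v) for v in abstract_vecs)
```

A scientist whose past abstracts share uncommon words with the RFA description scores close to 1; one with no overlapping vocabulary scores 0.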
Myers can also do the same thing with the size of the award. As shown below, scientists are more likely to apply for grants when the money on offer is larger. Again, no surprise there.
The interesting thing Myers does is combine all this information to estimate a tradeoff. How much do you need to increase the size of the grant in order to get someone with less similarity to apply for the grant at the same rate as someone with higher similarity? In other words, how much does it cost to get someone to change their research focus?
This is a tricky problem for a couple of reasons. First, you have to think about where these RFAs come from in the first place. For example, if some new disease attracts a lot of attention from both NIH administrators and scientists, maybe the scientists would have been eager to work on the topic anyway. That would overstate the willingness of scientists to change their research for grant funding, since they might not be willing to change absent this new and interesting disease. Another important nuance is that bigger funds attract more applicants, which lowers the probability any one of them wins. That would tend to understate the willingness of scientists to change their research for more funding. For instance, if the value of a grant increases ten-fold but the number of applicants increases five-fold, the expected value of applying has only doubled (I win only a fifth as often, but when I do win I get ten times as much). Myers provides some evidence that the first concern is not really an issue, and he explicitly models the second.
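The dilution arithmetic in that example is simple enough to check directly. A toy sketch, assuming for illustration that every applicant is equally likely to win:

```python
def expected_value(grant_size, n_applicants):
    # With equally likely winners, expected payoff = win probability * prize.
    return grant_size / n_applicants

base = expected_value(500_000, 10)       # $50,000 expected payoff
bigger = expected_value(5_000_000, 50)   # grant is 10x bigger, but 5x the applicants
print(bigger / base)                     # -> 2.0: the expected payoff only doubles
```

So a ten-fold increase in the sticker price of a grant can translate into a much smaller increase in the expected payoff to any individual applicant, which is why modeling application behavior matters.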
The upshot of all this work is that it’s quite expensive to get researchers to change their research focus. In general, Myers estimates getting one more scientist to apply (i.e., one whose research is more dissimilar to the RFA than any current applicant’s, but more similar than that of those who didn’t apply) requires increasing the size of the grant by 40%, or nearly half a million dollars over the life of the grant!
Given that price tag, maybe a better approach is to try and sell scientists on the importance of the topic you think is understudied. Academic scientists do have a lot of discretion in what they choose to study; convince them to use it on the topic you think is important!
The article “Gender and what gets researched” looked at some evidence that personal views on what’s important do affect what scientists choose to research: women are a bit more likely to do female-centric research than men, and men who are exposed to more women (when their schools go coed) are more likely to do gender-related research. But we also have a bit of evidence from other domains that scientists do shift priorities to work on what they think is important.
Perhaps the cleanest evidence comes from Hill et al. (2021), which looks at how scientists responded to the covid-19 pandemic. In March 2020, it became clear to practically everyone that more information on covid-19 and related topics was the most important thing in the world for scientists to work on. The scientific community responded: by May 2020 and through the rest of the year, about 1 in every 20-25 papers published was related to covid-19. And I don’t mean 1 in every 20-25 biomedical papers - I mean all papers!
This was a stunning shift by the standards of academia. For comparison, consider Bhattacharya and Packalen (2011), which looks at how biomedical research changed over the second half of the twentieth century. Bhattacharya and Packalen classify 16 million biomedical publications going all the way back to 1950, and look at the gradual changes in disease burden that arise due to the aging of the US population and the growing obesity crisis. As diseases associated with being older and more obese became more prevalent in the USA, surely it was clear that those diseases were more important to research. Did the scientific establishment respond by doing more research related to those diseases?
Sort of. As diseases related to the aging population become more common, the number of articles related to those diseases does increase. But the effect is a bit fragile - it disappears under some statistical models and reappears in others. Meanwhile, there seems to be no discernible link between the rise of obesity and research related to diseases more prevalent in a heavier population.
Further emphasizing the extraordinary pivot into covid-related research, most of this pivot preceded changes in grant funding. The NIH did shift to issuing grants related to covid, but with a considerable lag, leaving most scientists to do their work without grant support. As illustrated below, the bulk of covid-related grants arrived in September, months after the peak of covid publications (the NSF seems to have moved faster).
On the one hand, I think these studies do illustrate the common-sense idea that if you can change scientists’ beliefs about what research questions are important, then you can change the kind of research that gets done. But on the other hand, the weak results in Bhattacharya and Packalen (2011) are a bit concerning. Why isn’t there a stronger response to changing research needs, outside of global catastrophes?
I would point to two challenges to swift responses in science; these are also likely reasons why Myers (2020) finds it so expensive to induce scientists to apply for grants they would not normally apply for. Both reasons stem from the fact that a scientific contribution isn’t worth much unless you can convince other scientists it is, in fact, a contribution.
The first challenge with convincing scientists to work on a new topic is there need to be other scientists around who care about the topic. This is related to Akerlof and Michaillat (2018), who present a model where scientists’ work is evaluated by peers who are biased towards their own research paradigms. They show that if favorable evaluations are necessary to stay in your career (and transmit your paradigm to a new generation), then new paradigms can only survive when the number of adherents passes a critical threshold. Intuitively, even if you would like to study some specific obesity-related disease because you think it’s important, if you believe few other scientists agree, then you might choose not to study it, since it will be such a slog getting recognition. There’s a coordination challenge - without enough scholars working in a field, scholars might not want to work in that field. (This paper is also discussed in more detail here)
The second challenge is that, even if there is a critical mass of scientists working on the topic, it may be hard for outsiders to make a significant contribution. That might make outsiders reluctant to join a field, and hence slow its growth. We have a few pieces of evidence that this is the case. Hill et al. (2021) quantify research “pivots” by looking at the distribution of journals cited over a scientist’s career, and then measuring the similarity of the journals cited in a new article to the journals cited in the scientist’s publications from the preceding three years.
For example, my own research has been in the field of economics of innovation, and if I write another paper in that vein, it’s likely to cite broadly the same mix of journals I’ve been citing (e.g., Research Policy, Management Science, and various general economics journals). Hill and coauthors’ measure would classify this as a pivot close to the minimum of 0. I have also written about remote work, and that was a bit of a pivot for me; the work cited a lot of journals in fields I hadn’t cited up to that point (Journal of Labor Economics, Journal of Computer-Mediated Communication, but also plenty of economics journals). Hill and coauthors’ measure would classify this as an intermediate pivot, greater than 0 but a lot less than 1. But if I were to completely leave economics and write something on the biology of covid-19, I might not cite any journals I’ve ever cited before; that would be measured as the maximum pivot of 1. By this measure, most covid-related research involved a much bigger pivot than average.
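One way to sketch a pivot measure of this kind is one minus the cosine similarity between the citing-journal counts of the new paper and of the scientist's recent work. This is an illustrative sketch in the spirit of the measure, not Hill and coauthors' exact formula:

```python
import math
from collections import Counter

def pivot_size(new_paper_journals, recent_journals):
    """Pivot in [0, 1]: 0 if the new paper cites journals in the same mix as
    the scientist's recent work, 1 if there is no overlap at all.
    (A sketch of the idea; the published measure may differ in detail.)"""
    u, v = Counter(new_paper_journals), Counter(recent_journals)
    dot = sum(u[j] * v[j] for j in u)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return 1.0 - (dot / norm if norm else 0.0)
```

Citing exactly the journals you usually cite yields a pivot of 0; citing only journals you have never cited yields a pivot of 1; a mix lands in between.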
Hill and coauthors then look to see what is the probability a given paper is in the top 5% for citations received in its field and year. The greater the pivot of the paper, the less likely a paper is to be highly cited.
Arts and Fleming (2018) provides some additional evidence on the difficulty of outsiders making major intellectual contributions, but among inventors instead of academics. As a simple measure of inventors entering new fields, they look at patents that are given a technology classification that has never been given to the inventor’s previous patents. As with Hill et al. (2021), they find these patents tend to receive fewer citations. One thing I quite like about this paper, though, is that they also go beyond citations and look at alternative measures of the value of a patent, such as whether the inventor or assignee chooses to pay the renewal fees to keep the patent active. By this measure too, patents from outsiders are less valuable. (This paper is discussed a bit more here)
Science is pretty competitive and if it’s harder to do valuable work in a new field, then it may well be in any given scientist’s best interest to stay in their lane. But that can make it hard for the system overall to respond to changing research needs.
While the above barriers make it harder for new scientific fields to emerge, clearly it does happen. To close, let’s look at two factors that might make change easier.
First, if you can initially solve the coordination challenge of getting a critical mass of scholars to focus attention on a new topic, then you can create a new equilibrium where pivoting into that field can be in any given individual’s self-interest. Covid-19 provides just such an example of a new equilibrium. It is too early to learn much from citations to covid-19 research, but as an early indicator Hill and coauthors look at the journals where covid-19 research gets published. They assign each journal a score based on the historical probability an article published there becomes a top-5% most cited publication for its field and year. The figure below compares the size of the pivot for covid-related and non-covid related research to the historical hit rate of the journal that publishes it.
In blue (for non-covid research) and red (for covid-related research), we can see the same pivot penalty as we observed before; articles that involve a bigger research pivot are less likely to place in journals that tend to get highly cited. But the gap between these lines is also informative - because there seems to be a new consensus that covid-related research is so important, covid-related research tended to place in much better journals. Indeed, a pivot to covid measured at around 0.7 appears to have about the same likelihood of becoming highly cited as a non-covid paper that executes a minor pivot measured at around 0.3. All else equal it’s better to make a smaller pivot to covid-related research, but large pivots are not nearly as unattractive as they previously were.
Second, if career incentives constrain scientists to stay in their lane and avoid branching out into new topics, then changing those incentives might also help. Evidence here is more mixed though.
On the one hand, we have a well-known paper by Azoulay, Graff Zivin, and Manso (2011), which compared the recipients of Howard Hughes Medical Institute (HHMI) support to a control sample of early career prize winners in the life sciences. A key difference between these groups is that HHMI winners are relatively more insulated from the typical academic grant system: they receive at least 5 years of support that is not tied to any specific project, they face a relatively lax initial review, and they are typically renewed at least once for 5 more years (and if not renewed, they get two years of funding to help keep the lab open while they search for new grants). In contrast, someone getting support on an NIH grant would often receive just three years of funding, with a comparatively low probability of being renewed.
Azoulay and coauthors use the MeSH lexicon - a standardized set of keywords assigned to biomedical papers by experts - to show HHMI investigators are more likely to explore new research topics than the control group. The MeSH words assigned to their papers tend to be of a more recent vintage, and there tends to be less overlap between the MeSH words assigned to their post-HHMI support papers and their pre-HHMI support papers when compared to the control group. That suggests being (somewhat) freed from the need to secure grant support via normal channels gives scientists the autonomy to explore new fields.
On the other hand, though, we have a 2018 paper by Brogaard, Engelberg, and Van Wesep which looked at another set of career incentives that insulates a scientist from the pressure to conform: tenure. Brogaard, Engelberg, and Van Wesep look at 980 economists who at one point belonged to a top 50 economics or finance department between 1996 and 2014, and who were granted tenure by 2004. They track down these economists’ complete publication records in order to assess how they publish before they get tenure and in the ten years after.
For our purposes, one of their most interesting results is about whether economists use the autonomy of tenure to branch out into new journals or new areas. Unfortunately, Brogaard, Engelberg, and Van Wesep find no evidence that they do. Indeed, in some versions of their statistical tests, economists are slightly less likely to branch out after receiving tenure. And this isn’t just because economists take a breather for a year or two after getting tenure. This effect persists.
This leaves us in a bit of a muddle. For elite scientists, HHMI style support seems to have encouraged them to branch out and try new things. But for economists, the insulation of tenure did not. Is that because tenure is different from HHMI support, or because economists are different from biomedical scientists, or because elite scientists are different from everyone else? We don’t know.
Until we do, though, I think we can say a few things with confidence. First, the direction of research does respond to money and to perceptions of intrinsic value. But it doesn’t appear to be super responsive. That might be because of some incentives scientists face to stay focused: fields may need a critical mass of sympathetic peers before it is individually rational to enter them, and even when a critical mass exists, it is challenging for outsiders to do top work in them, at least initially. Building a new field is probably pretty hard for these reasons; but if you can get the ball rolling, it’s also possible that it can continue going on its own momentum.
But more research is needed.
New articles and updates to existing articles are typically added to this website every two weeks. To learn what’s new on New Things Under the Sun, subscribe to the newsletter.
Myers, Kyle. 2020. The Elasticity of Science. American Economic Journal: Applied Economics 12(4): 103-34. https://doi.org/10.1257/app.20180518
Hill, Ryan, Yian Yin, Carolyn Stein, Dashun Wang, and Benjamin F. Jones. 2021. Adaptability and the Pivot Penalty in Science. SSRN Working Paper. https://dx.doi.org/10.2139/ssrn.3886142
Bhattacharya, Jay, and Mikko Packalen. 2011. Opportunities and benefits as determinants of the direction of scientific research. Journal of Health Economics 30(4): 603-615. https://doi.org/10.1016/j.jhealeco.2011.05.007
Akerlof, George A., and Pascal Michaillat. 2018. Persistence of false paradigms in low-power sciences. PNAS 115(52): 13228-13233. https://doi.org/10.1073/pnas.1816454115
Arts, Sam, and Lee Fleming. 2018. Paradise of Novelty - or Loss of Human Capital? Exploring New Fields and Inventive Output. Organization Science 29(6): 1074-1092. https://doi.org/10.1287/orsc.2018.1216
Azoulay, Pierre, Joshua S. Graff Zivin, and Gustavo Manso. 2011. Incentives and creativity: evidence from the academic life sciences. The RAND Journal of Economics 42(3): 527-554. https://doi.org/10.1111/j.1756-2171.2011.00140.x
Brogaard, Jonathan, Joseph Engelberg, and Edward Van Wesep. 2018. Do Economists Swing for the Fences after Tenure? Journal of Economic Perspectives 32(1): 179-94. https://doi.org/10.1257/jep.32.1.179