Raise the cost of research, or reduce the rewards of it
“Everything that’s happening is coordinated by someone behind the scenes with one goal: to completely ruin scientific research.”
– Da Shi, in The Three-Body Problem by Liu Cixin
Most of the time, we think of innovation policy as a problem of how to accelerate desirable forms of technological progress. Broadly speaking, economists tend to lump innovation policy options into two categories: push and pull policies. Push policies try to reduce the cost of conducting research, often by funding or subsidizing research. Pull policies try to increase the rewards of doing research, for example by offering patent protection or placing advance orders. These have been extensively studied and while they’re not silver bullets I think we have a good evidence base that they can be effective in accelerating particular streams of technology.
But there are other times when we may wish to actively slow technological progress. The AI pause letter is a recent example, but less controversial examples abound. A lot of energy policy acts as a brake on the rate of technological advance in conventional fossil fuel innovation. Geopolitical rivals often seek to impede the advance of rivals’ military technology. (see the article When technology goes bad for more discussion)
Today I want to look at policy levers that actively slow technological advance, sometimes (but not always) as an explicit goal. I think we can broadly group these policies into two categories analogously to push and pull policies:
Reverse push (drag?): Policies that raise the costs of conducting research. Examples we’ll look at include restrictions on federal R&D funding for stem cell research, and increased requirements for making sure chemical research is conducted safely.
Reverse pull (barrier?): Policies that reduce the profits of certain kinds of innovation. We’ll look (briefly) at carbon taxes, competition policy, liability, and bans on commercializing research.
The fact that conventional push and pull policies appear to work should lead us to believe that their reverses probably also work; and indeed, that’s what most studies seem to find. But there are some exceptions as we’ll see.
Let’s start with two studies that have the effect of making it more expensive (in terms of time or money) to do certain kinds of research. Both these studies are going to proceed by comparing certain fields of science that are impacted by a new policy, to arguably similar fields that are not impacted by the policy. By seeing how the fields change relative to each other both before and after the new policy, we can infer the policy’s impact.
Let’s start with US restrictions on public funding for research involving human embryonic stem cells. The basic context is that in 1998, there was a scientific breakthrough that made it much easier to work with human embryonic stem cells. While this was immediately recognized as an important breakthrough for basic and applied research, a lot of people did not want this kind of research to proceed, at least if it was going to result in the termination (or murder, depending on your point of view) of human embryos. A few years later, George W. Bush (who was sympathetic to this view) won a closely fought US presidential election and in August 2001, a new policy was announced that prohibited federal research funding for research on new cell lines. Research reliant on existing cell lines was still eligible for funding, but since most of the existing cell lines were not valuable for developing new therapies, this restriction was more significant than it might naively seem. No restrictions were placed on private, state, or local funding of human embryonic stem cell research, but anyone who received funds for this kind of work would need to establish a physically and organizationally separated lab to receive federal funding for permissible research on existing lines.
To see how this policy change affected subsequent research, Furman, Murray, and Stern (2012) identify a core set of papers about human embryonic stem cell research and about RNAi, another breakthrough from the same year, also originating in the US, that was unaffected by the policy but perceived to be of similar scientific import. They then look at how citations to those core papers evolve over time, with the idea that a citation to one of these core papers is a (noisy) indication that someone is working on the topic. Because foreign scientists are unaffected by US policy, they also divide these citations into those coming from papers with US researchers and those without.
They estimate a statistical model predicting how many US and foreign citations a core paper in either topic receives, in each year, as a function of its characteristics. A key finding is illustrated in the following figure, which tracks the percentage change in citations from US-authored articles to human embryonic stem cell research, as compared to a baseline (which includes RNAi papers, and citations from foreign-authored articles). Prior to 2001, citations by US authors to papers on human embryonic stem cells were about 80% of a baseline, but the error bars were wide enough so that we can’t rule out no difference from baseline. Beginning in 2001 though (when the policy was announced), US citations to these papers dropped by a pretty noticeable amount - from roughly 80% of baseline to 40%.
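The logic of this kind of comparison can be sketched as a simple difference-in-differences calculation. To be clear, this is only an illustration of the idea, not the authors' actual statistical model (which predicts citation counts as a function of paper characteristics), and the citation counts below are invented:

```python
# Difference-in-differences sketch: hypothetical yearly counts of US citations
# to "treated" core papers (human embryonic stem cells) and "control" core
# papers (RNAi), before and after the 2001 funding restriction.
treated = {"pre": [100, 105, 98], "post": [60, 55, 58]}
control = {"pre": [120, 118, 125], "post": [122, 119, 121]}

def mean(xs):
    return sum(xs) / len(xs)

# The change over time within each group...
treated_change = mean(treated["post"]) - mean(treated["pre"])
control_change = mean(control["post"]) - mean(control["pre"])

# ...and their difference: the treated group's change, net of whatever
# common trends affected both fields.
policy_effect = treated_change - control_change
print(f"Estimated policy effect: {policy_effect:.1f} citations per year")
```

The actual paper estimates a regression version of this comparison, which lets the authors control for things like a paper's age and quality rather than just differencing group averages.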
Note, though: just three years later, in 2004, things may have been back to their pre-2001 levels. But the restrictions on federal research funding weren’t relaxed in 2004. So what’s going on?
We’ll return to this later. For now, let’s turn to another study that shows reverse push policies (of a sort) can exert a detectable influence on basic research. This time, we’ll look at a policy whose goal was not to reduce the amount of research, but instead to simply make sure it was done in a safer manner.
In 2008 Sheharbano (Sheri) Sangji died in a tragic UCLA chemistry lab accident involving flammable compounds. This incident and the subsequent criminal case for willful violation of safety regulations by the lab’s principal investigator and the Regents of the University of California galvanized a significant ratcheting up of safety regulations across US chemistry labs. For example, at UCLA, participants in lab safety classes rose from about 6,000 in 2008, to 13,000 in 2009 and 22,000 in 2012, while the number of safety inspections of labs rose from 1,100 in 2008, to 2,000 in 2009 and 4,500 in 2012. This was accompanied by an increase in laboratory safety protocols and more stringent rules for the handling of dangerous chemicals.
To see what impact the increase in safety requirements had on chemistry research, Galasso, Luo, and Zhu (2023) gather data on the publications of labs in the UC system. They end up with data on the publications of 592 labs, published between 2004 and 2017 (note they exclude the lab where Sangji worked). To assess the impact of more stringent safety regulations, they cut the labs into two different pairs of sub-samples, with one half of each pair more impacted by the policy and the other half less impacted.
First, they hire a team of chemistry PhD students to classify labs as “wet”, meaning equipped to handle biological specimens, chemicals, drugs, and other experimental materials, or “dry”, meaning not so equipped and likely doing computational or theoretical research (these comprise 14% of labs). We should expect safety requirements not to affect dry labs, and to affect wet labs only to the extent that they actually work with dangerous compounds. So, as a further test, Galasso and coauthors use data on the chemicals associated with lab publications to identify a small subset of labs that most frequently work with compounds classified as dangerous. Because they need a long time series prior to 2008 for this classification exercise, they can only apply this method to 42 labs, out of which they flag the 8 working most often with dangerous compounds.
Their main finding is that the impact of the increased safety requirements was pretty small. Indeed, comparing the publication output of wet labs and dry labs, there appears to be no detectable impact of the policy at all, even after adjusting for the quality of publications (using the number of citations received) or taking into account potential changes in the sizes of labs. The effects were not totally zero, though. When they zero in on labs using the most dangerous compounds, they find that after safety standards are ratcheted up, the most high-risk labs begin to publish about 1.2 fewer articles per year mentioning dangerous substances, as compared to less dangerous wet labs (labs publish an average of 7.7 articles per year in the sample). The reduction is most pronounced for articles mentioning flammable substances, or dangerous compounds that haven’t previously been used by any labs at UCLA. In other words, labs that work with a lot of dangerous compounds significantly dial down their use of those compounds where there is no other experienced lab on campus (perhaps one that could give them advice).
But that’s actually the only effect we can detect. We might expect the labs that use dangerous compounds the most would see a generalized reduction in their publication output. But that doesn’t seem to be the case. Compared to wet labs that do not as frequently use dangerous compounds, there was no statistically significant reduction in articles published per year. The policy may have redirected some labs away from work with some dangerous compounds, but it looks like these labs were able to do less dangerous research that resulted in the same number of publications.
So overall, the effects of these two reverse push policies are pretty small, though we can’t rule out that bigger interventions would have bigger effects. Restrictions on access to government funding lead to a temporary decline in work on the topic, and increased safety compliance regulation leads to some changes in the practices of the most impacted labs, with no apparent net impact on their research output.
Let’s turn our attention to reverse pull policies, which make certain kinds of technologies less profitable to invent. To the extent the private sector conducts R&D with the goal of making a profit, these policies will reduce the incentive to conduct certain kinds of R&D and slow technological advance.
The most well-known example of this is a carbon tax. Carbon taxes raise the price of carbon-emitting technologies, making them less desirable in the market relative to clean alternatives. Besides leading consumers and firms to purchase disproportionately more clean energy, relative to carbon-emitting energy, carbon taxes also disincentivize R&D related to fossil fuels.
There is an extensive literature on the effects of carbon taxes on innovation (indeed, one of my dissertation chapters was a model that looked at precisely this issue). I’ve covered some of this literature before in my post, Pulling more fuel efficient cars into existence, which looked specifically at papers that use high fuel taxes, high oil prices, and regulations on minimum fuel economy to assess the impact on innovation in vehicle fuel efficiency. A pretty robust finding is that all these things induce car companies to develop more fuel efficient cars. A fact that I didn’t emphasize in the post, but which also emerges from some of these papers, is that higher fuel prices are associated with less patenting of combustion engine technologies. That’s consistent with carbon taxes impeding fossil fuel innovation.
We can also turn to an extensive literature on the links between market outcomes and R&D in the biomedical sector. As discussed in Medicine and the limits of market-driven innovation, this literature also tends to find pretty reliably that when medical conditions become more profitable to treat, it induces more related R&D (though the effect on related fundamental science is pretty weak). Most of the papers covered in that post are about positive shocks to the profitability of different medical conditions, but a 2022 paper by Branstetter, Chatterjee, and Higgins looks at what happens when certain health categories become less profitable. When a particular health category has more generic sales relative to the sales of branded drugs (and hence is probably a less profitable category to enter), Branstetter and coauthors document a pretty strong and robust decline in the number of preclinical and phase 1 clinical trials that use new drug compounds in that category. That is consistent with the notion that policies that reduce the profitability of treating a particular health category would lower related R&D.
I want to highlight two other papers that study slightly more unusual settings that reduce the profitability of some kinds of R&D. First, let’s consider liability exposure. Innovation might be particularly sensitive to the extent of liability exposure because it’s hard to be confident about all the properties of a new technology. And we have a bit of evidence consistent with that.
On the positive side, one of the other papers discussed in Medicine and the limits of market-driven innovation is Finkelstein (2004), which examines three different policies that made certain kinds of vaccines more profitable to develop. One such policy was the 1986 creation of the Vaccine Injury Compensation Fund, which indemnified vaccine manufacturers from lawsuits relating to adverse effects for some specified vaccines. Finkelstein found that policies like this led to more late-stage clinical trials for vaccines that were now less risky to develop. (See the post for more on this and the Branstetter, Chatterjee, and Higgins paper.)
Galasso and Luo, two of the authors on the aforementioned paper on safety regulations, have another 2022 paper looking at the chilling impact of changing liability exposure on innovation, albeit via a more complicated path than in Finkelstein (2004). In the late 1980s and early 1990s, widespread problems associated with medical implants from the companies Vitek and Dow Corning emerged. Facing heavy litigation from affected patients, both companies filed for bankruptcy. Without the ability to seek damages from the companies that manufactured the implants, litigants turned their sights on the deep-pocketed suppliers of raw materials to these manufacturers: DuPont, Dow Chemicals, Corning, and others. To avoid liability risks, many major suppliers simply stopped selling materials to any manufacturer of medical implants as a general policy.
Galasso and Luo (2022) are interested in the knock-on effect of this sales boycott on innovation in medical devices, measuring innovation in two ways: successful applications to the FDA for regulatory approval, and patents. The basic idea is to identify different codes that either the FDA or the patent office attaches to applications, and then to see how the rate of new applications changes over time across implant and non-implant codes. For example, the FDA assigns medical device applications to 304 different product codes, some of which are specific to medical implants and some of which are not. Galasso and Luo find that, after 1990, the growth rate of implants slows relative to non-implants, such that on average there are 0.1-0.4 fewer applications per year assigned implant codes, as compared to other medical devices. It’s a bit more complicated to classify patents, but echoing the FDA data, their approach finds that after 1990 the growth rate of patenting in implant technology classes slowed, averaging about 0.3-0.5 fewer patents per year, as compared to non-implant classes.
To close the story, in 1998, the US Congress passed the Biomaterial Access Assurance Act, which exempted the suppliers of materials for implants from liability, so long as the materials were not themselves dangerous and the suppliers did not design, test, or manufacture the implants. Subsequent to this, patenting recovered to pre-1990 levels, relative to non-implant medical devices.
To close out, let’s look at one more unusual policy that can reduce the profitability of certain kinds of innovation: the US patent office’s power of compulsory secrecy. Gross (2022) looks at the impact of this policy during World War II, a war where technological progress played an important military role (not only the atomic bomb, but also radar, penicillin, and improvements across the board in the manufacture and design of war machines).
During World War II, patent examiners could forward patent applications for inventions that “disclose[d] a matter related to national defense” (in their judgment) to a new internal office, the Patent Office War Division, which was staffed with technical staff from a variety of agencies (including the War Department). This office had the power to issue secrecy orders for patent applications. When this happened the patent examination process was halted, and more dramatically, the applicant was instructed not to disclose the invention or face fines, jail time, and forfeiture of the patent! They could still sell or license their invention to the US government, but otherwise they were more or less prevented from commercializing the invention. The appeal process was poorly communicated and the bar for a successful appeal was very hard to clear. Essentially, if you tried to patent an invention that the patent office believed would reveal important technological secrets to the enemy, you were not only denied a patent, you were also denied the opportunity to commercialize your invention at all! This power was widely used, with about 1 in 25 patent applications ordered secret. What’s nice for study purposes is that most of the secrecy orders were lifted once the war ended, letting us identify and assess the impacts of the policy.
To begin, Gross shows that secrecy orders were not just dead letters: they really did stop firms from commercializing their inventions. Gross can show this most cleanly for the Du Pont chemical company. He goes over Du Pont’s patents during the war, and pulls out 572 word stems that appear in the titles of chemical patents that relate to new chemical compounds or processes. He then looks for these word stems in the 1944 and 1946 editions of Du Pont’s product catalogs. While about 10% of word stems in public patents appear in these catalogs, essentially none of the new word stems in secret patents do. He can also do a broader version of this exercise, looking at the word stems in all patents and comparing their frequency in secret and public patents to their mentions in contemporaneous books, using the Google Books corpus. Word stems that are more commonly used in secret patents suddenly appear more frequently in books after the war ended, suggesting the secrecy orders really did conceal new ideas from the public during the war.
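The flavor of this word-stem exercise can be sketched in a few lines. The stemming rule, patent titles, and catalog text below are all made up for illustration; the paper’s actual text processing is more sophisticated:

```python
import re

def stems(text, min_len=4):
    # Crude stand-in for stemming: lowercase each word and keep its first
    # six characters, ignoring very short words.
    words = re.findall(r"[a-z]+", text.lower())
    return {w[:6] for w in words if len(w) >= min_len}

# Hypothetical patent titles and catalog text, for illustration only.
public_titles = "Polymerization process for synthetic elastomers"
secret_titles = "Fluorinated refrigerant compounds"
catalog_text = "Du Pont elastomers and polymerization products"

# Stems from public patents that show up in the product catalog...
public_overlap = stems(public_titles) & stems(catalog_text)
# ...versus stems from secret patents, which (if secrecy binds) should
# be absent from the catalog.
secret_overlap = stems(secret_titles) & stems(catalog_text)

print(sorted(public_overlap))  # ['elasto', 'polyme']
print(sorted(secret_overlap))  # []
```

Gross’ finding is essentially that, in the real data, the secret-patent overlap with Du Pont’s catalogs looks like the second line: close to empty.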
Gross goes on to look at what happened to innovation after the war. He first shows that firms which were issued a higher share of secrecy orders were less likely to patent in those technologies, even after the war ended, suggesting that firms may have abandoned R&D in these fields (indeed, firms facing more secrecy were also less likely to patent in general later). And there appear to have been negative spillovers as well. The longer a patent is kept secret, the fewer citations it goes on to receive, and the fewer patents use novel words that it introduces.
Let’s briefly take stock. The reverse push policies we considered worked, but only a bit: stem cell research went down a lot, but only for a few years, and while some chemistry labs shifted away from working with dangerous chemicals, the overall volume of research wasn’t much changed. Possibly they just weren’t very restrictive policies. Reverse pull policies seem more effective, at least for the contexts we studied: higher fuel prices reduced patenting of fossil fuel auto tech, generic drug competition reduced early stage research, liability reduced patenting (through a kind of convoluted channel driven by supplier responses), and compulsory invention secrecy seems to have pushed R&D away from the affected technologies.
None of that is too surprising, given the robust evidence we have that normal push and pull policies seem to work. But there are a few other nuances that I think are worth highlighting.
First, in many cases, the impact of these policies was actually to boost innovation in areas not impacted by the policy. We noted this already in Galasso, Luo, and Zhu’s study of the impact of safety regulations on chemistry research. The most impacted labs appear to change the kind of research they do, shifting away from work with dangerous unfamiliar chemicals, but not decreasing their overall research output. That implies they’re increasing their output of research using safer and more familiar chemicals.
This is also a major theme in my post Pulling More Fuel Efficient Cars Into Existence: higher fuel prices appear to induce more innovation among electric and hybrid cars, even as it depresses R&D reliant on fossil fuels. Branstetter, Chatterjee, and Higgins (2022) also document a similar phenomenon in their study of the drug industry. Firms facing more generic drug competition tended to increase their investment in early stage research for biologic health products in those more competitive health categories. Importantly, for a variety of reasons, biologics are much more immune to competition from generics.
A second important nuance is that the effects of these policies are pretty uneven across R&D performers. A pretty common finding across these papers is that these kinds of reverse push and pull policies have their strongest impact on marginal R&D performers and their weakest impact on leaders. That is, they mostly reduce total R&D by driving out the research laggards, not by slowing down the research leaders.
To start, let’s return to human embryonic stem cell research. Why did the policy appear to work only for a few years?
Well, researchers are not passive receivers of policy; they have their own agency and will use available resources to pursue their interests if feasible. US scientists interested in continuing research on human embryonic stem cells had several options open to them. For example, they could turn to private and state support for stem cell research, or partner with foreign scientists who could receive public support from their governments. They could also restructure their labs to continue receiving US public funding for permissible research. And this basically seems to be what happened.
In the figure above, we further partition the citations to papers on human embryonic stem cells into more fine-grained categories, and again compare citations received over time to a baseline. On the left, we split citations into papers by authors at universities ranked among the top 25 in the US and the rest. On the right, we split citations into those by collaborations between US and foreign authors, and those authored solely by US authors. We can see a stronger rebound by scientists at top universities (who likely have access to a wider array of resources for conducting research) and those who are able to collaborate with scientists abroad. In this case, enough scientists at top universities and with robust foreign networks were able to find work-arounds that the aggregate impact of the policy was almost totally eliminated after a few years.
Galasso and Luo (2022), looking at the impact of supplier liability, obtain very analogous results. Just as there were ways for well-resourced labs to work around restrictions on federal research funding, well-resourced firms were better positioned to weather shocks to supplies. For example, firms could still source medical supply inputs if they were able to draw up sufficiently strong contracts and insurance to protect suppliers from liability, and this was easier for larger firms. Firms could also work with foreign suppliers who were less exposed to the US legal system. Galasso and Luo find that while every firm in the medical implants sector reduced patenting in the wake of supplier constraints, the effect was substantially smaller for the firms accounting for the largest share of patents. They also find that foreign firms (which presumably had readier access to foreign suppliers) were less impacted.
We see echoes of this result in Branstetter, Chatterjee, and Higgins (2022), which looked at the impact of generic drugs on early stage innovation. It turns out that firms that have recently introduced more drugs in a health category are less affected by the extent of generic competition. Concretely, suppose you have two firms in some health category that are similar, except that one of them is in the top 25% for product introductions in this category and the other is not. If generic competition ramps up by the same amount for each firm, the typical firm will sharply curtail its early stage research in this category - it quits and goes somewhere else. But facing the same increase in generic competition, the leader won’t reduce its early stage research by a statistically detectable amount.
We even see the same thing, pretty strongly, for firms that ran into mandatory invention secrecy. As noted above, one of Gross’ findings is that firms that faced more secrecy orders during the war were less likely to patent in the affected technology classes (or even at all) after the war. But it turns out this effect is driven entirely by small firms (defined here as those with fewer than 20 patents prior to 1940) and new firms. Large incumbents don’t seem to be impacted by their secrecy experience after the war.
In other words, one of the impacts of reverse pull and push policies is likely to be an increase in the concentration of research among leading organizations.
A related question is whether we have any evidence on whether these policies also impact R&D projects differently based on their potential. Do they curtail the highest value and lowest value projects equally? Or, as with firms, do the policies disproportionately cull low potential projects and leave the high potential ones unaffected?
The evidence we have on this question isn’t totally clear to me. Branstetter and coauthors use a measure of the novelty of early stage research (based on whether a new compound in a preclinical or clinical trial is the first in its class) and don’t find much difference in the impact of generic competition on highly novel versus less novel drugs. The effect is relatively uniform. Galasso and Luo look to see if the impact of supplier constraints on medical implant materials is concentrated in high or low cited patents, finding a bit of a U-shaped result – the most negatively impacted patents were the least and most highly cited ones. The ones in the middle were relatively insulated from the policy. On the other hand, Gross performs a similar exercise, looking to see if the negative impact of wartime secrecy differs across levels of subsequent use (as measured by how many later patents use the same novel word stems as the secret patent). He finds being declared secret has no effect on whether the novel words in the secret patent go on to be used in more than 50 subsequent patents; but it does have a strong negative impact on whether more than 10 later patents use the same words. That suggests high impact patents are not really impacted by their experience with wartime secrecy, but more marginal ones are. So we have a variety of conflicting evidence across these three studies.
One closing thought. Why is it so much more common to study conventional push and pull policies, rather than reverse push and reverse pull policies? I wonder if it’s partly due to political economy reasons about what kinds of policies get implemented in the first place. The beneficiaries of traditional push and pull policies tend to be a relatively small group of R&D performers, who benefit from cheaper research or more profits, while the costs are broadly distributed across consumers (maybe they have to pay a bit more in taxes, for example). A small group with the potential to benefit substantially from a new policy has a strong incentive to make the case for the policy, and relatively low costs to coordinate the affected players to jointly contribute to making that case. Broader society faces large coordination costs, and since the costs are widely shared, each individual only has weak incentives to register their displeasure. So maybe we get a lot of traditional push and pull policies due to this asymmetry of the benefits and costs.
But for reverse push and pull policies, these dynamics work in, well, reverse. Now the costs of policy are concentrated on a small group of R&D performers, but the benefits are (perhaps) broadly shared by society. It’s costly for society at large to coordinate and make the case for reverse-push and reverse-pull policies, and individual incentives to do so might be low. But R&D performers are relatively small in number and these policies may impose major costs on them. For them, it’s worth putting up a big fight about it and it’s not too hard to coordinate everyone to do so.
Obviously this isn’t always the case, since we’ve just concluded by looking at a few studies of reverse push and pull policies. But we can see some evidence for this view even in the policies considered here.
Human embryonic stem cell policy wasn’t further ratcheted up in 2004, after labs had found work-arounds
Studies meant to inform the efficacy of carbon taxes do not actually study carbon taxes, but study changes in fuel/energy prices; perhaps because it’s proven hard for countries to implement carbon taxes!
Medical implant manufacturers successfully lobbied for the Biomaterial Access Assurance Act, which (eventually) exempted their suppliers from many forms of liability
Compulsory invention secrecy was most heavily used during the biggest war in human history. Hardly ordinary times!
So it may well be that the reverse push and pull policies we actually observe don’t work all that well because, if they worked better, opposition to them would become increasingly sharp.
Thomson, James A., Joseph Itskovitz-Eldor, Sander S. Shapiro, Michelle A. Waknitz, Jennifer J. Swiergiel, Vivienne S. Marshall, and Jeffrey M. Jones. 1998. Embryonic Stem Cell Lines Derived from Human Blastocysts. Science 282 (5391): 1145-1147. https://doi.org/10.1126/science.282.5391.1145
Furman, Jeffrey L., Fiona Murray, and Scott Stern. 2012. Growing Stem Cells: The Impact of Federal Funding Policy on the U.S. Scientific Frontier. Journal of Policy Analysis and Management 31 (3): 661-705. https://doi.org/10.1002/pam.21644
Galasso, Alberto, Hong Luo, and Brooklynn Zhu. 2023. Laboratory safety and research productivity. Research Policy 52 (8). https://doi.org/10.1016/j.respol.2023.104827
Clancy, Matthew S., and GianCarlo Moschini. 2017. Mandates and the Incentive for Environmental Innovation. American Journal of Agricultural Economics 100 (1): 198-219. https://doi.org/10.1093/ajae/aax051
Branstetter, Lee, Chirantan Chatterjee, and Matthew J. Higgins. 2022. Generic competition and the incentives for early-stage pharmaceutical innovation. Research Policy 51 (10). https://doi.org/10.1016/j.respol.2022.104595
Finkelstein, Amy. 2004. Static and Dynamic Effects of Health Policy: Evidence from the Vaccine Industry. The Quarterly Journal of Economics 119 (2): 527–564. https://doi.org/10.1162/0033553041382166
Galasso, Alberto, and Hong Luo. 2022. When Does Product Liability Risk Chill Innovation? Evidence from Medical Implants. American Economic Journal: Economic Policy 14 (2): 366-401. https://doi.org/10.1257/pol.20190757
Gross, Daniel P. 2022. The Hidden Costs of Securing Innovation: The Manifold Impacts of Compulsory Invention Secrecy. Management Science 69 (4): 2318-2338. https://doi.org/10.1287/mnsc.2022.4457