
Are ideas getting harder to find because of the burden of knowledge?

More researchers per innovation, because you need more knowledge to solve frontier problems

Published on Aug 10, 2021

You're viewing an older release (#12) of this article, created on Jun 01, 2022. The latest release (#17) was created on May 17, 2023.

Innovation appears to be getting harder. A basket of indicators suggests that scientific discoveries of a given impact were more common in the past, and that getting one “unit” of technological innovation seems to take more and more R&D resources.

To take a few concrete examples:

  • The share of citations to recent academic work by other papers and by patents has been falling significantly

  • New papers have smaller chances of becoming a top cited paper, or of winning a Nobel prize within twenty years

  • While Moore’s law has held for a remarkable 50 years, maintaining the doubling schedule (twice the transistors every two years) takes twice as many researchers every 14 years.

You see similar trends for medical research - over time, more R&D effort is needed to save the same number of years of life. You see similar trends for agriculture - over time, more R&D effort is needed to increase crop yields by the same proportion. And you see similar trends for the economy writ large - over time, more R&D effort is needed to increase total factor productivity by the same proportion. Measured in terms of the number of researchers that can be hired, the resources needed to get the same proportional increase in productivity doubles every 17 years.
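To get a feel for what these doubling times imply, here's a quick back-of-the-envelope calculation (my own sketch, not from the underlying papers):

```python
# Back-of-the-envelope: if the research effort needed for a fixed
# proportional gain doubles every `doubling_time` years, how much
# more effort is needed after `years` years?

def effort_multiplier(years, doubling_time):
    """Effort needed relative to the start, given a fixed doubling time."""
    return 2 ** (years / doubling_time)

# Moore's law: researcher requirements double every 14 years.
print(round(effort_multiplier(50, 14), 1))  # ~11.9x over 50 years

# Economy-wide productivity: requirements double every 17 years.
print(round(effort_multiplier(50, 17), 1))  # ~7.7x over 50 years
```

In other words, at these rates, keeping innovation on trend for half a century requires roughly an eight- to twelvefold expansion of the research workforce.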

There are lots of issues with any one of these numbers. But taken together, the effects are so large that it does look like something is happening: it takes more to innovate over time.

Why?

The Burden of Knowledge

A 2009 paper by Benjamin Jones, titled The Burden of Knowledge and the Death of the Renaissance Man, provides a possible answer (explainer here). Assume invention is the application of knowledge to solve problems (whether in science or technology). As more problems are solved, we require additional knowledge to solve the ones that remain, or to improve on our existing solutions.

This wouldn’t be a problem, except for the fact that people die and take their knowledge with them. Meanwhile, babies are (inconveniently) born without any knowledge. So each generation needs to acquire knowledge anew, slowly and arduously, over decades of schooling. But since the knowledge necessary to push the frontier keeps growing, the amount of knowledge each generation must learn gets larger. The lengthening retraining cycle slows down innovation.

Age of Achievement

A variety of suggestive evidence is consistent with this story. One line of evidence is the age when people begin to innovate. If people need to learn more in order to innovate, they have to spend more time getting educated and will be older when they start adding their own discoveries to the stock of knowledge.

Brendel and Schweitzer (2019) and Schweitzer and Brendel (2021) look at the age of academic mathematicians and economists when they publish their first solo-authored article in a top journal: it rose from 30 to 35 over 1950-2013 (for math) and 1970-2014 (for economics). For economists, they also look at first solo-authored publication in any journal: the trend is the same. Jones (2010) (explainer here) looks at the age when Nobel prize winners and great inventors did their notable work. Over the twentieth century, it rose by 5 more years than would be predicted by demographic changes. Notably, the time Nobel laureates spent in education also increased - by 4 years.

Brendel and Schweitzer (2019) and Schweitzer and Brendel (2021) also point to another fact suggesting that the knowledge required to push the frontier has been rising: the number of references in mathematicians’ and economists’ first solo-authored papers is rising sharply. Economists in 1970 cited about 15 papers in their first solo-authored article, but 40 in 2014. Mathematicians cited just 5 papers in their debuts in the 1950s, but over 25 in 2013.

Outside academia, the evidence is a bit more mixed. In Jones’ paper on the burden of knowledge, he looked at the age when US inventors get their first patents and found it rose by about one year, from 30.5 to 31.5, between 1985 and 1998. But this trend subsequently reversed. Jung and Ejermo (2014), studying the population of Sweden, found the age of first invention dropped from a peak of 44.6 in 1997 to 40.4 in 2007. And a recent conference paper by Kaltenberg, Jaffe, and Lachman (2021) found the age of first patent between 1996 and 2016 dropped in the USA as well.

That said, there is some other suggestive evidence that patents these days draw on more knowledge - or at least, scientific knowledge - than in the past. Marx and Fuegi (2020) use text processing algorithms to match scientific references in US and EU patents to data on scientific journal articles in the Microsoft Academic Graph. The average number of citations to scientific journal articles has grown rapidly from basically 0 to 4 between 1980 and today. And as noted in this article, there’s a variety of evidence that this reflects actual “use” of the ideas science generates.

Splitting Knowledge Across Heads

But that’s only part of the story. In Jones’ model, scientists don’t just respond to the rising burden of knowledge by spending more time in school. They also team up, so that the burden of knowledge is split up among several heads.

The evidence for this trend is pretty unambiguous. The rise of teams has been documented across a host of disciplines. Between 1980 and 2018, the number of inventors per US patent doubled. Brendel and Schweitzer also show that the number of coauthors on mathematics and economics articles has risen sharply through 2013/2014. And Wuchty, Jones, and Uzzi (2007) documented the rise of teams in scientific production through 2000.

We can also take inspiration from Jones (2010) and look at Nobel prizes. The Nobel prize in physics, chemistry, and medicine has been given to 1-3 people in most years from 1901 to 2019. When more than one person gets the award, it may be because multiple people contributed to the discovery, or because the award is for multiple separate (but thematically linked) contributions. For example, the 2009 physics Nobel was one half awarded to Charles Kuen Kao "for groundbreaking achievements concerning the transmission of light in fibers for optical communication", with the other half jointly to Willard S. Boyle and George E. Smith "for the invention of an imaging semiconductor circuit - the CCD sensor."

The figure below gives the average number of laureates per contribution over the preceding 10 years. For the physics and chemistry awards, there’s been a steady shift: in the first part of the 20th century, each contribution was usually assigned to a single scientist; in the 21st century, an average of two scientists are awarded per contribution. In medicine, there was a sharp increase from 1 scientist per contribution to a peak of 2.6 in 1976, which has slightly declined since then, though it remains above 2.

Author calculations
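For the curious, the averaging behind a figure like this can be sketched as follows (the award data below is made up for illustration; the real figure uses the full Nobel records):

```python
# Sketch: 10-year trailing average of laureates per contribution.
# `awards` maps year -> list of contributions, each entry being the
# number of laureates credited with that contribution.

def laureates_per_contribution(awards, year, window=10):
    """Average laureates per contribution over the preceding `window` years."""
    counts = [n for y in range(year - window + 1, year + 1)
              for n in awards.get(y, [])]
    return sum(counts) / len(counts) if counts else None

# Illustrative data: e.g. the 2009 physics prize had two contributions,
# one credited to 1 laureate (Kao) and one to 2 (Boyle and Smith).
awards = {2009: [1, 2], 2010: [3], 2011: [2, 1]}
print(laureates_per_contribution(awards, 2011))  # (1+2+3+2+1)/5 = 1.8
```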

According to Jones, the reason for teams is that teams can bring more knowledge to a problem than an individual can. If that’s the case, then innovations that come from teams should tend to perform better than those created by individuals, all else equal. That does seem to be the case - research papers and patents by larger teams tend to receive more citations.

The Death of the Renaissance Man

By using teams to innovate, scientists and innovators reduce the amount of time they need to spend learning. They do this by specializing in obtaining frontier knowledge on an ever narrower slice of the problem. So Jones’ model also predicts an increase in specialization.

In Jones’ paper, specialization was measured as the probability that a solo inventor’s consecutive patents, applied for within 3 years of each other, fall in different technological fields. The idea is that the less likely inventors are to “jump” fields, the more specialized their knowledge must be. For example, if I apply for a patent in battery technology in 1990 and another in software in 1993, that would indicate I’m more of a generalist than someone who is unable to make the jump. Jones used data on 1977 through 1993, but in the figure below I replicate his methodology and bring the data up through 2010. Between 1975 and 2005, the probability a solo inventor patents in different technology classes, on two consecutive patents with applications within 3 years of each other, drops from 56% to 47%.

Author calculations

(While the probability does head back up after 2005, it remains well below prior levels, and it's possible this is an artifact of the data - see the technical notes at the bottom of this article if curious.)
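As a sketch of how this kind of replication works, here's a toy version of the calculation, assuming simplified patent records of the form (inventor, application year, technology class); the real analysis uses PatentsView's disambiguated inventor data, as described in the technical notes:

```python
# Sketch: share of consecutive solo-inventor patent pairs, applied for
# within `max_gap` years of each other, that fall in different
# technology classes. Records and classes here are illustrative.

def field_jump_rate(patents, max_gap=3):
    """patents: iterable of (inventor_id, application_year, tech_class)."""
    by_inventor = {}
    for inv, year, tech in patents:
        by_inventor.setdefault(inv, []).append((year, tech))
    pairs = jumps = 0
    for records in by_inventor.values():
        records.sort()  # order each inventor's patents by application year
        for (y1, t1), (y2, t2) in zip(records, records[1:]):
            if y2 - y1 <= max_gap:  # only pairs applied for within the window
                pairs += 1
                jumps += (t1 != t2)
    return jumps / pairs if pairs else None

patents = [("a", 1990, "batteries"), ("a", 1993, "software"),
           ("b", 1990, "optics"), ("b", 1992, "optics")]
print(field_jump_rate(patents))  # 0.5: one jump out of two eligible pairs
```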

Another way to measure specialization is to see if it has become more difficult to make a major intellectual contribution outside your own field. Hill et al. (2021) attempts to do this across all academic disciplines using data on 45 million papers. Specifically, they want to see if papers that are farther from the author’s prior expertise are more or less likely to be one of the top 5% most cited papers in its field and year.

Because they want a continuous measure of the distance between a new paper and an author’s prior work, they have to come up with a way to measure how “far” a paper is from an author’s prior expertise. They do this using each paper’s cited references. Papers that cite exactly the same distribution of journals as the author has cited in their previous work (over the last three years) are measured as having the minimum research distance of 0. Those that cite entirely new journals that the author has not cited at all in the last three years are measured as having the maximum research distance of 1. In general, the more unusual the cited references are for you, the farther the research is presumed to be from your existing expertise.

As shown below, the larger the distance of a paper from your existing work (which the paper calls a research pivot), the less likely a paper is to be among the top 5% most cited for that field and year. But each color tracks this relationship for a different decade.

From Hill et al. (2021)

We can see the so-called pivot penalty is worsening over time. If I made a big leap outside my domain of expertise and published a paper that cited none of the same journals as my prior work (a pivot equal to 1 in the figure above), in the 1970s that paper had a 4% chance of becoming a top 5% cited paper in its field and year. In the 1990s, it had a 3% chance. And in the 2010s, less than a 2% chance.

Lastly, let’s consider the Nobel prizes again. Since Nobel prizes are awarded for substantially distinct discoveries, winning more than one Nobel prize in physics, chemistry, or medicine may be another signifier of multiple specialties. Just three laureates have won more than one physics, chemistry, or medicine Nobel prize: Marie Curie (1903, 1911), John Bardeen (1956, 1972), and Frederick Sanger (1958, 1980). If second prizes arrive within 25 years of the first, then no one first awarded between 1959 and 1994 will turn out to be a multiple winner. There were 218 Nobel laureates between 1959 and 1994, compared to 207 between 1901 and 1958. That means there were 3 multiple Nobel laureates among the first 207, and 0 among the next 218.

A Natural Experiment

While the evidence discussed above is certainly consistent with Jones’ story, stronger evidence would be nice. Most of the above evidence is about how things have changed over time. But we should also be able to see differences across fields: the story predicts fields with “deeper” knowledge requirements should have bigger teams and more specialization. Jones (2009) provides evidence this is indeed the case for patents, and Agrawal, Goldfarb, and Teodoridis (2016) provide some strong complementary evidence from mathematics.

Suppose we wanted to conduct an experiment to test Jones’ story. What would that look like? We could take a set of similar fields and randomly raise the burden of knowledge in some but not others, then see if the fields with higher burdens responded by forming bigger teams, specializing, and spending more time in school. But to run an experiment like that, we would need to dump a bunch of new knowledge into some fields but not others. This isn’t easy to do in a lab. Agrawal, Goldfarb, and Teodoridis argue, however, that the collapse of the Soviet Union provides just such a quasi-experimental context.

The USSR had a history of making exceptional contributions to mathematics, but during the Cold War, Soviet advances in math were largely kept from the West. International travel and communication between Soviet and non-Soviet mathematicians were strictly controlled, publication was restricted, and few Russian-language journals were translated. Moreover, the extent of Soviet specialization differed across mathematical subfields. In some subfields, Soviet mathematicians had made major advances that were unknown in the West; in others, they had not. When the USSR collapsed in the years after 1990, all this new knowledge suddenly burst out of its confines. (This article discusses a related study on how, after the collapse of the USSR, Soviet emigrants seeded previously little-known Soviet knowledge across the West.) So, Agrawal, Goldfarb, and Teodoridis argue, the result was something quite like our ideal experiment: within mathematics, some subfields were unexpectedly bequeathed a lot more knowledge and others were not.

What happened in the subsequent years?

Well, across mathematics, the average number of coauthors on an article kept rising (as we’ve seen has been the case across all fields of innovation). But after 1990 it rose faster in the subfields where Soviets had been particularly strong (i.e., the fields that received the largest dump of new knowledge).

From Agrawal, Goldfarb, and Teodoridis (2016)

This result is not driven, for example, by the increased number of potential Soviet collaborators (since they were free to collaborate internationally after 1990). The results are the same, for example, if you look only at the size of teams on articles published in Japanese mathematics journals (which tended not to include coauthors from the former USSR) or if you compare countries with lots of immigration from the former USSR and those without.

Agrawal, Goldfarb, and Teodoridis also find some evidence that specialization increased more in response to the increased burden of knowledge. Their measure of specialization is the number of distinct mathematical topic codes assigned to articles published over the last five years - a smaller number is more specialized. The figure below plots the difference in the number of codes between mathematicians working in fields of Soviet strength and those working in fields of Soviet weakness. Prior to 1990, there’s no clear difference between the two, but after 1990 mathematicians working in fields of Soviet strength appear to begin narrowing their focus to a smaller number of topics.

From Agrawal, Goldfarb, and Teodoridis (2016)
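A toy version of this specialization measure might look like the following (the record format and topic codes are made up for illustration):

```python
# Sketch of the specialization measure: number of distinct topic codes on a
# mathematician's articles over the previous five years (fewer codes = more
# specialized). Articles are (year, set of topic codes); data illustrative.

def distinct_topics(articles, year, window=5):
    """Count distinct topic codes on articles in the `window` years up to `year`."""
    codes = set()
    for y, topics in articles:
        if year - window < y <= year:
            codes |= topics
    return len(codes)

articles = [(1988, {"05C", "05D"}), (1991, {"05C"}), (1993, {"11A", "05C"})]
print(distinct_topics(articles, 1993))  # 2 distinct codes in the 1989-1993 window
```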

Taken together, these results nicely complement the broader trends we’ve seen: in fields where the burden of knowledge increased faster, there was more specialization and a greater reliance on teams (the age of mathematicians was not studied).

Why are ideas getting harder to find?

So, to sum up, a host of papers documents that the productivity of research is falling: it takes more inputs to get the same output. Jones (2009) provides an explanation for why that might happen. New problems require new knowledge to solve, but using new knowledge requires understanding at least some of the earlier, more basic knowledge. Over time, the total amount of knowledge needed to solve problems keeps rising. Since knowledge can only be used when it’s inside someone’s head, we end up needing more researchers. That costs money.

A few closing thoughts.

First, Jones’ model isn’t the only possible explanation for the falling productivity of research. Arora, Belenzon, Patacconi, and Suh (2020) suggest the growing division of labor between universities and the private sector in innovation may be at fault. As universities increasingly focus on basic science and the private sector on applied research, there may be greater difficulty in translating science into applications. Bhattacharya and Packalen (2020) suggest the incentives created by citation in academia have increasingly led scientists to focus on incremental science, rather than potential (risky) breakthroughs. Lastly, it may also be that breakthroughs just come along at random, sometimes after long intervals. Maybe we are simply awaiting a new paradigm to accelerate innovation once again.

Second, where do we go from here? Is innovation doomed to get harder and harder? There are a few possible forces that may work in the opposite direction.

If breakthroughs in science and technology wipe the slate clean, rendering old knowledge obsolete, then it’s possible the burden of knowledge could drop. In fact, Jung and Ejermo (2014) suggest this may be a reason why the age of first patent declined in the mid-1990s: digital innovation became relatively easy and did not depend on deep knowledge. It would be interesting to see if the three measures discussed above tend to reverse in fields undergoing paradigm shifts.

On the other hand, the burden of knowledge may itself make breakthroughs more difficult! As discussed in the article on innovation in teams, there is some evidence that teams are less likely to produce breakthrough innovations. This might be because it’s harder to spot unexpected connections between ideas when they are split across multiple people’s heads. And Kaltenberg, Jaffe, and Lachman (2021) also find that older inventors are less likely to produce disruptive patents (though, recall, they also found the average age of first patent was falling). Taken together, it might be that a style of innovation more reliant on teams of older specialists has a harder time creating breakthrough innovations. In that case, the burden of knowledge can become self-perpetuating.

Alternatively, if knowledge leads to greater efficiency in teaching, so that students more quickly vault to the knowledge frontier, that could also reduce the burden of knowledge. Lastly, it may be possible for artificial intelligence to shoulder much of the burden of knowledge. Indeed, artificial general intelligence could hypothetically upend this whole model, if it disrupts the cycle of retraining and teamwork required of human innovators. I suppose we’ll know more in 20 years.

New articles and updates to existing articles are typically added to this site every two weeks. To learn what’s new on New Things Under the Sun, subscribe to the newsletter.


Cited in the above

Science is getting harder

Innovation (mostly) gets harder

Science is good at making useful knowledge

Highly cited innovation takes a team

Importing knowledge

Cites the above

How to accelerate technological progress

Remote work and the future of innovation

An example of successful innovation by distributed teams: academia

Science is getting harder

Related

Building a new research field


Articles Cited

Jones, Benjamin F. 2009. The Burden of Knowledge and the “Death of the Renaissance Man”: Is Innovation Getting Harder? The Review of Economic Studies 76(1): 283-317. https://doi.org/10.1111/j.1467-937X.2008.00531.x

Brendel, Jan and Sascha Schweitzer. 2019. The Burden of Knowledge in Mathematics. Open Economics 2(1): 139-149. https://doi.org/10.1515/openec-2019-0012

Schweitzer, Sascha and Jan Brendel. 2021. A burden of knowledge creation in academic research: evidence from publication data. Industry and Innovation 28(3): 283-306. https://doi.org/10.1080/13662716.2020.1716693

Jones, Benjamin F. 2010. Age and great invention. The Review of Economics and Statistics 92(1): 1-14. https://doi.org/10.1162/rest.2009.11724

Jung, Taehyun and Olof Ejermo. 2014. Demographic patterns and trends in patenting: Gender, age, and education of inventors. Technological Forecasting and Social Change 86: 110-124. https://doi.org/10.1016/j.techfore.2013.08.023

Kaltenberg, Mary, Adam B. Jaffe, and Margie E. Lachman. 2021. Invention and the Life Course: Age differences in patenting. NBER Working paper 28769. https://doi.org/10.3386/w28769

Marx, Matt, and Aaron Fuegi. 2020. Reliance on science: Worldwide front-page patent citations to scientific articles. Strategic Management Journal 41(9): 1572-1594. https://doi.org/10.1002/smj.3145

Wuchty, Stefan, Benjamin F. Jones, and Brian Uzzi. 2007. The Increasing Dominance of Teams in Production of Knowledge. Science 316(5827): 1036-1039. https://doi.org/10.1126/science.1136099

Hill, Ryan, Yian Yin, Carolyn Stein, Dashun Wang, and Benjamin F. Jones. 2021. Adaptability and the Pivot Penalty in Science. SSRN Working Paper. https://dx.doi.org/10.2139/ssrn.3886142

Agrawal, Ajay, Avi Goldfarb, and Florenta Teodoridis. 2016. Understanding the Changing Structure of Scientific Inquiry. American Economic Journal: Applied Economics 8(1): 100-128. http://dx.doi.org/10.1257/app.20140135

Arora, Ashish, Sharon Belenzon, Andrea Patacconi, and Jungkyu Suh. 2020. The Changing Structure of American Innovation: Some Cautionary Remarks for Economic Growth. Chapter in Innovation Policy and the Economy, Volume 20. https://doi.org/10.1086/705638

Bhattacharya, Jay, and Mikko Packalen. 2020. Stagnation and scientific incentives. NBER Working Paper 26752. https://doi.org/10.3386/w26752


Technical Notes

For patent data, I use US PatentsView data and their disambiguated inventor data. To calculate the probability of jumping fields, I use the primary US patent classification 3-digit class (as in Jones 2009). This patent classification system was discontinued in mid-2015, and it’s possible this is a contributing factor to the uptick observed after 2005: a patent applied for in 2006 only “counts” as a possible field jump if there was a second patent applied for before 2010 and granted before the classification system was discontinued in 2015. This selection effect might result in an increasingly unrepresentative sample of patents.
