
What if we could automate invention?

Growth theory in the shadow of artificial general intelligence

Published on Sep 06, 2022

These are weird times. On the one hand, scientific and technological progress seem to be getting harder. Add to that slowing population growth, and it’s possible economic growth over the next century or two might slow to a halt. On the other hand, one area where we seem to be observing rapid technological progress is in artificial intelligence. If that goes far enough, it’s easy to imagine machines being able to do all the things human inventors and scientists do, possibly better than us. That would seem to pull in the opposite direction, leading to accelerating and possibly unbounded growth; a singularity.

Are those the only options? Is there a middle way? Under what conditions? This is an area where some economic theory can be illuminating. This article is a bit unusual for New Things Under the Sun in that I am going to focus on a small but, I think, important part of a single 2019 article: “Artificial Intelligence and Economic Growth” by Aghion, Jones, and Jones. There are other papers on what happens to growth if we can automate parts of economic activity,1 but Aghion, Jones, and Jones (2019) is useful because (among other things) it focuses on what happens in economic growth models if we automate the process of invention itself.

We’ll see that automating invention does indeed lead to rapidly accelerating growth, but only if you can completely automate it. If not, and if the parts you can’t automate are sufficiently important, then Aghion, Jones, and Jones show growth will be steady: no singularity. I’m going to try to explain their results using a simplified model that I think gives the intuitions but doesn’t require me to write any math equations.

A Baseline: Human Driven Innovation

Before getting into Aghion, Jones, and Jones’ model, let’s see what these kinds of models predict would happen if innovation continued to be a mainly human endeavor.

To start, we need a way to measure technological progress. For this simplified model, things will be easier if we can just assume technology proceeds in discrete steps, so I’m going to use something a little unusual for economics: the Kardashev scale. This is a hypothetical measure of a civilization’s technological level based on the amount of energy it can harness. In the usual formulation, civilizations come in three types. A Type 1 civilization can harness all the energy emitted by a parent star that reaches its home planet. A Type 2 civilization can harness all the energy emitted by its parent star. A Type 3 civilization can harness all the energy emitted by its galaxy!

A typical Kardashev Scale. Source: Wikipedia

The differences between each type are gigantic. It’s estimated that a Type 2 civilization would use about 10^10 times as much energy as a Type 1 civilization, and a Type 3 civilization about 10^10 times as much as a Type 2 civilization. Let’s make things a bit more manageable by creating smaller 0.1 Kardashev increments. A Type 1.1 civilization uses 10 times as much energy as a Type 1.0 civilization, a Type 1.2 civilization uses 10 times as much energy as a Type 1.1, and so forth. We can think of a staircase that goes up three levels: the first floor above ground is a Type 1 civilization, the second floor is a Type 2 civilization, the third floor is a Type 3 civilization, and there are ten steps on the staircase between each floor. By this definition, we are currently sitting at something like a Type 0.7 civilization, since the total energy reaching Earth from the sun is maybe a thousand times as much as the energy our civilization currently uses.
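
If you want to see the arithmetic behind those 0.1 increments, here is a minimal sketch in Python; the function and its values are just illustrative conveniences of mine, not anything from the paper:

```python
def energy_ratio(level_a: float, level_b: float) -> float:
    """How many times more energy a Type level_b civilization harnesses than a Type
    level_a civilization, under the 10x-per-0.1-step convention used in this post."""
    return 10 ** ((level_b - level_a) * 10)

print(f"{energy_ratio(1.0, 2.0):.0e}")  # 1e+10: a Type 2 uses ten billion times the energy of a Type 1
print(f"{energy_ratio(0.7, 1.0):.0f}")  # 1000: a Type 1 harnesses ~1,000x what we use today
```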

We’ll measure the rate of technological progress by the length of time it takes us to climb a 0.1 increment up the Kardashev scale. Let’s now make a few assumptions about how economic progress happens. They’re simple and unrealistic.

  • Everyone in the world devotes themselves full time to inventing.

  • Global population grows by 0.7% per year, which means it doubles every 100 years.

  • Inventing gets harder. Every 0.1 step up in our Kardashev scale takes twice as many inventor-years to achieve.

I’ll re-examine these assumptions towards the end of this post. But in our baseline scenario without any automation, this set of assumptions means civilization climbs one 0.1 step up the Kardashev scale every century. Each step is twice as “hard” as the last, in the sense that it takes twice as many inventor-years, but the growth rate of the population ensures the number of inventors also doubles every century, so the overall growth rate is steady. Invention gets twice as hard, but there are twice as many inventors per century.

We can also see that if we tinkered with the growth rate of inventors, the growth rate of the economy would change. If population growth rises to 1.4% per year, the population of inventors doubles every 50 years, and we advance two Kardashev steps every century. On the other hand, if population growth stopped, then our growth rate would get cut in half with each 0.1 step up the Kardashev scale. We would still advance, but each step would take twice as long as the last to accumulate enough inventor-years.
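
Here is a minimal simulation of this baseline, written in Python as an illustration. The starting values (a normalized inventor pool of 1 and a first-step requirement of 144 inventor-years) are assumptions chosen so the first step takes about a century at 0.7% growth, not calibrated figures:

```python
def simulate(pop_growth: float, centuries: int = 5) -> list[float]:
    """Kardashev level at the end of each century, starting from Type 0.7, when the
    whole (growing) population invents full time and each 0.1 step needs twice as
    many inventor-years as the last. Starting values are illustrative assumptions:
    an inventor pool normalized to 1 and a first-step requirement of 144 inventor-years,
    chosen so the first step takes about a century at 0.7% annual growth."""
    inventors, required = 1.0, 144.0
    level, progress, history = 0.7, 0.0, []
    for year in range(1, centuries * 100 + 1):
        progress += inventors              # inventor-years accumulated this year
        inventors *= 1 + pop_growth        # the population (all inventors) grows
        while progress >= required:        # enough effort to climb the next 0.1 step
            progress -= required
            required *= 2                  # the next step is twice as hard
            level = round(level + 0.1, 1)
        if year % 100 == 0:
            history.append(level)
    return history

print(simulate(0.007))  # [0.8, 0.9, 1.0, 1.1, 1.2]: one step per century
print(simulate(0.014))  # roughly two steps per century, after a brief transient
print(simulate(0.0))    # growth slows: each step takes twice as long as the last
```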

Automating Invention

Now let’s tweak this model. Instead of humans doing all the inventing, let’s assume robots can do it and humans can relax. The key difference between this model and the last is that human population growth is a matter of fertility choices, and ever since we escaped the Malthusian trap, those don’t seem to depend much on the size of the economy. Specifically, we assumed the human population grew at 0.7% per year, no matter what our Kardashev level was. Robots, though, are something we build using economic resources. That means, as the economy grows larger, we are able to build more robots.

Specifically, let’s assume, like energy, the number of robots we can build also increases by 10x every time we go up a step of the Kardashev scale. This results in a radically different dynamic than when we relied solely on human inventors. Now, every time we climb 0.1 steps up the Kardashev scale, we can throw 10x as many (robot) inventors at climbing the Kardashev scale as we could during the last step. True, innovation gets harder and it takes twice as many (robot) inventors to advance with each step, but since we get 10x as many (robot) inventors at each step, we still advance in 1/5 the time with each step. If it takes a century to get from 0.6 to 0.7 (roughly where we are today), then it takes twenty years to get from 0.7 to 0.8, four years to get from 0.8 to 0.9, and under one year to go from 0.9 to 1.0! This acceleration continues at a blistering pace: once we reach a Type 1 civilization, we’ll get to a galaxy-spanning Type 3 civilization in less than three months!

The Pace of Progress with Robot Inventors

As with the human inventor baseline, we can also tinker with our assumptions in this model to see what happens. Suppose every 0.1 step up the Kardashev scale only increases our ability to manufacture robots by 4x, instead of 10x. In that case, each step still supplies more than the 2x additional robots needed to climb the next increment as quickly as the last, so growth still accelerates; just not as quickly. On the other hand, if the number of robots we can build increases by less than 2x for every increment up the Kardashev scale, then economic growth slows down over time (assuming the humans are still just relaxing and not trying to invent). The key is whether our ability to expand our inventive capacity grows faster than the rate at which invention gets harder.
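
A back-of-the-envelope sketch of those three cases, in Python, assuming (as the narration above does) that each step’s duration scales by 2 divided by the robot multiplier, and that the step we are finishing now took a century:

```python
def years_to_climb(robot_multiplier: float, steps: int = 20, last_step_years: float = 100.0) -> float:
    """Total years to climb `steps` further 0.1 increments when each step needs twice
    the inventor-years of the last but the robot workforce is robot_multiplier times
    larger at each step. Each step's duration is therefore the previous duration
    times 2 / robot_multiplier (a deliberate simplification of the story above)."""
    total, duration = 0.0, last_step_years
    for _ in range(steps):
        duration *= 2 / robot_multiplier
        total += duration
    return total

# Starting from a step that took a century (roughly 0.6 -> 0.7, where we are today):
print(round(years_to_climb(10), 1))    # ~25 years for the next 20 steps (Type 0.7 -> 2.7)
print(round(years_to_climb(4), 1))     # ~100 years: still accelerating, just more slowly
print(round(years_to_climb(1.5), 1))   # ~126,000 years: robots grow slower than difficulty, growth decelerates
```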

Taking Stock

This exercise has a lot of simplifications but as a first approximation, it seems to capture our intuitions about the weirdness of our times. If innovation is getting harder, and population growth is expected to slow, then maybe economic growth will steadily slow down over time. On the other hand, if we can automate innovation, the exact opposite can happen (provided invention doesn’t get harder too fast).

The key point is that the second case has a self-amplifying dynamic that is absent from the first. Robot inventors improve the ability of the economy to make more robot inventors, which can lead to accelerating growth. Human inventors enjoy living in a richer economy, but their growth rate is independent of it.

Could we really jump from a Type 1 civilization to a Type 3 civilization in three months though, even in this simple model? Probably not, given our current understanding of the laws of physics. For example, it seems sensible to believe the universe’s speed limit would drastically slow down this process; the edge of the galaxy is close to a million light-years away, so maybe we can’t get a galaxy-spanning civilization for at least that many years.

That might seem like it’s missing the point of our illustrative model, but it actually points to something quite important: what tends to drive long run growth is not our strengths but our weaknesses. We’ll come back to that.

A More Realistic Model of Automating Invention

This model captures our intuitions well but it’s a bit too simple to help us think through the effects of automation because in this model automation is an all-or-nothing proposition. Either humans or the robots are the inventors. Aghion, Jones, and Jones propose a model that helps us think through the implications of a more realistic case where automation is partial but advancing in its capabilities. They suggest we think of the innovation pipeline as a very large number of distinct “tasks”, each of which is necessary and cannot be skipped. For example, we might divide the innovation pipeline into three discrete chunks: science, engineering, and entrepreneurship.

But we’re not actually fully automating any of those three yet, so let’s go further and subdivide each of those categories into more tasks. Some of the tasks in the “science” category might include doing a literature review, developing hypotheses, designing a research strategy, collecting data, analysis, and writing up results. But in fact, each of these tasks could be subdivided even further: a literature review might include tasks like doing Google Scholar searches, looking through citations, scanning titles and abstracts, and so forth.

The point is there are a huge number of tasks that are necessary to advance technology and we can think about what happens as more and more of them get handed off to the robots. This lets us think through partial, incomplete, but advancing automation of invention.

And that seems realistic to me. We have already experienced many forms of automation of the innovation pipeline:

  • word processing automated certain typesetting tasks associated with writing up our results

  • statistical packages automate statistical analyses that used to be performed by hand or by writing custom code

  • Google has “automated” walking the library stacks and flipping through old journals

  • Elicit automates many parts of the literature review process

  • AlphaFold automates the discovery of the 3D structure of proteins

  • Automated theorem proving may do just what the name implies.

An artificial general intelligence or some kind of digital brain scan and emulation technology would, in a stroke, automate many remaining cognitive tasks. But even there, the automation of the innovation pipeline would only be partial, so long as we continued to rely on humans to do work in the physical world and do the entrepreneurial work that translates discoveries into social value.

Economic Growth With Incomplete but Advancing Automation of Invention

Once you automate all the tasks, you get to the pure robot inventor framework described earlier. But for now, let’s think about what happens when you’re living in a world where robots can’t do everything, but a greater and greater fraction of the tasks get automated each year. The math is simplest if we assume you automate a constant fraction of the remaining tasks each year. To illustrate, let’s go halfway back to our original model, where humans did all the inventing. But now, in addition to assuming innovation gets harder, let’s imagine all the human inventors are collectively performing tasks that are getting steadily automated.

Specifically, to illustrate the point, let’s assume every year we figure out how to automate 2.3% of the remaining innovation tasks that human scientists, inventors, and entrepreneurs had been doing. For example, at the dawn of the computer era, maybe humans did 100% of the tasks in the innovative pipeline. In the next year, robots (well, computers actually) handled 2.3% of those tasks, and humans focused on the remaining 97.7%. The year after that, computers take over 2.3% of the remaining 97.7% of the tasks, and humans do the remaining 95.4%. Over time, this really adds up. If we steadily chip away at 2.3% of what’s left each year, after a century humans are responsible for just 10% of the tasks they did at the outset. The other 90% is done by the robots. Importantly though, in this example, while the robots get closer and closer to handling all of the innovation pipeline, they never quite get there.
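
A couple of lines of Python confirm that arithmetic:

```python
remaining = 1.0
for year in range(100):
    remaining *= 1 - 0.023    # automate 2.3% of whatever tasks humans still do
print(round(remaining, 3))    # 0.098: after a century, humans handle about 10% of the original tasks
```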

So what happens to growth in this case? It’s tempting to think we get some kind of mix of the two scenarios previously discussed: maybe we start with basically our humans-only scenario - advancing 0.1 steps up the Kardashev scale every century - but then by the end of a century, when robot inventors do 90% of the innovative tasks, we’re very close to the growth rate we would get under a pure-robots scenario. Since the economic growth rate under humans is constant, while the economic growth rate under robots is accelerating, an average of the two would also accelerate, though maybe not as quickly as in the pure-robots case. So we might think steadily advancing automation leads to accelerating economic growth.

In fact, this is not what happens. Recall, we have assumed each of the tasks in the innovation pipeline needs to be completed. Invention has started to resemble a class project where each student is responsible for a different part of the project and the teacher won’t let anyone leave until everyone is done. Even though the robots can complete their part of the invention process more and more quickly, we can’t advance another 0.1 steps up the Kardashev scale until the human inventors finish the part of the innovation tasks that only they know how to do.

To see how fast we advance up the Kardashev scale, let’s focus on the tasks that are not automated during some interval of time. It is the time needed to complete these non-automated tasks that determines how long it takes to advance up the Kardashev scale, just as it is the slowest students on a class project who determine when everyone can leave. Assume every task needs twice as many inventor-years for every 0.1 step up the Kardashev scale.

Fix your attention on a single task that is not automated on our way up the next 0.1 step of the Kardashev staircase. What happens to this task can stand in for what happens to all the non-automated tasks, and those are the bottleneck to advancing up the Kardashev staircase. To climb the next 0.1 step, we are going to need to supply twice as many inventor-years to this task as we did on the last step, and those inventor-years can only come from human inventors.

From the perspective of getting this task done, the pool of available human inventors is growing by 3% per year: 0.7% from general population growth, plus 2.3% from labor that is freed up when other tasks people do get automated. With the supply of inventors growing 3% per year, it takes roughly 23 years for the number of inventors working on this task to double.

In other words, with steadily advancing automation, each non-automated task needed to climb the next 0.1 step gets finished after about 23 years. The automated tasks we might finish more quickly, but that doesn’t help us, because we can’t advance until all the innovative tasks are complete. So we climb one 0.1 step up the Kardashev staircase every 23 years. Crucially, this rate is constant and does not accelerate. Whether robots do 1% or 99% of the tasks involved in innovation, we advance at one step every 23 years.

Like the other scenarios, we can tweak this one too to see how things change. If either the rate of automation or the rate of population growth increased, we would experience faster growth, though growth would never accelerate continuously so long as some tasks were not automated. If either rate slowed, we would experience slower growth. Notably, automation makes steady economic growth possible even if the population of human inventors stagnates. For example, if the population of inventors is stuck at 0% growth per year, the human-only innovation tasks still experience a 2.3% annual increase in the supply of inventors; it’s just that these inventors come from tasks that get automated, rather than from newly born babies. In fact, increasing automation of invention means it’s possible for the economy to keep growing even if the population of inventors is shrinking, so long as the rate of automation exceeds the rate of population decline.
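
Here is a sketch of this scenario in the same spirit as the earlier ones. It is a stylized reading of the argument above, not the exact model in Aghion, Jones, and Jones, and the steady-state starting point is an assumption chosen so the step times come out flat from the first step:

```python
import math

def step_times(pop_growth: float = 0.007, automation_rate: float = 0.023, steps: int = 8) -> list[float]:
    """Years per 0.1 Kardashev step when the non-automated tasks are the bottleneck.
    The pool of human inventors available to any one non-automated task grows at
    pop_growth + automation_rate (babies plus labor freed from newly automated tasks),
    and each step needs twice the inventor-years of the last."""
    g = pop_growth + automation_rate
    inventors = 1.0
    required = inventors / g          # start in steady state so step times come out flat
    times = []
    for _ in range(steps):
        # years t until cumulative effort inventors * (e^(g*t) - 1) / g reaches `required`
        t = math.log(required * g / inventors + 1) / g
        times.append(round(t, 1))
        inventors *= math.exp(g * t)  # inventor pool at the end of the step
        required *= 2                 # the next step is twice as hard
    return times

print(step_times())                    # [23.1, 23.1, ...]: one step every ~23 years, never accelerating
print(step_times(pop_growth=0.0))      # ~30-year steps: automation alone sustains steady growth
print(step_times(automation_rate=0.0)) # ~99-year steps: back to the humans-only baseline
```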

But where are the bodies buried?

The preceding economic parable is meant to convey some important intuitions, but it’s very simplified. If you’re not an economist, you might wonder if a more realistic story would overturn some of the major conclusions: where are the bodies buried? So in this section, I want to highlight a few critical assumptions underpinning the above and talk a bit about what might happen if you change them.

Probably the most critical assumption is that each of these innovative tasks is essential,2 in the sense that it’s not really feasible to do less of one task and more of another and get the same output. As a concrete example, suppose robots are able to analyze data without any human effort, but humans are needed for data collection (for example, they’re the only ones who can run experiments). If you can make just as many discoveries by running far fewer experiments and doing much more data analysis, then you can substitute a lot of robot labor for human labor, even though each of them can only do one kind of task. Essentially, in this kind of scenario, even though robots can’t actually do every kind of task, they might as well be able to, since the tasks they can do are good substitutes for the ones they can’t. That gets you a result that’s more like the total automation scenario: accelerating growth. For growth to be steady even as we automate more and more of the innovation pipeline, the stuff only humans can do has to be an important bottleneck. My own view is that this is probably true for a large subset of innovation tasks.
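
To see why substitutability matters so much, here is a toy calculation of my own (not Aghion, Jones, and Jones’ specification or calibration) that aggregates ten tasks with a CES function. When the tasks are complements, output gets dragged toward the scarce, human-only task; when they are good substitutes, the abundant robot tasks carry it:

```python
def ces_output(task_inputs: list[float], rho: float) -> float:
    """CES aggregate (mean of x_i^rho) ** (1/rho) over innovation tasks.
    rho < 0: tasks are complements, so output is dragged toward the scarcest task.
    rho close to 1: tasks are good substitutes, so abundant tasks cover for scarce ones."""
    n = len(task_inputs)
    return (sum(x ** rho for x in task_inputs) / n) ** (1 / rho)

# Nine tasks handled by ever-more-capable robots, one stuck with scarce human labor:
tasks = [1000.0] * 9 + [1.0]

print(round(ces_output(tasks, rho=0.5), 1))   # ~815.7: substitutes, the robot tasks mostly carry output
print(round(ces_output(tasks, rho=-1.0), 1))  # ~9.9: complements, the lone human task drags everything down
```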

A variety of other assumptions turn out to be less important. The weird Kardashev system of measuring technology that I used is just an explanatory tool that lets me get away with not having to discuss derivatives. We could easily swap it for a more standard measure of technology based on total factor productivity, which grows at a continuous rate, for example. We could also arbitrarily cap the advance of technology at some maximum level (for example, by asserting the speed of light is a hard stop on the rate of expansion) instead of assuming growth can go on forever. In that case, the model is as described until you get to the end of science, and then you’re stuck.

It’s also not very important that this economy doesn’t have capital or labor or other inputs to produce goods. Everything still works out fine if you add those details. It’s also not super important that we assumed the entire world is inventors. If we said only a fraction of the population was inventors, we would just need to swap the growth rate of the global population for the growth rate of the inventors. Lastly, in this example, we made our lives easier by assuming all sorts of stuff grew at constant rates (difficulty of technological advance, population, share of automated tasks). We don’t need to do that. In the real world, these things bounce around, so growth rates will bounce around too. In particular, if there is an unusually large leap in the number of tasks that are automated in a given year, that will lead to an unusually large leap in the growth rate. But over time, things will average out. Most importantly, growth won’t continuously accelerate into anything crazy so long as less than 100% of the innovation tasks are automated.

Finally, it’s important to note that steady economic growth over the long run can conceal quite a lot of upheaval going on under the hood. Just because advancing automation doesn’t bring about accelerating growth under these assumptions doesn’t mean everything will be smooth sailing. Economists have looked at things like the impact of automation on the wage rate, market power, unemployment, and others worry about political stability and freedom. I’m keeping the focus narrow today, but that doesn’t mean other concerns are not important.

Two Main Lessons

What are my main takeaways from this exercise?

First, as we might have guessed, if we can automate some parts of innovation, that leads to faster technological progress. But surprisingly, so long as each task is essential, what matters for growth is the rate at which we automate tasks, not the level (with one exception, discussed in the next paragraph). I think this is actually quite consistent with the experience of research during the computer age. As discussed earlier, we have seen a continuous succession of new technologies that allowed us to hand off previously human tasks to machines. And yet, through it all, overall growth has been steady or slowing.

Second, there is one place where the level of automation matters: if each task in the innovation pipeline is essential, then there is a big difference between automating all of invention and automating most of it. In these models, if we can automate the entire innovation pipeline, then growth accelerates and things get crazy at some point. But if we cannot automate everything, then the results are quite different. We don’t get acceleration at merely a slower rate - we get no acceleration at all.

This is because when economic tasks are essential, we are constrained not by what we are best at, but by what we are worst at. We saw a glimpse of this earlier in the discussion of how the speed of light might impose a minimum time for our civilization to become a galaxy-spanning Type 3 civilization. In that example, if robot inventors can’t figure out how to invent faster-than-light travel, then the ability to invent better robots at an ever faster rate is irrelevant. This weak link drastically slows the climb from Type 1 to Type 3, from three months to a million years.

This idea has also popped up in other articles I’ve written about. As discussed here, if innovation is about combining different ideas in novel ways, then the nature of combinatorics means the number of possible ideas grows at a faster than exponential rate. In other words, like the robot inventors, growth in the number of possible ideas accelerates. But as various models show, if there are constraints on our ability to locate good combinations from amidst this growing sea of possibilities, growth remains steady and exponential. Again, we eventually become constrained by our weakness, which is not the number of possible ideas but our ability to process them.

Lastly, looking to the future, writing this piece has made me more confident (though not certain) that economic growth is not going to make a clean break with history and enter a new stage of continuous acceleration during my lifetime. I suspect AI will take over more and more of the innovation tasks during my lifetime, but so long as there are essential things that humans need to do, we won’t actually tilt into a dynamic of ever-accelerating growth. And it seems to me AI is a long way from being able to do all the stuff human scientists, engineers, and entrepreneurs do.

But I’m not sure either. Predictions about the future are hard. Maybe an AI will rewrite the rules of the game and find ways to do all the tasks we do. But I suspect we’ll find the full automation of innovation is one of those problems where the remaining few percent of tasks turn out to be both more important and more difficult than we expected. Consequently, the day we hand off everything to the machines will seem to be perpetually around the corner, but never quite arrive.



Cited in the Above

Science is getting harder

Innovation (mostly) gets harder

Is technological progress slowing? The case of American agriculture

Combinatorial innovation and technological progress in the very long run

Cites the Above

Is technological progress slowing? The case of American agriculture

Combinatorial innovation and technological progress in the very long run

Are ideas getting harder to find because of the burden of knowledge?


Articles Cited

Aghion, Philippe, Benjamin F. Jones, and Charles I. Jones. 2019. Artificial Intelligence and Economic Growth. In The Economics of Artificial Intelligence: An Agenda, ed. Ajay Agrawal, Joshua Gans, and Avi Goldfarb. National Bureau of Economic Research. ISBN 978-0-226-61333-8

Jones, Charles I. 2020. The End of Economic Growth? Unintended Consequences of a Declining Population. NBER Working Paper 26651. https://doi.org/10.3386/w26651

Acemoglu, Daron, and Pascual Restrepo. 2019. Automation and New Tasks: How Technology Displaces and Reinstates Labor. Journal of Economic Perspectives 33(2): 3-30. https://doi.org/10.1257/jep.33.2.3

Nordhaus, William D. 2021. Are We Approaching An Economic Singularity? Information Technology and the Future of Economic Growth. American Economic Journal: Macroeconomics 13(1): 299-332. https://doi.org/10.1257/mac.20170105

Agrawal, Ajay, John McHale, and Alex Oettl. 2018. Finding Needles in Haystacks: Artificial Intelligence and Recombinant Growth. NBER Working Paper 24541. https://doi.org/10.3386/w24541

Agrawal, Ajay, John McHale, and Alexander Oettl. 2022. Superhuman science: How artificial intelligence may impact innovation. Brookings Center on Regulation and Markets Working Paper. Link

Comments
Steven Bond-Sith:

There is another answer to this question that neither of you considered. There is a more realistic branch of endogenous growth theory that Jones never talks about. In this branch of the literature, productivity growth is not ultimately limited by population growth and remains fully-endogenous, not semi-endogenous, because growth is driven by quality improvements, and population growth simply increases variety. Science can still get harder, but it gets harder in the variety dimension, not the quality dimension and hence remains fully-endogenous, not semi-endogenous. See Howitt 1999. Of course neither of these models is sufficiently sophisticated to capture the complexity of innovation, but the assumptions about innovation need to be sufficiently sophisticated to capture particular stylized facts and the ‘number of ideas’ isn’t quite sophisticated enough to capture these stylized facts. I have two papers that discuss these theories in depth, how the Semi-endogenous model leads to some perverse conclusions, and why the Schumpeterian branch of the literature is more suitable. There is also a huge empirical literature that favors the Schumpeterian branch. But I digress.

In the Howitt model there are four components of growth: increase in quality; increase in variety (which is a function of population growth because ideas become harder to find); and increase in capital and labor as usual. (In Jones 1995, on which Aghion et al. 2019 above is based, there are only the latter three components, hence the result). If you add AI, the behavior of the last two doesn’t change. (They have diminishing marginal returns, which are augmented to constant returns by increasing returns to scale for innovation. Arguably, if AI diverts capital away then “capital accumulation excluding AI” would decline, reducing this component). But if AI increases the share of work that can be applied to innovation, then its effort will be spread across both variety and quality improvement. It could perhaps be thought of as an additional factor in the ideas production function that is an imperfect substitute for labor. It would provide a short-term boost to growth via the variety channel, which will diminish as ideas become harder to find, so growth would ultimately return to the long-run rate. However, where it might provide for sustained growth is if the AI “tasks” (applied to quality improvement) are taxed differently than labor. If profit from AI is taxed less than labor, then it acts like a subsidy on AI-driven quality-improving R&D. If profit from AI is taxed more than labor, it will act like a tax on AI-driven quality-improving R&D. That is, if AI can increase the number of quality-improvement tasks that can be addressed, then there is a sustained increase in growth. However, this will be ultimately limited by how much taxes can incentivize AI-driven quality-improving innovation to be used for this purpose over increasing variety (and the extent that it crowds out labor-driven quality-improving innovation). Since variety is a response to quality improvement (see Peretto 2018 EER) this ability is very limited. Overall, it means a short-run boost to growth, but no change in trend growth (unless regulatory regimes change, but this is also true of labor-driven R&D).