
When Technology Goes Bad

Models of economic growth where sometimes technology gets you killed

Published on May 16, 2023

Innovation has, historically, been pretty good for humanity. Economists view long-run progress in material living standards as primarily resulting from improving technology, which, in turn, emerges from the processes of innovation. Material living standards aren't everything, but I think you can make a pretty good case that they tend to enable human flourishing better than feasible alternatives (this post from Jason Crawford reflects my views pretty well). In general, the return on R&D has been very good, and most of this website views innovation through the lens of how to get more of it. But technology is just a tool, and tools can be used for good or evil purposes. So far, technology has skewed towards good rather than evil, but there are some reasons to worry things may differ in the future.

Why is technology good for us, on average?

I think technological progress has skewed good through most of history for a few reasons.

First, invention takes work, and people don’t do work unless they expect to benefit. The primary ways you can benefit from invention are either directly, by using your new invention yourself, or indirectly, by trading the technology for something else. To benefit from trade, you need to find technologies that others want, and so generally people invent technologies they think will benefit people (themselves or others), rather than harm them.

Second, invention is a lot of work, and that makes it harder to develop technology whose primary purpose is to harm others. Frontier technological and scientific research is conducted by ever larger teams of specialists, and overall, pushing the scientific or technological envelope seems to be getting harder. The upshot of all this is that technological progress increasingly requires the cooperation of many highly skilled individuals. This makes it hard for people who want to invent technologies that harm others (even while benefitting themselves): while people who are trying to invent technologies to benefit mankind can openly seek collaborators and communicate what they are working on, those working on technologies to harm or oppress must do so clandestinely or be stopped.

Third and finally, the technological capabilities of the people trying to stop bad technology from being developed grow with the march of technological progress. Think of surveillance technology in all its forms: wiretaps, satellite surveillance, wastewater monitoring for novel pathogens, and so on. Since it's easier to develop technologies for beneficial use when you can be open about your work, progress tends to strengthen the hand of those empowered to represent the common interest. In a democracy, that process will tend to hand more powerful tools to the people trying to stop the development of harmful technologies.

Now - these tendencies have never been strong enough to guarantee technology is always good. Far from it. Sometimes technologies have unappreciated negative effects: think carbon-emitting fossil fuels. Other times, large organizations successfully collaborate in secret to develop harmful technology: think military research. In other cases, authoritarian organizations use technological power to oppress. But on the whole, I think these biases have mitigated much of the worst that technology could do to us.

But I worry a new technology - artificial intelligence - risks upending these dynamics. Most stories about the risks of AI revolve around AIs developing goals that are not aligned with human flourishing; such a technology might have no hesitation creating technologies that hurt us. But I don't think we even need to posit the existence of AIs with unaligned goals of their own to be a bit concerned. Simply imagine a smart, moderately wealthy, but highly disturbed individual teaming up with a large language model trained on the entire scientific corpus, working together to develop potent bioweapons. More generally, artificial intelligence could make frontier science and technology much easier, making it accessible to small groups, or even individuals without highly specialized skills. That would mean the historic skew of new science and technology being used for good rather than evil would be weakened.1

What does science and technology policy look like in a world where we can no longer assume that more innovation generally leads to more human flourishing? It’s hard to say too much about such an abstract question, but a number of economic growth models have grappled with this idea.

Don’t Stop Till You Get Enough

Jones (2016) and Jones (2023) both consider the question of the desirability of technological progress in a world where progress can sometimes get you killed. In each paper, Jones sets up a simple model where people enjoy two different things: having stuff and being alive. Throughout this post, you can think of "stuff" as meaning all the goods and services we produce for each other; socks and shoes, but also prestige television and poetry.

So let’s assume we have a choice: innovate or not. If we innovate, we increase our pile of stuff by some constant proportion (for example, GDP per capita tends to go up by about 2% per year), but we face some small probability we invent something that kills us. What do we do?

As Jones shows, it all depends on the tradeoff between stuff and being alive. As is common in economics, he assumes there is some kind of “all-things-considered” measure of human preferences called “utility” which you can think of as comprising happiness, meaning, satisfaction, flourishing, etc. - all the stuff that ultimately makes life worth living. Most models of human decision-making assume that our utility increases by less-and-less as we get more-and-more stuff. If this effect is very strong, so that we very quickly get tired of having more stuff, then Jones (2016) shows we eventually hit a point where the innovation-safety tradeoff is no longer worth it. At some point we get rich enough that we choose to shut down growth, rather than risk losing everything we have on a little bit more. On the other hand, if the tendency for more stuff to increase utility by less-and-less is weak, then we may always choose to roll the dice for a little bit more.
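To see the mechanics, here is a minimal sketch in Python. To be clear, this is my own illustration with made-up parameters, not Jones' actual formulation: utility is CRRA with a curvature parameter above one (so it is bounded above), plus a constant u_bar capturing the value of simply being alive, and death is normalized to zero utility.

```python
# A toy version of the innovate-or-not choice (illustrative only, not
# Jones' exact model). Flow utility is CRRA with gamma > 1 plus a
# constant u_bar for the value of simply being alive; death is
# normalized to utility 0. With gamma > 1, utility is bounded above,
# so as consumption c grows, the gamble eventually stops being worth it.

def utility(c, gamma=2.0, u_bar=1.0):
    """Flow utility of consumption; bounded above when gamma > 1."""
    return u_bar + c ** (1 - gamma) / (1 - gamma)  # = u_bar - 1/c when gamma = 2

def keep_innovating(c, g=0.02, p=0.001, gamma=2.0, u_bar=1.0):
    """Does the risky 2% raise beat standing pat at a 0.1% death risk?"""
    safe = utility(c, gamma, u_bar)
    risky = (1 - p) * utility(c * (1 + g), gamma, u_bar)  # death contributes 0
    return risky > safe

for c in [2, 5, 10, 20, 30, 50]:
    print(f"consumption = {c:>2}: keep innovating? {keep_innovating(c)}")
# Prints True for c up to roughly 20, then False: a rich-enough society
# stops gambling everything for a little more stuff.
```

The switch from True to False is the shutdown point: once consumption is high enough, the marginal utility of more stuff is too small to justify even a tiny risk of losing everything.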

As a concrete illustration (not meant to be a forecast), Jones (2023) imagines a scenario where using artificial intelligence can increase annual GDP per capita growth from 2% per year to 10% per year, but with an annual 1% risk that it kills us all. Jones considers two different models of human preferences. In one of them, increasing our stuff by a given proportion (say, doubling it), always increases our utility by the same amount. If that is how humans balance the tradeoff between stuff and being alive, it implies we would actually take big gambles with our lives for more stuff. Jones' model implies we would let AI run for 40 years, which would increase our income more than 50-fold, but the AI would kill us all with 1/3 probability!

On the other hand, he also considers a model where there is some maximum feasible utility for humans; with more-and-more stuff, we get closer-and-closer to this theoretical maximum, but can never quite reach it. That implies increasing our pile of stuff by a constant proportion increases utility by less and less. If that is how humans balance the tradeoff between having stuff and being alive, we’re much more cautious. Jones’ model implies in this setting we would let AI operate for just 4-5 years. That would increase our income by about 50%, and the AI would kill us all with “just” 4% probability. But after our income grows by 50%, we would be in a position where a 10% increase in our stuff wouldn’t be worth a 1% chance that we lose it all.
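The headline numbers in both scenarios follow from simple compounding and are easy to check. Below is a quick back-of-the-envelope verification; I'm assuming continuously compounded income growth and a constant, independent 1% annual risk, which reproduces the figures to rounding.

```python
import math

def ai_scenario(years, growth=0.10, hazard=0.01):
    """Income multiple and cumulative extinction risk after running AI
    for `years` years, with continuously compounded growth and a
    constant, independent annual risk of catastrophe."""
    income_multiple = math.exp(growth * years)
    p_doom = 1 - (1 - hazard) ** years
    return income_multiple, p_doom

for years in (40, 4):
    m, p = ai_scenario(years)
    print(f"{years:>2} years of AI: income x{m:.1f}, extinction risk {p:.0%}")

# 40 years of AI: income x54.6, extinction risk 33%  (the log-utility case)
#  4 years of AI: income x1.5, extinction risk 4%    (the bounded-utility case)
```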

Different Kinds of Progress

The common result is that, as we get sufficiently rich, we are increasingly willing to sacrifice economic growth in exchange for reduced risks to our lives. That's a good place to start, but it's a bit too blunt an instrument: we actually have more options available than merely "full steam ahead" and "stop!" A variety of papers - including Jones (2016) - take a more nuanced approach and imagine there are two kinds of technology. The first is as described above: it increases our stuff, but doesn't help (and may hurt) our health. The second is a "safety" technology: it doesn't increase our stuff, but it does increase our probability of survival.

“Safety” technology is a big category. Plausible technologies in this category could include:

  • Life-saving medical technology

  • Seatbelts and parachutes

  • Renewable energy

  • Carbon capture and removal technology

  • Crimefighting technology

  • Organizational innovations that reduce the prospects of inadvertent nuclear first strikes

  • AI alignment research

And many others. The common denominator is that safety technologies reduce dangers to us as individuals, or as a species, but generate less economic growth than normal technologies.

In addition to the model discussed above, Jones (2016) builds a second model where scientists face a choice about what kind of technologies to work on. The setup starts from a standard model of economic growth, where technological progress does not tend to increase your risk of dying (whew!). But we still do die in this model, and Jones assumes people can reduce their probability of dying by purchasing safety technologies. Scientists and inventors, in turn, can choose to work on "normal" technology that makes people richer, or safety technology, which makes them live longer. There's a market for each.

This gives you a result similar in spirit to the one discussed above: as people get richer, the tradeoff between stuff and survival tilts increasingly towards survival. If people get tired of stuff, growth no longer grinds to a halt; instead, we siphon more and more R&D effort away from normal technology and onto safety. That tends to slow, but not stop, our growth in income relative to what it would be in standard models.
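A toy simulation can convey the flavor of this result. The functional forms below are my own invented assumptions (not Jones'), chosen only so that the safety share of R&D rises with income: growth slows as R&D shifts, but never stops, while mortality steadily falls.

```python
# Invented toy dynamics: richer societies devote a larger share s of R&D
# to safety; only the remaining (1 - s) share raises income, while the
# safety share accumulates life-saving knowledge.
income, safety_knowledge = 1.0, 1.0
base_growth = 0.03  # income growth rate if all R&D targeted normal technology

for year in range(201):
    s = income / (income + 10)               # safety share of R&D rises with income
    income *= 1 + base_growth * (1 - s)      # normal R&D: more stuff
    safety_knowledge *= 1 + base_growth * s  # safety R&D: lower mortality
    if year % 50 == 0:
        print(f"year {year:3d}: income {income:6.2f}, safety R&D share {s:.0%}, "
              f"mortality index {1 / safety_knowledge:.2f}")
```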

One of the interesting propositions in Jones (2016) is that this transition towards safety research is actually well underway. To make this case, Jones presents a variety of evidence on R&D and spending on health. Here, I present a few charts to support the argument that wealthier societies start to favor safety at the expense of income growth, with more recent data than was available to Jones.

To start, one readily available proxy for R&D effort by technological sector is patents.2 Below, we have the share of patents for drugs and medical devices, as a share of all patents, which has risen from a long-run average of around 2.0% to over 10%. Note that while this is restricted to patents granted in the USA, a large share of these patents belong to foreign firms and inventors seeking protection for their inventions in US markets.

US medical technology patents as a share of all patents. Medical patents defined by US patent classification codes, using the NBER classification. Data from Our World in Data.

Turning to public sector R&D, for a variety of countries the OECD has tracked how much non-defense government research is directed towards health and environmental objectives since 1981. While the rate of increase varies, there is a general tendency for countries to increase the share of publicly supported R&D that goes towards safety-related research as they grow richer.

Data from OECD Main Science and Technology Indicators. Note that definitions of environmental/health objectives may not be directly comparable across countries.

Regulatory costs associated with health and safety are another proxy for a society's shift from growth in income to safety and health. In the USA, the costs of major health, safety, and environmental regulations on how businesses carry out their activities must be estimated by law. Singla (2023) uses natural language processing to read these regulatory documents at scale, estimating how the cost of complying with regulation has varied across industries and across time. He finds total compliance costs for major social regulations in the USA have risen from negligible amounts to $1 trillion between 1970 and 2018.

In essence, as we get richer, life becomes dearer and worth spending more to protect, while the opportunity cost of investing in safety falls, because more stuff doesn't excite us as much as it used to. And that process appears to have been underway for at least several decades now.

Are we doomed?

The first models discussed implied that humans would eventually tire of economic growth (if the utility we get from more stuff diminishes quickly enough as we grow richer) and halt it to preserve our safety. The more sophisticated model in Jones (2016) captures the same idea, but in a more realistic way that doesn't require a hard stop to economic growth.

If all this is right, then humanity should get progressively safer over time. But that doesn't feel right; we have only had the chance to really wipe ourselves out since we invented nuclear weapons, which was pretty recent in the scope of human history. That particular danger seems to have ebbed, but more for reasons related to politics than because we invented new safety technologies that mitigated the dangers of nuclear weapons (though we did do that as well). Meanwhile, the dangers posed by other new technologies, like genetically engineered viruses, seem to have grown.

Aschenbrenner (2020) builds on Jones (2016) with a model that captures this feeling of rising danger amid an increasing R&D focus on technologies that preserve our health and safety. In Jones' more advanced model, discussed in the last section, normal technological progress wasn't dangerous; it just didn't reduce mortality. Aschenbrenner (2020) modifies this assumption, supposing that normal technological progress does in fact make the world more dangerous. In particular, he supposes that larger economies are more dangerous, because larger economies tend to have more advanced technology (think nuclear weapons and bioterrorism) and more people (and it only takes one bad actor with a super weapon to mess everything up for the rest of us).

When economic growth increases our peril, the slow and steady process wherein a wealthier society invests more and more R&D into protecting our lives may be insufficient to counteract the rising dangers of economic growth. That can give us a world where we focus more and more on developing safety technology, but still find ourselves in ever greater peril. It all depends on the unknowable future path of technological progress. Maybe we can pivot to safety fast enough to offset the rising danger from (slowing) normal technological progress and save ourselves, but maybe not. We don't really know which kind of world we're living in, so the paper suggests we act like we live in the one where our actions can make a difference.

Aschenbrenner’s model has another interesting feature. As with Jones (2016), under plausible conditions, the richer we get, the more we pivot to safety. Under the right conditions, that will increase the potency of safety technologies enough to secure our future in the long run. But to get to the long run, we have to first make it through the short run. And the short run is very dangerous; it is a world where technology is dangerous enough to destroy us all, but where we have not yet become rich enough to invest heavily in safety technology. Aschenbrenner, borrowing terminology first used elsewhere, calls this the “time of perils.”

If we’re in a time of perils, the implications for the pace of technological progress are a bit counter-intuitive. Suppose there is some package of reforms we could pass that we believe would accelerate normal technological progress; reforms such as increasing high skilled immigration or increasing funding for science or building a new knowledge resources. If we’re in the time of perils, is this a good idea? Since technology is so dangerous, it would be natural to think that we would want to slow or even stop the pace of technology, since technology makes the world more dangerous.

But in fact, this isn’t necessarily the case in Aschenbrenner’s model. If we are in a time of perils, the longer we spend in this dangerous state of the world, the more likely it is that we have a bad run of luck and destroy ourselves. We want to race through to the other side as fast as possible, by doing everything possible to accelerate economic growth. That will help us more quickly reach wealth levels that lead humanity to pivot more aggressively towards safety.

Maybe you can have your cake and eat it

This discussion has been a bit abstract so far, so let's close by coming back down to earth and focusing on a present-day example of dangerous technology: fossil fuels.

Climate change induced by fossil fuel emissions exhibits basically all of the themes we've been discussing so far.

  • Conventional economic growth - fueled by fossil fuels - makes us richer but puts our future in peril.

  • There are alternative safety technologies that tend to give us less conventional growth but mitigate climate change: for instance, renewable energy, more efficient uses of energy, and carbon removal technologies.

  • As society has grown richer, we have increasingly pivoted into climate change mitigation research. In the case of climate change, this shift is not purely due to growing wealth; it also reflects the mounting evidence of climate change's harms. Even so, we expect the richest countries, rather than the poorest, to lead the way in investing in climate-related technologies.

  • Even though carbon emissions in rich countries are falling (see figure below for annual CO2 emissions among OECD countries), it's far from clear they are falling fast enough to avert major damages from climate change.

Many, many papers have thought about the problem of trading off economic growth against safety in the context of climate change. One example closely related to the papers discussed above is Acemoglu et al. (2012).

In this paper, Acemoglu and coauthors assume there are two technologies used to produce stuff: one based on fossil fuels, and another based on renewable energy. Use of fossil fuel technology leads to carbon emissions that, left unchecked, result in a climate catastrophe that basically destroys civilization. So we better pivot to safety (which, in this case, is renewable energy technology).

The trouble is, fossil fuel technology is more advanced than renewable energy. So, as in the other papers considered, switching to the safety technology (renewable energy) entails giving up some stuff in exchange for greater safety. But a key difference between this paper and some of the others is that renewable energy isn't purely about safety; it still generates energy that can be used for normal economic activity. Indeed, if renewable energy could produce energy as cheaply as fossil fuels, there would be no problem at all: people would voluntarily switch to renewable energy, and there would be no existential risk from climate change.

So in this paper, a key issue is how you get renewable energy to be as efficient as conventional fossil fuels at creating energy. That requires R&D to be disproportionately directed towards renewable energy. But there's a problem: the incentive to develop a technology responds to the size of the market for that technology.3 That can create a self-fulfilling doom loop. Simplifying their model a bit, it looks like this:

  1. Renewable energy is less efficient than fossil fuels

  2. In the absence of government support, no one uses renewable energy and everyone uses fossil fuels

  3. R&D to improve the efficiency of fossil fuels is profitable, since everyone uses fossil fuels and there is a big market for improvements.

  4. R&D to improve renewable energy is not very profitable, since no one uses it. To find a market, you would need to surpass fossil fuel efficiency, which is quite difficult.

  5. Because R&D is disproportionately directed to fossil fuels, fossil fuels become more efficient and renewable energy falls even farther behind fossil fuels.

  6. Repeat.

Thus, in the absence of government intervention, renewable energy falls further and further behind, and climate catastrophe is inevitable.

But there is also some good news. The thing about feedback loops is that they work in two directions. If you can disrupt the loop sufficiently, you can get the dynamics to work for you, instead of against you. For example, Acemoglu and coauthors show that a combination of carbon taxes on fossil fuels and subsidies for renewable energy R&D will push research disproportionately towards renewable energy. Eventually renewable energy catches up with and surpasses fossil fuels, at which point there may no longer be any need for government intervention at all. Another implication of the paper is that the longer the government hesitates to act, the more costly the intervention will be, because the gap between renewable energy and fossil fuels grows over time.
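Both dynamics - the doom loop and its subsidy-driven reversal - show up in a toy simulation. This is my sketch of the directed-technical-change mechanism with invented parameters, not the paper's actual model: R&D chases the bigger market, productivity compounds wherever the R&D goes, and a subsidy shifts research effort toward clean energy.

```python
def energy_race(subsidy=0.0, years=50):
    """R&D chases the bigger market: the share of research directed at
    clean energy equals its market share plus any subsidy, and each
    sector's productivity compounds with the R&D it attracts.
    All parameters are invented for illustration."""
    dirty, clean = 1.0, 0.5  # fossil fuels start ahead
    for _ in range(years):
        clean_share = min(1.0, clean / (clean + dirty) + subsidy)
        clean *= 1 + 0.06 * clean_share
        dirty *= 1 + 0.06 * (1 - clean_share)
    return dirty, clean

for subsidy in (0.0, 0.25):
    dirty, clean = energy_race(subsidy)
    print(f"subsidy {subsidy:.0%}: dirty {dirty:.1f}, clean {clean:.1f}")

# With no subsidy, fossil fuels pull further ahead and clean energy
# stagnates; with one, clean energy overtakes within a few decades,
# after which the feedback loop sustains its lead on its own.
```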

This is a more optimistic outcome than in the other papers considered. In a sense, if we can develop the right kinds of technology, we can have more stuff without more risk. We can have our cake and eat it too.

Acemoglu et al. (2012) is about climate change, but some of the lessons can be generalized to other variants of dangerous technological progress. For example, in the AI community, there's a lot of focus on developing artificial intelligence that is aligned with human interests. We want an AI that does what we ask, and doesn't do things that might harm us, whether that is telling someone how to make a bioweapon, or trying to take over the world itself. If we can solve that problem, we can use AI as much as we want. Acemoglu et al. (2012) suggests a good policy here is to do what we can to reduce the market for current unaligned versions of AI,4 while giving large subsidies to research on alignment, with the hope of eventually pushing the aligned version of AI ahead of its unaligned counterparts. The paper also suggests the faster we do this, the less costly it will be.

More generally, whenever there are good and bad forms of a technology, a strong and well-designed response from the government can mitigate risk without permanent sacrifices to income. But this isn't always the case: Acemoglu and coauthors also consider the possibility that there really are no substitutes for fossil fuels. If that's the case, they get a result not too dissimilar from Aschenbrenner and Jones, where the best you can do involves a permanent growth slowdown (or even cessation).




Articles cited

Jones, Charles. 2016. Life and Growth. Journal of Political Economy 124(2): 539-578. http://dx.doi.org/10.1086/684750

Jones, Charles. 2023. The A.I. Dilemma: Growth versus Existential Risk. Working paper.

Singla, Shikhar. 2023. Regulatory Costs and Market Power. LawFin Working Paper 47. http://dx.doi.org/10.2139/ssrn.4368609

Aschenbrenner, Leopold. 2020. Existential Risk and Growth. Global Priorities Institute Working Paper 6-2020.

Acemoglu, Daron, Philippe Aghion, Leonardo Bursztyn, and David Hemous. 2012. The Environment and Directed Technical Change. American Economic Review 102(1): 131-166. http://dx.doi.org/10.1257/aer.102.1.131
