
Are Technologies Inevitable?

On Path Dependency

Published on Oct 30, 2022

In a 1989 book, the biologist Stephen Jay Gould posed a thought experiment:

I call this experiment “replaying life’s tape.” You press the rewind button and, making sure you thoroughly erase everything that actually happened, go back to any time and place in the past… then let the tape run again and see if the repetition looks at all like the original.

p48, Wonderful Life

Gould’s main argument is:

…any replay of the tape would lead evolution down a pathway radically different from the road actually taken… Alter any early event, ever so slightly and without apparent importance at the time, and evolution cascades into a radically different channel.

p51, Wonderful Life

Gould is interested in the role of contingency in the history of life. But we can ask the same question about technology. Suppose in some parallel universe history proceeded down a quite different path from our own, shortly after Homo sapiens evolved. If we fast forward to 2022 of that universe, how different would the technological stratum of that parallel universe be from our own? Would they have invented the wheel? Steam engines? Railroads? Cars? Computers? Internet? Social media? Or would their technologies rely on principles entirely alien to us? In other words, once humans find themselves in a place where technological improvement is the rule (hardly a given!), is the form of the technology they create inevitable? Or is it the stuff of contingency and accident?

In academic lingo, this is a question about path dependency. How much path dependency is there in technology? If path dependency is strong, where you start has a big effect on where you end up: contingency is also strong. But if path dependency is weak, all roads lead to the same place, so to speak. Contingency is weak.

Some people find this kind of thing inherently fun to speculate about. It’s also an interesting way to think through the drivers of innovation more generally. But at the same time, I don’t think this is a purely speculative exercise. My original motivation for writing it was actually related to a policy question. How well should we expect policies that try to affect the direction of innovation to work? How much can we really direct and steer technological progress?

As we’ll see, the question of contingency in our technological history is also related to the question of how much remains to be discovered. Do we have much scope to increase the space of scientific and technological ideas we explore? Or do we just about have everything covered, and further investigation would mostly be duplicating work that is already underway?

I’ll argue in the following that path dependency is probably quite strong, but not without limits. We can probably have a big impact on the timing, sequence, and details of technologies, but I suspect major technological paradigms will tend to show up eventually, in one way or another. Rerun history and I doubt you’ll find the technological stratum operating on principles entirely foreign to us. But that still leaves enormous scope for technology policy to matter; policies to steer technology probably can exert a big influence on the direction of our society’s technological substrate.

The rest of the post is divided into two main parts. First, I present a set of arguments that cumulatively make the case for very strong path dependency. By the end of this section, readers may be tempted to adopt a view close to Gould’s: any change in our history might lead to radically different trajectories. I think this actually goes too far. In the second part of the essay, I rein things in a bit by presenting a few arguments for limits to strong path dependency.

One more thing before we jump in. I come to this question from the perspective of a certain kind of social scientist and my arguments are rooted in social science. An alternative way to tackle this question could have drawn on specific examples of roads that were open but which were not taken in the history of science and technology. I think such a piece would be fascinating to read, but I don’t think I’m the person to write such a piece. Still, I would be curious to learn if such an approach yields similar or different conclusions. Email me if you know of anything like this!

Part One: The Case for Strong Path Dependency

Empirical Evidence

This is an essay heavy on theoretical arguments (though I think the plausibility of those arguments is still grounded in empirical evidence), but we will start with what the empirics can tell us. Ideally we would like to take two identical societies, cut them off from each other, and see how their technologies diverge (or do not) over the subsequent 10, 100, or 1,000 years. We can’t do anything like that, but we can look at three different miniature versions of this ideal.

To begin, one series of studies1 looks at the contingency associated with the unexpected death of a researcher. Imagine you have two different academic mini-fields in the life sciences, each of which contains a star scientist. In one of these fields, the star scientist unexpectedly dies; in the other, they live. How do the fields diverge from this accident forward? If they look indistinguishable from each other, that suggests the trajectory of scientific discovery is robust to accidents of history. If they differ, the opposite.

One way the death of a researcher might not matter is if their colleagues step in to make the same discoveries that would have been made otherwise. But we see little evidence of this. When an elite scientist dies, their collaborators stop doing so much work in the area, as compared to the collaborators of otherwise similar elite scientists who do not die. Moreover, this effect is strongest for those working on the most similar areas, who we might expect would be most likely to serve as “backup” discoverers.

What about other people? When an elite scientist dies, new entrants tend to enter the area, but their work seems to differ from the work of elite scientists who live: the entrants’ work references the existing papers in the micro-field less, it brings in new concepts, and it results in more highly cited papers. In other words, at the level of these micro-fields, it does seem like accident and contingency play a role in the evolution of science. If a scientist dies unexpectedly, it changes the set of discoveries that subsequently happen.

Our second set of empirical evidence comes from different nation states. Even though nation states are interconnected in the modern world, we find they all tend to specialize in quite different technologies. We could interpret that as evidence that different human societies climb the technology tree in different ways, and that if they were not connected to each other by trade and immigration, they might diverge more and more over time. That could also imply strong path dependence.

But we should be careful about interpreting the data this way. An alternative and quite reasonable explanation is that, in fact, each nation has the capability to invent the same technologies, but each nation chooses to focus on a narrow specialty because they know they can trade with other nations. If I’m Taiwan and I’m good at inventing new kinds of semiconductors, then it may make more sense for me to double down on that and to simply buy mRNA vaccines from a nation that specializes in that kind of innovation, rather than try to develop both those capabilities domestically. In this case, the fact that some nations develop mRNA vaccines and others semiconductors doesn’t mean in some runs of history we’ll develop one but not the other.

But part of these international differences also seems to be down to path dependent luck, rather than purely the outcome of nations rationally choosing to specialize in different technologies. We can see this pretty clearly from migration patterns.2 It’s a quite robust finding that if your country lucks into getting access to new kinds of knowledge, because a policy change or geopolitics led to international migration of inventors skilled in that knowledge, then that knowledge takes root in the new soil. For instance, getting an accidental dose of chemistry knowledge from migrants skilled in chemistry encourages the creation of a domestic chemistry innovation industry. That implies the accidents of which technologies we discover, as a global society, also matter a lot for the ultimate direction of technology.

Lastly, geopolitics provides a few other cases where the scientific community is cleaved in two, each half evolving in parallel with only limited communication across the divide.3 Studies looking at science during World War I and mathematics during the Cold War both find significant divergence occurs pretty rapidly when the scientific community is fractured. The extent of divergence is difficult to quantify, but at least for Cold War mathematics it was large enough that, when the field was reunified in the wake of the collapse of the USSR, the impact on mathematics was noticeable both in anecdote and in quantitative data.4 Again, that is consistent with the notion that if we reran history, things might have been quite different.

So it seems undeniable that there must be some path dependence. But these examples aren’t able to tell us much about how strong it is. For that, we need to look to more complicated arguments.

The Landscape of Technological Possibility

For things in a parallel universe to be really different, it needs to be the case that many different technological pathways are feasible, under the laws of physics that govern a world where humans evolved. Is that plausible?

At base, technologies depend on identifying and then exploiting various regularities in the world. So if we can get a sense of how many different regularities there are out there, that might give us a sense of how many different kinds of technology are possible. Purely for the sake of illustration, consider the set of equations that are named after their discoverer. As of February 2022, Wikipedia lists 272 such named equations, ranging from Clarke’s equation (describing combustion) to the Young-Laplace equation (describing fluid dynamics). Let’s just go with that for now and assume it gives us a rough estimate of how many exploitable regularities there are in the world. A couple hundred regularities isn’t that many. If technologies are just about applying these regularities towards different purposes, and if parallel histories and futures are governed by the same regularities, that might lead us to think most societies will end up discovering and building the same technologies eventually.

But a couple hundred regularities to build on isn’t really the right framework. That’s because technologies exploit more than one regularity in nature at a time. One definition of technology that I like is from Brian Arthur, who roughly argues that technologies orchestrate and coordinate combinations of regularities in nature. The regularities in nature are like lego building blocks that we connect up to do new things. And once things get combinatorial, the number of possibilities change quickly.

If we’ve got 272 regularities to work with, for example, then the number of possible combinations of these regularities is two, raised to the power of 272 (since every regularity can be used or not used: two options). But 2^272 is more than some estimates of the number of atoms in the observable universe. And that’s got to be an undercount of the number of possible technologies, since it counts all technologies that use the same principles together as the same thing, even if they orchestrate them in vastly different ways or use them in vastly different magnitudes.
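To make the arithmetic concrete, here is a quick back-of-the-envelope check (just a sketch, using the Wikipedia count of 272 from above and commonly quoted estimates of 10^78 to 10^82 atoms in the observable universe):

```python
# Back-of-the-envelope: subsets of 272 regularities vs. atoms in the universe.
n_regularities = 272

# Each regularity is either used or not used, so the number of possible
# combinations is 2^272 (counting the empty combination).
n_combinations = 2 ** n_regularities

print(f"2^272 is about {n_combinations:.1e}")  # ~7.6e+81

# Estimates of atoms in the observable universe run roughly 10^78 to 10^82.
print(n_combinations > 10 ** 78)               # True
```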

So if technology is about weaving together different combinations of regularities in nature, then there are an astronomical number of possible technologies. There is another way of thinking about this too, but which arrives at similar conclusions.

Instead of thinking of nature giving us regularities that we link up to do stuff, we can think of new technologies as themselves creating new kinds of regularities that can be exploited by subsequent technologies. Just as a grandfather clock exploits the regularity of Earth’s gravity without much worrying about how gravity works, a speedometer exploits the regularity of its time-keeping clock without much worrying about how the clock keeps such accurate time. From this perspective, technologies are still about orchestrating sets of regularities, but the number of available regularities grows as technology keeps giving us new capabilities.

What’s the upshot? There are an inconceivably vast number of possible technologies out there. As Martin Weitzman, who was among the first economists to sketch some of the implications of combinatorial ideas of technology, writes:

Eventually, there are so many different types of materials, or sources of energy, or methods of construction, or anything else, that the number of possible combinations becomes astronomical. The degree of “path dependence” becomes ever greater over time because the number of viable path-idea-combinations not taken - thereby foreclosing the future development of yet further offspring-path-ideas - expands much faster than the rest of the economy. In such a world, there is a rigorous sense in which the state of present technology depends increasingly over time on the random history that determined which parent technologies happened to have been chosen in the past.

The world view offered by this model is in the end antithetical to determinism. Even while showing some rules and regularities, the evolution of technology basically exhibits a declining degree of determinateness. Eventually, there are so many potential new ideas being born every day that we can never hope to realize them all. We end up on just one path taken from an almost incomprehensibly vast universe of ever-branching possibilities.

p357, Recombinant Growth

Does Technology Really Work This Way?

It’s a compelling idea, but we might wonder if there’s much evidence that innovation really works this way, proceeding primarily by combining technologies developed earlier. If we want to move beyond case studies that describe the history of individual inventions (W. Brian Arthur describes several such examples here), patents provide our best evidence of this. Patents are far from synonymous with invention, but they do represent a very long-running set of data that describes in some detail millions of individual inventions. We’ll use evidence from them a fair amount in this essay, though where possible I’ll also draw on other strands of evidence.

There is a fairly extensive line of work that attempts to marry this combinatorial perspective with patent data. To do so, this literature needs to identify some kind of proxy for the underlying ideas and technologies which are combined into new technologies. The main proxy used is the technology classification system devised by the patent office to help organize patents. For example, a patent might be classified as belonging to the “internal combustion engine” class or the “artificial intelligence: neural networks” class. A patent might be classified in both these classes, which could be taken as a sign that it somehow combines these two kinds of technology into a new invention. Alternatively, some papers look at the citations patents make to each other, and assume that a patent citing both an internal combustion engine patent and a neural networks patent is somehow integrating these two kinds of technology.

One thing this literature reliably finds is that patents that combine disparate ideas tend to be important patents. They get cited more heavily, and they seem to promote new and similar patents that build on the combinations they establish.5 That suggests there really is something important about this combinatorial idea.

Another line of work finds evidence of a “hierarchy” of technologies, which is consistent with the notion that combinations of pre-existing technologies enable the creation of new technologies with new capabilities. Again, this literature relies on the patent classification system, but focuses on finding networks wherein patents of one type tend to cite patents of another. This literature finds that when the patents in one technology class heavily cite another, activity in the cited class predicts subsequent patenting activity in the citing class.6 For example, patents classified as “tools” heavily cite “metal working” patents. When there is a surge of activity in metal working patents, that tends to predict a surge in tool patents a few years later. That is consistent with the notion that better metal working technology can be applied to the creation of better tools.

That suggests a hierarchy of technology, with some technologies providing components or regularities or inspiration for others. These same technologies may in turn rely on still other classes to provide new building blocks or principles. Eventually, this process has to circle back on itself or end somewhere. The preceding discussion predicts this process terminates in the regularities inherent in the natural world. One crude way to assess that is to see if a patent cites a scientific paper, since science tends to be concerned with discovering, describing, and understanding the regularities that govern the natural world. Indeed, we do have a variety of evidence that when a particular domain of science grows (for example, because it receives more funding), then there is also an increase in technologies reliant on that science.7 And it is also the case that technologies tend to more heavily cite technologies that are more reliant on science, as evidenced by their citations to the scientific literature.8

Putting this together, we end up with a picture where new combinations of ideas seem to be disproportionately important in the history of technology, and where there is a hierarchy grounded in regularities of the natural world, which I’m proxying by a reliance on science, and climbing up from there.

Necessary but not Sufficient

Once you accept that we’re combining different technologies to create new ones, the logic of combinatorics insists that there must be a vast world of possible technologies. Since human society uses so many different kinds of technologies, an even vaster set of possible technologies is necessary for there to be strong path dependency - to take an extreme case, if there is only one suite of technologies that can be discovered, then in every version of history where technological progress happens, that’s the one we’ll discover. So having many options is necessary for there to be path dependence. But a large technological landscape is not sufficient for path dependency to be strong. It must be not only that different kinds of technological strata are possible, but also that different ones would be selected, if we reran history.

So how do we choose the bundle of technologies that undergird society? Well, a basic principle seems to be that, all else equal, we prefer the technologies that best achieve some desired end. If we take that as a given, then the main question boils down to how many options we consider before we make our choice, and how we decide which options to consider. For example, if we could consider every option and select the best, then history would always look the same. But it turns out we probably do not actually consider most options.

A Shortage of Exploration?

The most direct measure we have of what share of the scientific and technological options are “available” comes from the incidence of multiple simultaneous discoveries. There are tons of famous examples of the same invention or idea being independently discovered by multiple people; that would seem to suggest we may actually be doing so much exploration that the full space of possible inventions and discoveries is being canvassed.9 That would, in turn, imply weak path dependence. In every possible universe, someone always discovers the wheel and everyone else recognizes its utility.

However, when you look at this a bit more systematically, it turns out the incidence of multiple independent discovery may be significantly lower than implied by the prevalence of famous examples. Studies have looked at this issue with surveys, datasets of active research projects, text analysis, and patent infringement claims. In all cases, the incidence of multiple independent discovery is pretty low: the chances a randomly selected project has another person working on the same thing in a given year is probably less than 3% (possibly much less).

That implies lots of ideas might be missed for a very long time. Suppose you are working on a new invention, and there is a 3% chance in each year that someone else is also working on the same thing. If you get hit by a bus and fail to invent the thing you’re working on, the probability that no one else makes the same invention over the following 20 years is about 55%.
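The calculation is simple enough to check directly (a sketch, assuming the 3% annual chance is independent across years):

```python
# If each year there's an independent 3% chance that someone else is
# working on the same invention, the chance nobody picks it up for 20
# straight years is (1 - 0.03)^20.
p_per_year = 0.03
years = 20

p_no_one_else = (1 - p_per_year) ** years
print(f"{p_no_one_else:.1%}")  # ~54.4%, the "about 55%" quoted above
```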

But it’s not actually quite so simple. While it seems to be true the incidence of simultaneous discovery is relatively rare on average, it doesn’t seem to be nearly so rare for ideas that are judged in advance to be important. For example, proteins that have features typically associated with getting more citations also attract more teams of scientists trying to find their structure. Scientists who tend to write more highly cited work also report getting scooped more often. And patents of more valuable inventions seem to face a higher incidence of subsequent patent applications infringing on them. That all suggests to me that ideas we think will be more important get identified as such by more people and hence have a higher chance of multiple independent discovery.

So the question is, how well can we forecast what ideas will be big? It seems to be generally true that scientific discoveries and inventions are highly skewed in their impact: a small number of ideas account for the bulk of human progress. What if we assumed only 3% of ideas were really important and also that we can roughly predict, in advance, which 3% matter? That would generate most of the facts about independent discovery I’ve documented so far: only about 3% of ideas have multiple people working on them, but those 3% include all of the big ideas. If that’s true, then replay the tape of technological history and you’ll be pretty likely to hit on the same big ideas in each run, since multiple people in every run of history identify them.
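Here is a toy simulation of that hypothetical (all the numbers are assumptions for illustration, not estimates): if the important 3% of ideas are identifiable in advance and attract extra teams, you get a low overall rate of multiple discovery even though every big idea has a backup discoverer.

```python
import random

# Toy model of the "predictable 3%" story. Assumptions (invented for
# illustration): 3% of ideas are big, everyone can spot them in advance,
# big ideas attract three independent teams, ordinary ideas attract one.
random.seed(0)
n_ideas = 100_000
is_big = [random.random() < 0.03 for _ in range(n_ideas)]
n_teams = [3 if big else 1 for big in is_big]

share_multiple = sum(t > 1 for t in n_teams) / n_ideas
all_big_covered = all(t > 1 for t, big in zip(n_teams, is_big) if big)

print(f"ideas with multiple teams: {share_multiple:.1%}")            # ~3%
print(f"every big idea has a backup discoverer: {all_big_covered}")  # True
```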

Forecasting hits

But, although we do observe that high impact ideas have a higher probability of being discovered by multiple people, we don’t know if the share of high impact ideas with multiple discoverers is high or low. My sense is the share is probably not too high, because it seems clear that it is in fact quite hard to forecast which will be the big ideas. Venture capital, for example, is an industry built precisely around predicting the next big thing, and even these specialist firms constantly worry about missing it.

Another line of evidence about the difficulty in forecasting the impact of ideas comes from the prevalence of so-called knowledge spillovers. These occur when knowledge created by one party benefits another. We have a lot of evidence that knowledge spillovers are not merely a theoretical possibility, but are ubiquitous in technological innovation.10 Most of this evidence comes from patents in one way or another.

For example, we can look at who benefits when one party decides to do more R&D. When the National Institutes of Health funds research on a particular disease, it’s more likely that a patented medical treatment for a different disease cites the funded research. Or when the US Department of Energy funds research on a particular technological area, it tends to result in additional patents in similar - but not identical - technological areas. And in general, when firms conduct more R&D, that seems to help other firms working on similar topics get more patents or profit. Or we can also look more directly, and try to see where new technologies get their ideas from. A study of tens of thousands of patents for agricultural technology found that they mostly derived their ideas from non-agricultural sources. In every case, it’s quite common for one group to do the research and another to adapt it for their own purposes.

The extreme prevalence of knowledge spillovers is consistent with major difficulties in forecasting the value of different scientific ideas and technological innovations. If it was easy to identify what kinds of discoveries and inventions would be most useful for some sector x, we would expect sector x to just do the R&D themselves, rather than rely on the fortuity of another sector doing the R&D. That would look like most firms relying on internal R&D, which is not what we observe. Now, this argument isn’t the only possible explanation for the ubiquity of knowledge spillovers - you might also just choose to free ride on someone else’s research, if you knew they were going to do it anyway and that it would benefit you - but anecdotally this doesn’t seem to me to be the main story.

Lastly, we have at least a bit of direct evidence about the factors that drive the direction of science, and in many cases they are quite disconnected from any objective metric of how important a discovery might be. The gender composition of a field, for example, seems to matter significantly for the choice of what kinds of things are researched. It is not only that men and women may choose to research different things because they believe different topics are important (as, indeed, any group with different life experiences may come to see the value of things differently), but also that men seem to do different kinds of research if they are operating in a field where women are better represented.11

All this further suggests big path dependency: the landscape of possibilities is massive, and we don’t seem to be exploring enough of it for researchers and inventors to be overlapping extensively on what they choose to work on. If we reran history, we might end up exploring quite different parts of the technological landscape. And, incidentally, this work suggests that if we increased the amount of R&D we do, it’s likely we would have no shortage of stuff to work on that isn’t currently being looked at.

The Evolution of Technology

There is another complication though. The way we explore the scientific and technological landscape of possibilities is not purely random: we disproportionately focus our attention on possibilities that are close to the current state of the art. This can create a strong form of technological path dependence, because once you start going down a path, you stick with it, rather than jumping off it and exploring far-flung corners of the space of technologies.

One of the clearest pieces of evidence for technological lock-in is the statistical regularity that patents in the recent past are a very good predictor of patents in the present. If a particular kind of technology has had 10% more patents in the recent past, a common finding is that it will have 3-10% more patents this year as well.12 To the extent that is true, it means that in any race between two competing technologies, all else equal, the one that’s behind can never catch up, at least in terms of the number of patents generated (not a great metric, I know, but I’ll try to provide more). Once we get started down a particular technology trajectory, we commit!

There are several reasons for this “rich get richer” effect in technologies.

First off, there is a view, expressed in cultural evolution and evolutionary economics models, that much of innovation follows an evolutionary dynamic. People can see which technologies are currently best, and try to improve them, either purposefully or by blind tinkering. That’s probably not a bad strategy - it worked for biology. But it also means that whichever technologies are currently dominant are exactly the ones that the most effort will be directed at improving, further cementing their dominance.
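A minimal simulation makes the lock-in mechanism vivid (a Polya-urn-style sketch, purely illustrative and not calibrated to any of the studies cited here): if each unit of new effort flows to a technology in proportion to its current share of activity, early random leads harden into permanent dominance.

```python
import random

def replay_history(a=10, b=10, steps=10_000):
    """Two rival technologies start even; each new patent goes to A with
    probability equal to A's current share of all patents so far."""
    for _ in range(steps):
        if random.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)  # A's final share

random.seed(1)
# Replay the tape five times: final shares scatter widely, and whichever
# technology pulls ahead early tends to stay ahead.
print([round(replay_history(), 2) for _ in range(5)])
```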

There are a few places we can see this evolutionary dynamic clearly. Computer programming is one field where you can study these dynamics particularly well, since code can be tracked from one program to another almost analogously to DNA. Studies of programming contests have shown, for example, that the most common strategy people use is to make small changes to the current leading program (actually, these changes tend to incorporate elements from other successful programs - technology is combinatorial!).13

Rising production correlates with falling cost

Evolutionary dynamics have also been proposed as the fundamental cause of so-called learning curves. A learning curve is another statistical regularity that has been observed frequently: across dozens of industries, there is a high correlation between total cumulative production of a good and the cost of production. Expressed another way, doubling total production is often associated with a consistent proportional decline in the costs of production. For example, with solar panels, every time the world doubled total installed capacity, cost tended to fall by about 20%. This lasted for decades.
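In code, the solar figures quoted above look like this (a sketch of the standard constant-elasticity learning curve, sometimes called Wright’s law, using the 20%-per-doubling number from the text):

```python
import math

# Learning curve: cost falls by a fixed share with each doubling of
# cumulative production; ~20% per doubling is the solar figure above.
learning_rate = 0.20
b = math.log2(1 / (1 - learning_rate))  # learning elasticity, ~0.32

def unit_cost(cumulative_production, initial_cost=1.0):
    """Wright's law: cost = initial_cost * cumulative_production^(-b)."""
    return initial_cost * cumulative_production ** (-b)

# Ten doublings (1,024x the cumulative output) leave costs at 0.8^10,
# roughly 11% of where they started.
print(f"{unit_cost(2 ** 10):.3f}")  # ~0.107
```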

Now, I happen to think these learning curves are overrated. It turns out this kind of tight correlation can arise for a few different reasons, some of which are kind of spurious.14 But I don’t think there’s nothing there: I do think the costs of producing something probably fall when production is increased, and that the reason is something like an evolutionary dynamic. As you engage in more production, you get more experimentation and tinkering with the production process, and that can lead you to find improvements. At the same time, these improvements get harder to find over time, which is encapsulated in the notion that you have to double total production to get the same benefit (and doubling gets harder over time). There has been some effort to show this kind of evolutionary dynamic can indeed generate learning curves. Moreover, granular studies of what is going on “under the hood” at firms that exhibit learning curves do document various problems being identified and solved, though generally in a more proactive way than happens in biological evolution via mutation.15

Lastly, I suspect evolutionary dynamics such as “copy and tweak the leading technology” help explain another stylized fact about innovation: the number of times a patent gets cited is robustly correlated with its value. For this argument to work, suppose the following three things are true:

  • People can identify the best new patented inventions

  • People mostly follow an inventive strategy of tweaking the best new patented inventions

  • You’re more likely to cite an invention you are tweaking16

If this story is true, it would establish a correlation between the number of citations received and the value of a patent, which we do seem to observe. Though this is hardly proof of much, I include this argument because it seems to apply to all patented innovations, rather than the subsets for which we have some better data.

Other Reasons for Technological Inertia

Another related rationale for why our society seems to commit to certain technologies, rather than exploring more broadly, comes from the market. Private sector R&D is quite responsive to what will make money in the market. Drug companies focus more effort on diseases that affect more people, or people with a greater ability to pay for treatments.17 Car companies develop more fuel efficient cars when the price of fuel rises, or when required to do so to comply with regulations.18

That’s all to be expected; we live in a market economy. But more relevant for the issue of technological lock-in, this market-driven R&D may disproportionately focus on improving technologies that are already close to being market-ready rather than exploring more widely to find new and unusual solutions to market needs. We have particularly good evidence of this in medicine, where we can see that the kinds of factors that have a strong influence on private sector R&D have only a weak influence on more basic science, preclinical trials, and the novelty of drugs that are studied. The evidence from automobiles is less clear, mostly because it hasn’t been studied as much, but there too the majority of fuel efficiency innovations seem to be relatively minor and incremental stuff.

The issue here is that, if you’re looking to improve a technology, there is a bigger market for improving technologies that already have a lot of users. And so the market incentivizes us to do more research on improving technologies that are already dominant. But then it may be that a lot of this market-driven R&D is going to be incremental and tied closely to existing ways of doing things.

Another reason why technologies and scientific fields have so much persistence is that it is quite hard for people trained in one domain to switch to another. In the sciences, some studies have tried to quantify shifts in research direction, and they find the bigger the switch a scientist makes, the less likely they are to publish a top cited paper. We observe a similar thing among inventors: when inventors begin patenting in a new technological class, their patents are usually less valuable by various metrics. If switching into a new field is hard, that creates an incentive for scientists and inventors to keep working on what they are working on. Again, technological paradigms persist.19

Finally, technological inertia can also be sustained because of the high costs of switching to an alternative paradigm. Technologies tend to be useful in a broader ecosystem of complementary technologies and this can stymie the prospects of a new and different technology. A new technology may be better along some metric, but if realizing these benefits requires switching many complementary technologies too, the cost of switching may deter adoption.

This can prevent the new disruptive technology from gaining the kind of traction that would allow it to benefit from the rich-get-richer evolutionary dynamics discussed above. If your technology is better than all the alternatives, but nobody uses it, then it might not be better for long. In evolutionary models of technological progress, it is the most widely used technology that attracts the most R&D attention and continues to improve.

Picking winners

So far, the model of technological history that I’ve presented looks kind of like the following. There is a vast space of possible inventions, and from this set we only actually investigate a small share. We do have some ability to ensure the limited share of inventions we target for R&D effort are more likely to be the high impact ones, but only very imprecisely - many impactful ideas are only investigated by one person, and likely many others are not investigated at all. From among the small set of inventions that we do create, an even smaller share get adopted more widely and then enjoy substantial “rich-get-richer” dynamics. An important issue for path dependency is whether the inventions that get an early lead and then enjoy rich-get-richer dynamics are selected for reasons that are likely to be robust across different runs of history, or whether they are contingent.

We have some quite good evidence on the contingency of technological selection based on global crises. The covid-19 global pandemic, for example, led to a huge expansion in R&D devoted to discovering a vaccine and enabling remote work. The oil shocks of the 1970s led to a massive shift in research related to energy efficiency and sustainability. And World War II accelerated the uptake of advanced manufacturing technologies for aircraft, besides spurring on the development of radar and the atomic bomb.20

The technologies that we ended up with after these crises were to a large degree based on what technologies were feasible to develop at the onset of a crisis. Crises are characterized by an urgent and immediate need - what is to hand will be preferred. But once some technology is jumpstarted by a crisis - whether mRNA vaccines, remote work technology, renewable energy, advanced manufacturing, atomic bombs, or radar - the evolutionary dynamics discussed above can begin to take over. As the crisis ends, these technologies may continue to be developed, and as they improve they may outcompete less developed rival platforms. It may be the case that, had there been no crisis, these rivals would have slowly won out and gone on to enjoy self-amplifying evolutionary feedback. But since they were not ready for primetime when the crisis struck, they were never jumpstarted. If we believe pandemics and geopolitics are contingent events - even if we merely believe their exact timing could differ by a decade or two - then we must also believe there is some degree of contingency in the history of technology.

What about the more mundane decisions of everyday life? Here a case can be made for less contingency. For example, we might think the selection of one technological paradigm over another via market forces is pretty robust, because it represents the collective judgment of many different consumers. If people make their decisions independently of each other and can detect quality differences, even if very imperfectly, then the market will tend to converge on the better product. And in this case, we might expect the same kind of selection to happen again, if we replayed the tape of technological history.

But, even in the case where winning technologies are selected by millions of different decisions that happen in the marketplace, it’s not a given that contingency is low. People do not actually make decisions independently of each other, and the fact that people communicate with each other can allow social contagion effects to introduce radical contingency, as viral successes ride the contingent waves of consumer enthusiasm.

A famous 2006 study by Salganik, Dodds, and Watts showed this explicitly in an experimental context where participants chose which songs to download. Although this study is not about innovation, it is quite a close approximation to our experimental ideal: each participant was given the same set of 48 novel (to them) songs to download, but participants were grouped into one of eight different social networks. Within each network, participants could see what other people in the network chose to download, providing a limited channel for social contagion effects to play out. Across these eight different social networks, the study observed how the rankings of songs and the number of downloads varied. In essence, they replayed the tape of history eight times, and assessed how many times the same songs ended up on top or bottom. The paper also assigned some participants to a no-social-network setting, so they could see how songs were preferred when everyone had to make decisions independently.

The result: there was a ton of variation, and it was much harder to predict which songs would rise to the top in the worlds where peers could influence each other’s choices. Things were not totally chaotic - the best songs (as rated by people in the no-social-network setting) were more likely to rise to the top than the worst songs - but there was still a great deal of contingency and unpredictability.
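To see how little social influence it takes to generate this kind of divergence, here is a toy replay of a similar mechanism (this is not the Salganik, Dodds, and Watts protocol; the appeal values and the popularity-weighting rule are invented for illustration):

```python
import random

n_songs, n_listeners = 48, 2_000
random.seed(2)
appeal = [random.random() for _ in range(n_songs)]  # each song's "true" quality

def replay_world():
    """Listeners pick songs with probability proportional to
    appeal x current popularity: a simple social-contagion rule."""
    downloads = [1] * n_songs
    for _ in range(n_listeners):
        weights = [appeal[i] * downloads[i] for i in range(n_songs)]
        pick = random.choices(range(n_songs), weights=weights)[0]
        downloads[pick] += 1
    return max(range(n_songs), key=downloads.__getitem__)

# Eight replays of the same world: the chart-topping song varies from run
# to run, though higher-appeal songs win more often than low-appeal ones.
print([replay_world() for _ in range(8)])
```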

An Interlude on Evolution and Path Dependency

We opened this essay looking at a few empirical cases that approximate the ideal of replaying the tape of scientific and technological history, albeit at quite small scale. But if we view technological progress as an evolutionary process occurring in a gigantic space of possibilities, then the history of biology could potentially provide a much larger scale test case. After all, life also exists within a giant space of possibilities (which can be shown by similar combinatorial arguments) which it navigates using an evolutionary algorithm.

As evidence for radical contingency in the history of life, Gould pointed to the divergent paths of the evolution of life in Australia and South America during the Eocene, when each was a large island continent. During this era, each landmass lacked placental carnivores, and quite different apex predators evolved in the different island continents: large birds in South America, marsupials in Australia. Gould views this as pretty direct evidence of the radical contingency of evolution: rerun history and maybe you get dominant birds, maybe dominant marsupials - who can say?

Hallucigenia as illustrated by Marianne Collins in Wonderful Life by Stephen Jay Gould (figure 3.34)

More broadly, Gould draws on research on the Burgess Shale to argue “any replay of the tape would lead evolution down a pathway radically different from the road actually taken.”(p51) His basic argument was that fossil discoveries from the early history of life in the Burgess Shale revealed a startlingly diverse population of body plans, including many forms that are not present today. The strange Hallucigenia illustrated here is one such example. To Gould, this demonstrates the space of possible lifeforms is vast, much vaster than we may think from looking at the animal world today. Gould goes on to argue major extinction events arbitrarily and capriciously eliminate vast swaths of the landscape of possible lifeforms, bottlenecking evolution to proceed from a limited menu of options. Rerun the tape, and if different chunks of the landscape get knocked out by different global catastrophes, things might have been very different. This is not unlike the impact of global crises on the selection of different technologies, discussed above.

Taken together, we have a strong case for radical contingency in the history of technology. When we actually can try to “replay the tape” of technological history, in however small a way, we immediately begin to notice differences. And if we were able to let the tape play for longer, it seems likely these differences could become quite large. This is for a few reasons. First, there is a big space of possibility upon which divergence can play out. Second, there are two layers of contingency that affect which options from the set of technological possibility are actually realized. The first layer is what we actually discover or invent in the first place. While we probably have some ability to direct our R&D efforts to topics most likely to yield high value, our ability to forecast which topics will yield hits is probably fairly limited. The second layer is the selection of which discoveries and inventions get taken up by society and become the focus of persistent follow-on research effort. That process too is beset by the contingency of world events and social contagion dynamics. And yet, despite this contingency, we seem to commit to develop the selected technology with quite a lot of persistence.

How different could things be? So far, we see no reason not to believe the sky’s the limit. However, in the second (shorter) section of this essay, we’ll look at a few arguments for why this process probably does, in fact, have some significant limits.

Part Two: The Limits of Path Dependence

So far, we’ve argued that, like the history of life, the history of technology is characterized by a gigantic space of possibility and that at least in some ways our approach to exploring this space is evolutionary. Gould believed the history of life was rife with wild contingency. Should we draw the same conclusion about the history of technology?

Not necessarily. In fact, Gould’s position is hardly an unchallenged consensus. Indeed, Simon Conway Morris, one of the major figures behind the Burgess Shale research upon which Gould bases his argument, challenged Gould’s interpretation of his research in a follow-up book The Crucible of Creation. Conway Morris writes (p139):

…at the heart of Wonderful Life are Gould’s deliberations on the roles of contingencies in evolution. Rather than denying their operation—and that would be futile—it is more important to decide whether a myriad of possible evolutionary pathways, all dogged by the twists and turns of historical circumstances, will end up with wildly different alternative worlds. In fact the constraints we see on evolution suggests that underlying the apparent riot of forms there is an interesting predictability. This suggests that the role of contingency in individual history has little bearing on the likelihood of the emergence of a particular biological property.

Hallucigenia as illustrated in Figure 19 of The Crucible of Creation

For Conway Morris, research that postdates Gould’s book tempers his conclusion about how many varieties of life actually are documented in the Burgess Shale. For one simple illustration of their differences, consider this illustration of the same Hallucigenia depicted above. Where Gould saw an alien walking on spiky stilts, subsequent research sees a more familiar animal protected by spikes along its back. More broadly, Conway Morris argues that though there may be an enormous array of possibilities for life, most of the possibilities are bad and life has many paths to the relatively small number that are good. This is an argument that has continued to find supporters up through the present day.21 I take two things away from this debate:

  • It is not obvious to biologists that the evolution of life would have delivered very different life forms, if we replayed the tape of evolutionary history

  • If this is so, it is because the vast space of possible life forms is only sparsely populated with evolutionarily fit life forms, and there are many paths to these niches

An analogous argument can be made quite forcefully for science. Ultimately, all scientists are interrogating the same world, and this would also be the case if we replayed the tape of technological progress. As best we can tell, this world seems to be governed by a small set of universal laws. Moreover, it seems there are probably many different paths to discovering these same laws. Even if scientists only occasionally independently embark on exactly the same research project, it may be that many different research roads lead to the same destination. It is hard to imagine we would not eventually discover the various fundamental laws of the universe, though the exact sequence of experiments and theories that pulls us into their orbit may differ dramatically.

We could make a similar but more speculative claim about technology: although the space of technological possibility is vast, it may be that most of the combinations that are possible are just not very good. If that is so, then an evolutionary algorithm for technology might always end up directing us towards technologies that fundamentally rely on a core suite of regularities in nature which are themselves nearly always discovered (eventually): electricity, combustion, atomic power, etc.

Innovation is not (just) evolution

But there is a further reason why path dependency in technology must have limits: technological progress is not purely an evolutionary process. A key difference between biological evolution and human innovation is that the latter is a purposeful activity undertaken by reasoning minds. And human minds are not just trying things out at random, like the mutations of biological evolution.

We’ve already encountered this idea in our discussion of multiple independent discovery. While simultaneous independent discovery is fairly rare on the whole, I’ve argued it’s more common for inventions and discoveries that are judged as likely to be important ahead of time. In other words, people have some ability to scan the horizon and identify promising leads. And frequently, many people identify the same lead.

How do people make predictions about what technologies or discoveries will be big though? One way is by using models of how the world works: one such model is the one science gives us.

Indeed, science seems particularly useful for helping people explore areas much farther from the current technological trajectory than might otherwise be wise.22 For example, when inventors venture into a new technological domain, one where they have not previously invented, their inventions tend to be more valuable when they are guided by science (as evidenced by citations to scientific papers on the invention’s patents). And more directly relevant to our discussion of evolutionary strategies for invention, the patents that lie farthest from existing patents - that is, the ones that represent the biggest evolutionary “leaps” - seem to be more reliant on science, as measured both by their propensity to cite scientific papers and by their likelihood of being invented by people with a university affiliation. Lastly, it seems to be easier to convince others of the value of strange and highly novel patents when those patents have a more scientific underpinning.

In all these ways, it looks like science gives us a partial map of the vast technological landscape. This map might be incomplete and low resolution, but it seems to be a lot better than anything biological evolution has to work with. That implies it should be easier for multiple independent runs of technological history to coalesce around the same, best, technologies, as compared to biological evolution. Biology is groping in the dark; invention has a map and a flashlight.

(As an aside, the growth of science may also present a counterbalancing check to the increasing path dependency envisioned by Weitzman; recall, he argued that as time goes on the space of untried combinations grows much more quickly than the economy, and hence the share of the technological landscape we explore shrinks. If science provides a map of that terrain, the expansion of science might mean we are able to survey more and more of that landscape, even if we do not choose to invent most of the combinations therein. If science and our ability to use it to map the terrain of the possible grows fast enough, path dependency could actually weaken over time.)

Scientific models of how the world works aren’t the only edge forward-looking, reasoning inventors have over biological evolution. When biological life forms discover a new trick - lungs for breathing air, wings for flight, etc. - that invention is not available to other life forms unless they independently stumble upon it in the course of their own evolution. In contrast, inventors are able to observe the technological solutions to all sorts of problems and cherry-pick the best ones from any context, so long as they can find a way to adapt them to their own needs. This adaptation of solutions found elsewhere is another way that exploration of the technological landscape is supercharged, relative to biological evolution.

As noted earlier, we call the adaptation of ideas developed elsewhere a knowledge spillover, and these appear to be ubiquitous in innovation.23 That means that in the hunt for the best possible technologies to achieve some end, inventors have access to a lot more than incremental tweaks on the current frontier technology. If you’re an inventor, you can scan the horizon broadly and make huge leaps in the space of technological possibility by adapting solutions discovered by other inventors who are perhaps unconcerned with, and unaware of, your needs. Once again, this makes it more likely that different runs of history might find their way to the same technological solutions to problems.

Another way that knowledge spillovers can lead to reduced path dependency is that they allow a given innovation to emerge from many different possible starting points and still diffuse widely. To illustrate, let’s suppose there are 100 different sectors that could each benefit from the invention of a next generation battery: electric cars, drones, laptops, etc. Suppose each of these sectors has the motivation to discover improved batteries, but the batteries are tricky to invent; each sector has only a 3% probability of actually discovering them.

In the absence of knowledge spillovers, we would expect roughly 3 out of the 100 to independently discover next generation batteries. More importantly, every time we replayed the tape of technological history, it would probably be a quite different set of 3 that discovered these better batteries (and indeed, more or less than 3 would typically discover them in different runs of history). In other words, there would be a lot of contingency in history: most runs look quite different.

With knowledge spillovers, things change a lot. Now suppose that if any of the 100 sectors develops next generation batteries, it is obvious to the remaining 99 that their own sector could benefit from adopting the better batteries. For all 100 sectors to obtain next generation batteries, we now just need 1 of the sectors to discover them. If each sector has a 3% chance of success, the probability at least one succeeds is 95%. That is, in most runs of history, someone discovers next generation batteries, and then everyone adopts them. Now, most runs of history end up in the same place, though they will differ in how they get there.
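The probabilities in this example are easy to verify (a direct calculation of the numbers used above):

```python
# 100 sectors, each with an independent 3% chance of inventing
# next generation batteries.
p, n_sectors = 0.03, 100

# Without spillovers: each sector must invent its own batteries,
# so on average about 3 of the 100 succeed.
expected_independent_discoveries = p * n_sectors

# With spillovers: one success anywhere is enough for everyone.
p_at_least_one = 1 - (1 - p) ** n_sectors

print(expected_independent_discoveries)  # 3.0
print(f"{p_at_least_one:.0%}")           # 95%
```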

Finally, compared to the blind processes of evolution, being a forward-looking reasoning agent also helps humanity escape technological dead-ends. Across a host of domains, we have evidence that science and innovation gets harder, in the sense that it seems to take more and more effort to eke out the same proportional gains.24 Human inventors can recognize that further progress in a very mature technological paradigm will never be very large and can pivot to working in new and more promising domains.

In this sense, technological progress has off-ramps that biological evolution may lack. If we reran history, we might walk down some paths that we never tried in our own history. But if those paths terminate in dead-ends, as most do, forward-looking inventors could abandon them and switch to alternatives that look more promising. That means if we miss a promising technological trajectory, we might come back to it in the future, if the path we did choose turns out not to go where we expected.

Exponential Growth and the Limits of the Possible

The preceding argument suggested three ways that human inventors can survey a much larger set of technological possibilities than a purely evolutionary approach would allow. Rather than just trying stuff at random and seeing how well it works out, human inventors can use science (or reason more generally) to quickly eliminate strategies doomed to failure and to zero in on approaches that might just work. Human inventors can also borrow from the entire package of human technology, drawing on knowledge spillovers from entirely different domains. And humans can terminate further investment in paths that aren’t working well and move into different domains that show more promise.

If we combine this insight with another, we can get an interesting result. As noted earlier, one argument against radical contingency in the history of life is that even though there might be an astronomical number of different possible lifeforms, most of those ways don’t “work” very well. It turns out that a similar view about the value of technologies is one possible explanation for why economic growth has been exponential for so long.

To see the argument, assume new technologies are indeed derived from combinations of pre-existing technological components. This implies that as the number of components available for combination increases, the number of possible combinations grows explosively. Specifically, the number of possible technologies grows at a much faster than exponential rate.

That’s kind of weird, because economists believe economic growth ultimately derives from technological progress, and yet economic growth has been steady and exponential for a long time, rather than exploding at a faster than exponential rate. So there must be some disconnect between the number of possible technologies and the growth rate, since the two do not grow at parallel rates.25

One possible resolution to this disconnect is that finding a better idea gets explosively harder as the number of possible ideas grows. It turns out a quite natural assumption implies this: if you assume the productivity of technologies is distributed according to a boring old normal distribution (or any other thin-tailed distribution), then the math works out so that finding better ideas gets harder just fast enough to offset the growth in the number of possible technologies implied by combinatorial innovation! For this result to work, though, human inventors need to be able to actually consider all the possible inventions implied by combinatorial models of innovation. Perhaps science, reason, and knowledge spillovers let us do that?
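A quick sketch shows why thin tails can do the trick (illustrative only; it uses the standard asymptotic approximation that the expected maximum of N draws from a standard normal grows roughly like sqrt(2 ln N)):

```python
import math

# If idea productivity is drawn from a standard normal, the expected best
# of N draws grows only like sqrt(2 * ln N). So even if the number of
# candidate ideas explodes combinatorially (N = 2^n as components
# accumulate), the best known idea improves gently rather than explosively.
def expected_best(log_n_ideas):
    """Rough asymptotic for the expected max of N standard normal draws."""
    return math.sqrt(2 * log_n_ideas)

for n_components in (10, 100, 1_000):
    log_n_ideas = n_components * math.log(2)  # log of N = 2^n candidate ideas
    print(n_components, round(expected_best(log_n_ideas), 1))
# 10 -> 3.7, 100 -> 11.8, 1000 -> 37.2: a vast explosion in options buys
# only a slow, steady improvement in the frontier.
```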

This doesn’t prove we live in such a world. But if it is meaningful to think of the productivity of technologies as being distributed according to some mathematical distribution, the normal distribution would seem to be a pretty good candidate. It’s the kind of thing that emerges from the averaging of many different processes, for example. And this theory is consistent with the facts that (1) innovation seems to be combinatorial, (2) yet economic growth is only exponential, and (3) we seem to have some capacity for considering a very wide array of technological possibilities. If this model is right, it implies technological progress may become less and less path dependent over time, since a smaller and smaller share of technological possibilities are desirable as we progress.

Strong Path Dependency Within Limits

So, to summarize this long journey, in part one we argued:

  • Small-scale versions of replaying the technology tape point to path dependency being at least big enough to notice

  • The landscape of possible technologies is probably very big because

    • Combinatorial landscapes are very big

    • Technology seems to have an important combinatorial element

  • Our exploration of this space seems a bit haphazard and incomplete

  • From the constrained set of research and invention options actually discovered, an even smaller share gets an early lead, often for highly contingent reasons, and then enjoys persistent rich-get-richer effects from follow-on research

In part two, we tempered the case for path dependency by pointing out that:

  • It may not matter that the landscape of technological possibility is large, if the useful bits of it are small. This may be plausible because

    • This might be the case for biology

    • It is probably possible to discover the small set of universal regularities in nature via many paths

  • Human inventors can survey the space of technological possibility to a much greater degree than biological evolution can

  • In some models, a shrinking share of superior technologies, combined with our ability to survey the growing combinatorial landscape, yields steady exponential growth

At the outset of this essay, I listed three motivations for writing it. First, this kind of speculative exercise is inherently fun for some, and it is a useful way to think about what factors drive technological progress. Second, it has a direct bearing on whether policies to direct and steer technological progress are feasible. Third, it is related to the question of whether an expansion of R&D would generate genuinely new stuff or just end up duplicating existing R&D efforts. I’ll wrap up by returning to those three motivations.

For the readers who think this kind of speculation is inherently interesting: I think if we replayed the tape of human history, we would find that the sequence, timing, and (sometimes significant) details of inventions could be quite different, but that the main technological paradigms we discovered would be discovered there too. We would find steam power, electricity, plastics, and digital computers. But we wouldn’t find QWERTY keyboards; we might not find keyboards at all. It’s tough to quantify this kind of thing in any meaningful way, and of course we can never know for sure, but my suspicion is that the technology of an alternate human history would look about as different from our own as the flora and fauna of Central Asia look from the flora and fauna of the central USA.

For the readers who are interested in the feasibility of technology-steering policy, my conclusion is: yes, it is possible to have a very large impact on the trajectory of a technology, at least over the time horizons that matter to most people working on these issues. To take one concrete example, suppose the oil shocks of the 1970s were the decisive factor in triggering R&D related to renewable energy. What if those shocks had come twenty years sooner and we had begun to invest seriously in renewable energy in the 1950s? Would renewable energy have advanced far enough that the world would now be in a much better place, in terms of its capacity to deploy carbon-free energy? Alternatively, suppose the shocks had arrived twenty years later, in the 1990s. Would the outlook for climate policy look much bleaker than in our own world, because the state of renewable energy technology would be so much farther behind? In every version of the world it seems likely we would eventually discover forms of renewable energy, but the timing matters a lot and has very large implications for social welfare. I think the scale of the different outcomes implied by this thought experiment is on the same order of magnitude as the potential impact of technology policy.

Lastly, on the question of whether there are more ideas left to explore, I think the answer is yes: there is much more out there that we are not investigating. There is little evidence that scientists and inventors are anywhere close to merely duplicating each other’s discoveries or to fully exploring the space of possible technologies. Nor is there much evidence that we are so good at identifying the strong ideas in advance that the untried ones are likely to be unimportant. And the value of the best new ideas could be quite high: the ideas in our constrained little set lay down the tracks for lots of subsequent elaboration and development, and the best ideas can be adapted to myriad contexts that are hard to foresee at the outset.

Note: this essay is continuously updated as relevant articles are added to New Things Under the Sun. To keep abreast of updates, subscribe to the site newsletter.


Outline

Part one: The Case for Strong Path Dependency

Empirical Evidence

Contingency in science

Importing Knowledge

Are ideas getting harder to find because of the burden of knowledge?

The Landscape of Technological Possibility

Does Technology Really Work This Way?

The Best New Ideas Combine Disparate Old Ideas

Upstream Patenting Predicts Downstream Patenting

More science leads to more innovation

Ripples in the River of Knowledge

Necessary but not Sufficient

A Shortage of Exploration

How common is independent discovery?

Forecasting Hits

Knowledge spillovers are a big deal

Gender and what gets researched

The Evolution of Technology

“Patent Stocks” and technological inertia

Progress in Programming as Evolution

Standard Evidence for Learning Curves isn’t Good Enough

Learning curves are tough to use

When extreme necessity is the mother of invention

Measuring Knowledge Spillovers: The Trouble with Patent Citations

Other Reasons for Technological Inertia

Medicine and the Limits of Market Driven Innovation

Pulling more fuel efficient cars into existence

Building a new research field

Picking Winners

When extreme necessity is the mother of invention

An Interlude on Evolution and Path Dependence

Part Two: The Limits of Path Dependence

Innovation is Not (Just) Evolution

Science as a map of unfamiliar terrain

Knowledge spillovers are a big deal

Innovation (mostly) gets harder 

Science is getting harder 

Is technological progress slowing? The case of American agriculture

Exponential Growth and the Limits of the Possible

Combinatorial innovation and technological progress in the very long run

Strong Path Dependency Within Limits


Cited Works

Gould, Stephen Jay. 1989. Wonderful Life: The Burgess Shale and the Nature of History. W.W. Norton and Company.

Arthur, Brian. 2009. The Nature of Technology: What It Is and How It Evolves. Free Press.

Weitzman, Martin L. 1998. Recombinant Growth. Quarterly Journal of Economics 113(2): 331-360. https://doi.org/10.1162/003355398555595

Salganik, Matthew J., Peter Sheridan Dodds, and Duncan J. Watts. 2006. Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market. Science 311(5762): 854-856. https://doi.org/10.1126/science.1121066

Conway Morris, Simon. 1998. The Crucible of Creation: The Burgess Shale and the Rise of Animals. Oxford University Press.

Orgogozo, Virginie. 2015. Replaying the Tape of Life in the 21st Century. Interface Focus 5: 20150057. http://dx.doi.org/10.1098/rsfs.2015.0057

Comments
Juan Mateos-Garcia:

The development of systems that allow cheap experimentation e.g. predictive technologies and simulation tools would have a similar impact.

Charles Yang:

Super interesting synthesis! I shared some examples of real-world path dependency in quantum computing based on my time at ARPA-E: https://charlesyang.substack.com/p/real-life-examples-of-path-criticality