A guide to articles about various models of innovation, and evidence related to their utility.
This post provides a quick overview of claim articles in New Things Under the Sun that explore what, exactly, this thing called innovation is. In practice, that means these posts tend to be concerned with evaluating different models of the innovative process.
Can’t find what you’re looking for? The easiest thing is to just ask me: I’m happy to point you to the best article, if there is a relevant one.
Combinatorial innovation and technological progress in the very long run
The best new ideas combine disparate old ideas
Upstream patenting predicts downstream patenting
Ripples in the river of knowledge
Progress in Programming as Evolution
“Patent Stocks” and Technological Inertia
Learning curves are tough to use
Standard evidence for learning curves isn’t good enough
When technology goes bad
Age and the Nature of Innovation
How Common is Independent Invention?
Contingency and Science
Are ideas getting harder to find because of the burden of knowledge?
Knowledge spillovers are a big deal
Adjacent knowledge is useful
Combinatorial innovation is the notion that technological progress can be understood through the combination of pre-existing ideas or technologies.
This process can lead to sub-exponential growth until an explosion of new ideas takes place.
Some argue this explosion is a potential explanation for the industrial revolution.
Growth does not generally seem to be exploding, and some papers have proposed rationales for this. These could be related to:
limitations in resources for R&D
difficulties finding successively better ideas
cognitive limits
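The combinatorial mechanism described above can be made concrete with a toy count: treat every subset of two or more existing ideas as a candidate new technology. This construction is my own illustration, in the spirit of combinatorial growth models, not a calculation from any paper the post covers.

```python
def possible_combinations(n_ideas):
    """Toy count of candidate new technologies reachable by combining
    existing ideas: every subset of two or more ideas counts once.
    (Illustrative construction, not taken from any cited paper.)
    """
    # all subsets, minus the empty set and the single-idea subsets
    return 2 ** n_ideas - n_ideas - 1

# The space of candidate combinations stays modest at first, then explodes:
for n in (5, 10, 20, 40):
    print(n, possible_combinations(n))
# 5 -> 26, 10 -> 1013, 20 -> 1048555, 40 -> ~1.1 trillion
```

The point of the sketch is the shape, not the numbers: with few ideas the combination space is small, but each new idea roughly doubles it, which is the intuition behind explosive combinatorial growth.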
A number of studies have investigated the relationship between the originality of technology combinations and the success of resulting patents and academic papers.
The studies use different methods to measure originality and find a general consensus that patents and papers with unusual combinations of technology are more likely to be highly cited and successful.
Papers with highly atypical combinations of cited references were also more disruptive.
That said, there may be a bias against unusual combinations in the patent application process, and highly novel papers may be less likely to appear in high-impact journals.
The results suggest that an intermediate level of "conventionality" is best for both patents and academic papers, with a small number of atypical connections being the most successful.
The hierarchical relationships between different technology classes can be used to predict some technology trends.
Some papers use US patent classifications as a way to observe these relationships, with the strength of the link between two classes being given by the probability of one citing the other.
These show that there is a statistically significant relationship between patent activity in upstream classes and the patenting of downstream classes.
This, in turn, lends some support to ways of thinking of technology as hierarchical assemblages of sub-components.
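One simplified way to picture the link-strength measure: take citation counts between patent classes and normalize them into the share of a citing class's citations that go to each cited class. The class names and counts below are hypothetical, and this is an illustration of the general idea rather than the exact measure used in the papers the post describes.

```python
from collections import defaultdict

def citation_link_strengths(citation_counts):
    """Estimate the link strength between patent classes as the share of
    a citing class's citations that go to each cited class.

    `citation_counts` maps (citing_class, cited_class) -> citation count.
    Simplified sketch; not the exact construction in the cited papers.
    """
    totals = defaultdict(int)
    for (citing, _cited), n in citation_counts.items():
        totals[citing] += n
    return {pair: n / totals[pair[0]] for pair, n in citation_counts.items()}

# Hypothetical classes and counts:
links = citation_link_strengths({
    ("engines", "materials"): 30,
    ("engines", "electronics"): 10,
})
# links[("engines", "materials")] == 0.75, so "materials" sits strongly
# upstream of "engines" in this toy network.
```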
Most technological innovations are not directly based on scientific knowledge and principles.
Scientific knowledge can still be embodied in technologies, even if only indirectly.
Two methods of measuring the relationship between science and technology are presented:
the citation distance from a scientific paper to a patent
the hierarchical citation network.
The technologies closer to science tend to be upstream of those farther from science.
Several studies examine the evolution of computer programs and programming languages.
A classic study by Arthur and Polak showed through simulations that technological advance can occur without human inventors and that technological evolution is lumpy, with key components leading to clusters of desirable circuits.
Studies of online programming contests (with real humans now!) find that the most common type of program submitted was a slightly altered version of the current leader, and that copying became the dominant strategy over time.
Applying an evolutionary lens to the history of computer programming languages yields an evolutionary tree of languages. That tree shows innovation in programming languages is lumpy and that, once established, languages tend to follow path-dependent trajectories.
The concept of "patent stock" or "knowledge stock" is a statistical construct built from recent patent history.
Several studies find a positive correlation between last year's patent stock in a given technology and this year's patenting in the same technology.
This correlation could be driven by supply and demand, or R&D spending, or stickiness in the skills and expertise of the workforce, but efforts to control for these do not eliminate this technological inertia.
The post argues it is at least partially picking up the extent of know-how related to a given technology.
While patents are arguably a poor measure of technological know-how, studies have found a weak but real correlation between patent stocks and "total factor productivity" (TFP).
Some studies measure knowledge stocks using alternatives to pure patent counts and obtain similar results.
The post concludes by arguing knowledge begets more knowledge, but with diminishing returns.
Learning curves assume that the cost of producing a product decreases at a consistent rate with increase in cumulative production, due to the learning and experience gained from the production process.
Traditional evidence for the robustness of learning curves is not great, in my view. But there is still probably something to the idea.
Theoretical models of technologies based on biological metaphors can derive learning curves under certain assumptions.
Highly detailed evidence from an automobile plant also fits this model’s assumptions pretty well and supports the existence of learning curves.
World War II approximates the ideal experimental test of learning curves, and provides additional support for them, though not as strong as standard evidence implies.
A"learning curve" states that every doubling of total experience in producing a product results in a consistent decline in per-unit production costs.
While there are many examples of strong correlations between cumulative production and price declines, it is possible that the causality is reversed and that it is price reductions that drive an increase in demand and production.
In real data, it can be difficult to differentiate the relative contribution of cumulative experience and time in reducing costs, as experience and time tend to be closely related.
Analysis of real data on many different products and technologies found that forecasting models based on either learning curves or constant annual progress perform similarly.
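The doubling rule above has a simple functional form, often called Wright's law. Here is a minimal sketch; the 20% learning rate and $100 first-unit cost are illustrative assumptions, not figures from the post.

```python
import math

def unit_cost(c1, learning_rate, n):
    """Cost of the n-th unit under a learning curve: each doubling of
    cumulative production cuts unit cost by `learning_rate`.

    c1 is the cost of the first unit; e.g. learning_rate=0.20 means
    a 20% cost decline per doubling. (Illustrative parameters.)
    """
    b = -math.log2(1.0 - learning_rate)  # progress exponent
    return c1 * n ** (-b)

# With a 20% learning rate, each doubling multiplies unit cost by 0.8:
print(unit_cost(100.0, 0.20, 1))  # ~100
print(unit_cost(100.0, 0.20, 2))  # ~80
print(unit_cost(100.0, 0.20, 4))  # ~64
```

Note how the identification problem described above arises: if production also doubles at a roughly constant calendar rate, this curve is nearly indistinguishable from costs simply falling at a constant annual rate.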
There are reasons to think future technologies may not have a benefit-cost ratio as high as in the past.
Models of economic growth where innovation sometimes results in death show that if the preference for more material goods is not too high, people may choose to stop growth when they are sufficiently wealthy.
In models where inventors can pursue safety-enhancing innovations, richer societies may increasingly focus on this kind of innovation, slowing conventional growth. There is some evidence rich countries are indeed making this decision.
Other models point out this doesn’t imply richer societies are safer, if technological progress also imposes danger.
Climate change illustrates many of these issues in a concrete way.
How do the discoveries of older and younger innovators differ?
Younger scientists are more likely to use recent ideas while older scientists are more likely to draw on a narrower, older set of ideas.
Younger scientists are also more likely to engage in “conceptual” breakthroughs, while older innovators are more likely to engage in “experimental” innovations (where experimental here means something specific and non-standard).
Nobel Prize-winning innovations tend to be clustered among the young for conceptual and theoretical breakthroughs, while experimental breakthroughs tend to be clustered among the old.
Younger scientists also tend to produce work that is more “disruptive” (by one measure of disruption).
A post about the prevalence of simultaneous discovery or reinvention in science and technology.
It covers various studies that have explored this phenomenon in different fields, such as structural biology, evolutionary medicine, patents, and surveys of scientists.
The studies suggest that the annual rate of simultaneous discovery is unlikely to be much higher than 2-3%, but the exact amount is difficult to determine.
Overall, the evidence suggests that important discoveries tend to attract more investigation and are more likely to result in multiple independent discoveries.
The post concludes by arguing that if we can see an idea is going to be important, there is probably a good chance of multiple independent discovery; otherwise, discovery has little built-in redundancy.
This is about how much a particular scientist or group matters in discovering new knowledge.
Evidence from the history of simultaneous inventions suggests that redundancy is low for most details and ideas.
This doesn’t seem to be the case for some important ideas.
Another issue with this argument is scientists may avoid working in areas where their rivals are active.
Studies that investigate the impact of an eminent life scientist's death on the scientific community suggest:
scientists do avoid working on the same topics as eminent colleagues.
But the nature of discoveries does change following a death, suggesting the same ideas are not necessarily always discovered.
Studies of disruptions to communication and collaboration caused by geopolitical conflict find these disruptions also lead to sustained divergences in science.
Perhaps innovation is getting harder because of the burden of knowledge: the idea that pushing the knowledge frontier requires progressively more knowledge as fields mature.
Evidence for the burden of knowledge shows up in a few places:
The age at which people achieve certain innovation milestones is rising
More and more innovation is conducted by larger and larger teams
People are specializing more and more
One study looked at what happened to mathematics when the USSR collapsed and different areas of math in the West suddenly got different infusions of knowledge. The areas with a higher subsequent burden of knowledge ended up with bigger teams and more specialization.
This one is about knowledge spillovers, and how important they are.
The article presents several studies that suggest that if you omit knowledge spillovers you are probably missing half the story.
One study finds that the majority of novel concepts in agricultural patents were already present in non-agricultural patents.
Another finds that the Department of Energy's Small Business Innovation Research (SBIR) program leads to more innovation, as measured by patents, in technology fields other than the one funded.
Studies of grants from the NIH find substantial spillovers in the basic science underlying different diseases.
Large-scale studies of R&D by publicly traded firms indicate the returns to R&D, inclusive of spillovers, are twice as high as private returns.
Argues that knowledge which is close to, but distinct from, existing knowledge tends to be the most useful for innovation, via a few studies:
Researchers randomized to meet other researchers at a conference were most likely to collaborate with those who had an intermediate level of knowledge similarity.
A larger observational study finds people are more likely to cite each other's work and start working on similar topics if they have that same intermediate level of knowledge similarity.
A study of the sources of ideas for patented agricultural technologies again finds near-but-not-identical sources to be unusually well represented.
Temporary assignments in the automobile industry resulted in new ideas more often when the plants were similar to the worker's existing plant.