Is progress slowing down?
Those who favor the stagnation hypothesis point out that, other than computing, most fields have seen only modest, incremental improvements. Our cars, planes, factories, food supply, antibiotics, and so forth are just improved versions of what we had in, say, 1960.
The counter-argument is: Wait, what do you mean, “other than computing?” How can you just ignore the area where we have seen revolutionary progress? Saying, “other than orders of magnitude increases in capacity, performance, and cost; revolutionizing all of communications; connecting every human being on Earth; and putting all the world’s knowledge and culture in every pocket; what has technology done lately?” sounds a lot like “what have the Romans ever done for us?”
The rebuttal to the counter-argument is: Computing is just one area. We used to have revolutionary changes going on in multiple areas at once:
- 1870–1920 saw the electric generator, electric motor, and light bulb; the telephone, wireless, phonograph, and film; the first automobiles and airplanes, and the assembly lines to build them; the first synthetic plastic (Bakelite); the Panama Canal; the Haber-Bosch process; and the germ theory, along with its applications to public health.
- 1920–1970 saw radio, television, radar, and the first computers; the invention of nylon and other plastics; the expansion of mass manufacturing with an explosion of consumer products; penicillin and the golden age of antibiotics; Norman Borlaug’s Green Revolution in agriculture; nuclear power; the interstate highway system, jet airplanes, and the Moon landing.
- 1970–2020 saw … the PC, Internet, smartphone, and GPS; and genetic engineering, including GMOs. It also saw the Apollo project ended, the first supersonic passenger jet launched and then canceled, the promise of nuclear power unfulfilled, a War on Cancer with lackluster results, and similarly modest progress against heart disease. Yes, there were many incremental improvements in a variety of areas that are easy to forget about, but that’s the point—there were fewer revolutionary breakthroughs.
Another way of putting this is that if you look at a 1970 living room vs. a modern one, not much has visibly changed—again, other than the computer (and related technology, such as the big flat screen TV that has replaced your clunky CRT).
If progress is slowing down, what’s causing it?
One hypothesis is that we ate all the low-hanging fruit. This is literally the subtitle of Tyler Cowen’s book on stagnation, and it was Scott Alexander’s conclusion as well.
On the other hand, when I brought up this idea on a panel discussion with Michael Nielsen, he rejected an oversimplified low-hanging fruit analysis, pointing out that when we discover new fields, such as computer science, they open up whole new orchards of low-hanging fruit. Or as he and Patrick Collison wrote in The Atlantic:
Suppose we think of science—the exploration of nature—as similar to the exploration of a new continent. In the early days, little is known. Explorers set out and discover major new features with ease. But gradually they fill in knowledge of the new continent. To make significant discoveries explorers must go to ever-more-remote areas, under ever-more-difficult conditions. Exploration gets harder. In this view, science is a limited frontier, requiring ever more effort to “fill in the map.” One day the map will be near complete, and science will largely be exhausted. In this view, any increase in the difficulty of discovery is intrinsic to the structure of scientific knowledge itself. …
But there’s a different point of view, a point of view in which science is an endless frontier, where there are always new phenomena to be discovered, and major new questions to be answered. …
… the optimistic view is that science is an endless frontier, and we will continue to discover and even create entirely new fields, with their own fundamental questions. If we see a slowing today, it is because science has remained too focused on established fields, where it’s becoming ever harder to make progress. We hope the future will see a more rapid proliferation of new fields, giving rise to major new questions. This is an opportunity for science to accelerate.
I think both of these questions can get confused if we don’t tease apart the S-curves.
Every technology, defined narrowly enough, goes through an S-curve: it starts out small, picks up steam, hits a hockey-stick inflection point, grows exponentially—and then starts to near saturation, slows down, levels off, plateaus. Electricity, for instance, went through an experimental/inventive phase for decades, grew rapidly starting in the 1880s, then leveled off in the early 20th century as power and lighting spread to the whole country. Today, with the power grid providing virtually universal coverage, electricity is not a high-growth industry.
Sustained growth over the long term, over centuries, comes from layering many S-curves on top of each other, each new curve taking off as earlier ones level off.
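A toy sketch can make this layering concrete. Below, each technology is modeled as a logistic curve; the technology names, dates, steepness values, and ceilings are all made up for illustration, not fitted to any data. The point is only that the *sum* of staggered, individually-plateauing S-curves keeps growing.

```python
# Toy model: long-run growth as a stack of overlapping S-curves.
import math

def logistic(t, midpoint, steepness, ceiling):
    """Value of one S-curve (logistic function) at time t."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# Hypothetical technologies: (midpoint year, steepness, ceiling).
# Labels are illustrative assumptions, not historical estimates.
curves = [
    (1890, 0.15, 10),   # e.g. electrification
    (1940, 0.15, 10),   # e.g. mass manufacturing
    (1990, 0.15, 10),   # e.g. computing
]

def total(t):
    """Aggregate 'progress' = sum of all S-curves at time t."""
    return sum(logistic(t, m, s, c) for m, s, c in curves)

for year in range(1860, 2021, 40):
    print(year, round(total(year), 1))
```

Each individual curve flattens out, but as long as new curves keep launching, the total keeps climbing; if the launches stop, aggregate growth stalls even though every existing technology is still "improving."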
As one plateaus, investment and talent shift to new, more promising areas. If Edison and Westinghouse were young men today, they wouldn’t be working on electricity; they’d be hacking up cryptocurrency apps, or building self-driving cars, or tinkering with 3D printing. George Stephenson wouldn’t be building a locomotive named Rocket, but an actual rocket. Pasteur would be doing genetic engineering.
With this lens, we can bring the debates above into sharper focus. Rather than ask whether computing “counts” or whether we’re running out of low-hanging fruit, we can investigate, separately, the size and shape of individual S-curves, vs. the distribution of S-curves. We can ask questions like:
- What is the rate of discovering new S-curves? How has that changed over time?
- Are today’s S-curves, individually, developing faster or slower than before?
- Are the steep parts as steep as they used to be?
- Are they plateauing at a higher or lower level?
- What is the distribution of these characteristics, over the set of all S-curves?
I’m not sure how to begin investigating these questions, since progress is notoriously tricky to measure, and I’m not even sure these are exactly the right questions, or which of them are even well-defined. But I think this is the sort of analysis we need if we’re going to resolve the big issues, instead of having unproductive debates over living rooms and the height of fruit.
I've sometimes wondered if it's possible that computing has so much unclaimed low-hanging fruit that it's currently sucking up the majority of innovative brains, and that progress in other fields will resume to a certain extent once it becomes more difficult to make world-changing inventions in computing.
edit to add: this would line up with my experience that there is a massive undersupply of competent computer programmers relative to available opportunities
Elon Musk is perhaps another piece of evidence for this. Turns out spaceflight, vehicles, and perhaps tunneling and brain-machine interfaces too can all be revolutionized if you get the right team of people working on it. Instead of just saying Elon is amazing, we could say: there is a lot of low-hanging fruit to be picked outside computing because computing has sucked up so much of the talent. Elon is good at finding that fruit and attracting talent to pick it.
Elon Musk is an interesting example (at least with SpaceX and Tesla) because neither of those companies is really developing new technology so much as (thanks to Musk's money) overcoming the local equilibria they were stuck in (a ULA monopoly and the internal-combustion-engine paradigm, respectively). There are certainly some smaller inventions and first real proofs of technology (e.g., landing a rocket upright), but the core of the technology isn't new. Of course, this is a necessary step to advancement, but it still seems fundamentally different.
I'm not sure I agree. Propulsively landing rockets, especially orbital-class rockets, seems pretty freaking new and awesome. Making an electric car that is actually good... well, it doesn't require anything mind-bendingly new, but it requires a ton of small innovations adding up, many more small innovations than normally occur in product development cycles. As for money, Musk has less money than Bezos, for example, but it's SpaceX, not Blue Origin, that's revolutionizing the industry. And of course the established companies have way more money than either Musk or Bezos. I think really it's what I said it was: The ability to attract and motivate top talent.
Would you agree that if Starship gets working, then SpaceX will have developed new technology in the relevant sense?
I like the idea of comparing S-curves. I think it inherently runs into the same low-hanging-fruit problem, though: if the S-curves in computing are steeper, is it because we have faster communication and so can propagate best practices quicker, or is it because there's more low-hanging fruit, so the constraints we run into are easier to break through?
One other way to frame the problem would be to look at the progress along metrics humans care about - happiness, meaning, health, leisure time, etc.
If innovation is (a) doing its job, and (b) increasing, we should be seeing those metrics increase faster than they used to. Of course, this runs into the same low-hanging-fruit problem: maybe industrialization was low-hanging fruit, and it's harder to get increases in leisure time afterward.
It occurs to me that this is a fundamental problem - the rate of innovation depends both on how good you are at innovation, and how hard the innovation is. No matter how you measure the rate, you still need some way to tease apart those two variables.
I will remind people that the book Where is my Flying Car? has a good deal of interesting thoughts on this topic.
Might be relevant to your interests: Wardley Maps, especially the chapter “I wasn’t expecting that.”
Good food for thought. Minor nit: the sources appear twice.
Oh that's weird. I thought they hadn't gotten pasted in properly, so I did them again. Must be an editor bug. Will fix, thanks