When I wrote my post on technological stagnation, the top question I got asked was: So, how do we fix it?
I don’t have the definitive answer, but here’s a starting point. I generally think about the causes of progress on three levels:
Correspondingly, my top three hypotheses for technological stagnation are:
(These are complementary, not mutually exclusive. Incidentally, this is pretty much the same set of factors identified by J. Storrs Hall in Where Is My Flying Car?, which is part of why the book resonated with me so much.)
Inverting these (and changing the order), here are three broad approaches to accelerate progress:
In particular, create a culture that recognizes progress and appreciates it. Some ways to do this:
In particular, provide more decentralized, distributed, heterogenous sources for research funding. Some interesting proposals and experiments along these lines:
Some examples of the problem:
I don’t know how to drive solutions to these problems, but folks at places like the Mercatus Center and the Center for Growth and Opportunity are working on it. (And maybe part of the solution is to create “special economic zones” as charter cities.)
To condense these ideas even further into a pithy formulation, you could call them the three F’s: Progress needs founders, funders, and freedom. By “founders”, I include entrepreneurs who found startups or nonprofits, scientists who found new fields or subfields, and inventors who found new technologies.
These are ways to address stagnation and accelerate progress at a broad level, society-wide. But let me close with a note to anyone in science, engineering or business who has a vision for a specific way to make progress in a particular domain—whether anti-aging, space, energy, or anything else. My message is: Just go for it. Don’t let the funding environment, the regulatory environment, or the culture stop you. Work around barriers or break through them, whatever it takes. The future is counting on you.
Celebrate progress. Maybe parades and fireworks are outdated now, but where, for instance, is the acclaim given to the BioNTech founders? Why aren’t they cultural heroes on the level of Jonas Salk?
You do have German mainstream newspapers writing stories like "In zehn Monaten zum Superhelden" ("Superhero in ten months") with a picture of Ugur Sahin, who founded BioNTech.
Great! Merkel made a good statement also. But the praise should be coming from world leaders everywhere.
I would expect US leaders to focus their praise more on US scientists and German leaders to focus theirs more on German scientists.
It's not random that all the schools named after Jonas Salk are in the US (and I wouldn't have recognized the name myself).
Since the 70s, most developed nations have shifted from a focus on redistribution to regulation and the results have no doubt stifled many promising technological advancements. Unfortunately, it isn't clear to me how this trend can be undone. Rather than unwind regulations, governments still seem hellbent on expanding them. Not even furniture is safe from unneeded and stifling red tape.
These posts, and your whole blog, seem predicated on the assumption that progress is inherently desirable as an end in itself. Since this post is my first exposure to your work, it seems better to ask you where you explain that assumption, rather than attempting to dig through everything else you've ever written and find it myself. I would like to better understand what exactly you mean by "progress", because the systems and beliefs that I attach that term to are not something that it's obviously good to maximize for its own sake.
My resistance to the implication that constant progress would be inherently good is shaped in part like this: For an innovation to improve the world, it has to meet a certain baseline of "try not to make things worse than they were before". When everything is really "bad", by whatever standard of quality you happen to choose for the exercise, there is a vast scope of possible innovations whose end state is "better" than the starting conditions. But from a starting condition that's actually pretty "good", in the sense that almost everyone is fed and housed and clothed and entertained with more options and excess than any of their ancestors enjoyed, fewer of those same possible innovations will pass the same test of "do the likely improvements to society outweigh the possible detriments of its drawbacks?".
So it seems to me that each time "progress" succeeds at raising society's expectations in a particular area, it inherently makes future progress in that area more difficult by imposing additional constraints on future innovations. For instance, a rolling car just has to outperform a horse-drawn carriage by a certain amount to be worth adopting; the hypothetical flying car would have to outperform the rolling car rather than the carriage for it to improve society. The early electrical grid had to beat oil lamps that smoked up the wallpaper and stank and had to be lit by hand, so burning down houses occasionally was not substantially worse than the former status quo; the home nuclear reactor now has to outperform a power distribution system in which having electricity does not increase the risk of a small nuclear accident at your house.
This line of reasoning does have the problem that it implies an asymptote of "good enough" at which further innovations are entirely unneeded, which is so inconsistent with my understanding of humans and society that I suspect I've probably missed something.
Here are some introductory posts that explain why progress matters:
I agree that the bar keeps getting raised, and therefore progress gets more difficult. I don't see why that implies any asymptote. (I wrote in a previous post why exponential growth should be our baseline, even as we pick off low-hanging fruit.)
The lights most people use today have the same color all day. That results in people getting too much blue light at night, which is bad for sleep. They are also too low in intensity, so people get depressed in the winter.
A more modern solution switches to red light in the evening, which results in better sleep, and gives generally more brightness during the day. Light bulbs also have to be changed a lot less often than previously.
There are many problems whose solutions would result in increased quality of life.
The average American spends 6 hours per week cleaning. If that were automated by robots, it would free up 6 hours that could be spent on more enjoyable activities. Besides the time investment, there's also a lot of interpersonal conflict when things are not as clean as the people who cohabit want.
Fast, cheap, nonpolluting, and relatively silent robotaxis could increase quality of life in a city.
Can't speak for Jason, but maybe I can change your mind. IMO, a case for progress can be made pretty simply to anyone who cares about the welfare of people living today and their future welfare. Assuming that, I'll make two observations.
"Try not to make things worse than they were before" has been a classic argument made against nuclear power for decades. Maybe with fewer regulatory constraints, nuclear power could provide more energy for a lower cost than fossil fuels, but is it really worth the tail risk of nuclear proliferation or catastrophic meltdowns? Society decided it wasn't and now we find ourselves in a slow motion global catastrophe while "good enough" living standards remain fundamentally out of reach for most of the world. Perhaps renewables will save us from this mess, but surely not if technological progress were to end today.
That's not to say that all technological progress is good. Asbestos having some really cool insulating properties doesn't mean it was a net benefit. But technological progress in general is desirable if you wish to avoid present "good enough" living conditions from deteriorating and want to make such standards attainable for the whole world.
Technological progress isn't just a chance to do more, it's also often a chance to pivot from one resource to another so as to avoid depletion. Ultimately, freezing it won't insulate present society from the risk of things getting worse. On the contrary, halting progress condemns our society to a slow rot.
Honestly I think globalism is ending it all on its own. Free trade has had the side effect of creating multiple economic powers in competition: China, the USA, the EU, Taiwan, Japan, Korea. And we don't need every technology to move forward to end stagnation. We need just one: a form of AI that can accelerate R&D in itself.
And then we would very rapidly find a world where at least one power has self-replicating robotic swarms, each producing design-optimized products, including more of themselves and, of course, weapons. And any other power, if they don't want to eventually be defeated, will have to invest in the same technology. (At a certain level of scale, self-replicating robotics will trump nuclear weapons, because you could mass-manufacture sufficient defenses and bunkers to fight a nuclear war and win.)
Typically those who subscribe to the premise of The Great Stagnation have a dim perception of progress in the world of bits. The argument goes that digital progress is overhyped in the present day because it's one of few, if not the only industry still exhibiting fast progress.
Zooming out to the wider economy, GDP growth rates have slowed enormously in the developed world since the early 70s and total factor productivity growth has declined to rates possibly not seen since the dawn of the industrial revolution. If these measurements are indeed still a reasonable metric of technological progress in the digital era then advancements in computing have not managed to match midcentury levels of progress.
If you accept this analysis but still bank on AI as the solution moving forward, that means accepting the notion that computing progress hasn't lived up to expectations in the past but will lead to tremendous advancements just on the horizon. Given how many times AI has been hyped only to fall into a stagnant winter soon after, I'm not so sure about this.
Also I don't have any reason to accept "bankunderground" as a credible and accurate measurement of progress. Especially as I can just "look out the window" and see that somehow this means all of China's absurd growth in per citizen productivity just...doesn't show up in the data. Huh.
The data pertains to Britain, not the developing world and the data comes from The Bank of England. Obviously, tremendous economic progress has been made globally since the 70s. But that economic progress is mostly "catching-up" aka developing countries adopting existing technologies. Far less development has happened on the frontier.
Oh. Well, focusing on just Britain is meaningless. Why not focus on Cuba? Or one household down your block? The point is that a "small" country can easily do mediocre for any number of reasons while absurd runaway progress happens elsewhere.
The same trend can be found in every country which was developed by the 70s. Britain is simply a particularly good example because of the amount of record keeping they performed in the 18th and 19th centuries compared to other countries.
However, just looking at data from the 20th century onward, accessible for any developed country, growth at the technological frontier has slowed tremendously since the 70s. Jason Crawford already aggregated a lot of data pertaining to the slowdown here. Long story short, not much technological progress has been made outside of computing in decades.
I thought about this problem a bit more, and let's drop speculation about what may or may not be possible in the future. And just talk about specific professions over the last 50 years.
Anyways, we can go down the list and find a long list of jobs that have changed minimally, if at all. And therefore the productivity per worker cannot be expected to improve, since the amount of human labor needed hasn't shrunk. Some tasks, like nuclear plant worker, have become less productive over time, as more workers are needed per megawatt of power due to more and more long-tail risks being discovered.
And then you can talk about how you might get a meaningful increase in productivity from each of these roles. And, well, it's all coming up AI. I know of no other way. You must build an automated system able to perform most of the task. Some (like schoolteacher) are nearly impossible, others like janitor are surprisingly hard, and some are being automated as we speak. (Amazon Go for retail clerks)
Some tasks, like nuclear plant worker, have become less productive over time, as more workers are needed per megawatt of power due to more and more long-tail risks being discovered.
It's not just about discovering more tail risks but about having a different culture on risk in those companies. One example someone from the industry gave me is that they tell their workers in yearly seminars about how to avoid cutting themselves with paper.
Right. So this is one of those anti-progress patterns I see around. What happens internally to the company is that over time the Very Serious People create some Very Serious Internal Processes, like There Shall Be Risk Management Training (on the prevention of papercuts). And anyone suggesting that maybe they could run the company more efficiently by skipping this training has to argue either (1) that the elders in the company were wrong to institute such training or (2) that they (personally) are pro-risk.
It's hard to be "pro-risk", in the same way that if you spoke against, say, diversity quotas, you are by definition for discrimination. So over time the company adds more and more cruft, while not really deleting much, making it less and less economically efficient. This is why big rich companies have to constantly buy startups for the technology: they are unable to get their own engineers to develop the same tech (because those engineers are too busy/beaten down by mandatory training). And it's why eventually most big rich companies fail, and their assets and technology get bought up by younger companies who, when the merger goes well, basically throw out the legacy trash. (Except when the opposite happens, as in the Boeing-McDonnell Douglas merger.)
restaurant waitstaff, cook - totally unchanged
Ordering at McDonald's is very different than it was in the past. You can now both order and pay digitally.
For cooks, Googling finds https://magazine.rca.asn.au/kitchen-innovations/ . According to it, there are various innovations in commercial kitchens, like induction cooking.
Sure. Point is that this lets you go from 10 workers in a restaurant to 9.5 or other small increments. It's not like the innovation of the tractor and fertilizer and other innovations, which have reduced farmers from 50% of the population (1900) to 2% (today).
To get this with restaurants, the only way is intelligent robotics, at least the only way I can see. Other than just "everyone stops eating restaurant food and starts eating homogenous soylent packets", which we could automate fully with today's tech. Where today a restaurant has 10 workers, it gets replaced with 0.4 workers, who work offsite and respond to escalated customer service calls and escalated maintenance issues. ("Escalated" means the autonomy tried and failed to solve the issue already. While automated maintenance isn't too common, Amazon is experimenting with automated customer service, where in my experience a bot will basically just give a refund if you have any complaint at all about an order.)
Ok, so first, you aren't really talking about progress; you are linking data on productivity per worker, which has gone up over the decades, but at a slower pace. Why is that?
Well, the simplest theory is that suppose there is a class of tasks [A] that are easy to automate, a second class [B] that is hard but feasible to automate with simple computers, and a set [C] of modestly complex tasks with hundreds of thousands of edge cases.
Well, today, almost none of the improvements in AI you have read about are being used where it counts, in factories and warehouses and mines and to control trucks. This is for several reasons, the biggest one being that for a "small" niche market it isn't currently worth the engineering investment, the money is going into autonomous cars, and those aren't finished, either.
So set [A] got automated in the 1970s. Set [B] gets automated slowly, but only where the demand is extremely high for a product using this method, and where the cost of the automation is less than paying thousands of Chinese factory workers instead (they have gotten more expensive). Set [C] is all done by humans, but over time small tricks have reduced how many humans are required.
So that would explain the observation.
TFP doesn't mean productivity per worker. It's designed to identify economic progress which can't be attributed to increases in labor or capital intensification aka technological progress applied to make an economy more efficient. Advances in automation should be captured under such a measurement.
You are saying "improvements in output not accomplished by spending more real dollars in equipment or having more people working".
Hypothetically, if we had sentient robots tomorrow, they would initially be priced extremely high, where the TCO over time of such a system is only slightly less than a worker. Are you positive your metric would correctly account for such a change? This would be a revolutionary improvement that would eventually change everything, but in year 1 the new sentient robots are just doing existing jobs with less labor and very high capital costs.
No it wouldn't. TFP is, in a sense, a lagging indicator. It captures the economic benefits of technological progress but does not evaluate emerging technologies which have yet to make an economic imprint. That said, no AI I'm aware of that presently exists is remotely comparable to a human-level AI. Level 5 self-driving doesn't even exist yet, and once the growth in computational power used for AI falls back to the pace of Moore's Law, the field seems due for a slowdown.
I think the most succinct argument I can make as to why I bank on "AI as the solution moving forward" is I just took a break from my day job. And without revealing any proprietary information, to do just basic tasks and analyze video frames in real time, at low resolution, for objects of interest (aka resnet-50, etc) requires teraflops. We trivially talk about how many "TOPs" a given workload is, aka trillion operations per second. And it takes hundreds to do anything semi-good enough to be useful.
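A back-of-the-envelope sketch of that compute budget (the per-frame cost, frame rate, and stream count are all illustrative assumptions; ~4 GFLOPs is a commonly cited ballpark for a single ResNet-50 forward pass at 224x224):

```python
# Rough compute budget for real-time video inference.
# All figures below are illustrative assumptions.

GFLOPS_PER_FRAME = 4   # assumed cost of one ResNet-50 forward pass
FPS = 30               # frames per second per video stream
STREAMS = 8            # simultaneous low-resolution streams

tflops_needed = GFLOPS_PER_FRAME * FPS * STREAMS / 1000
print(f"Sustained compute: ~{tflops_needed:.2f} TFLOPs/s")
# Real deployments use larger inputs, bigger models, and achieve far
# less than 100% hardware utilization, which is how the budget climbs
# into the tens or hundreds of TOPs described above.
```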
That simply didn't exist during past hype days of AI. It was flat out impossible. There was no future world where the researchers looking into AI in the 1960s could have gotten the results we get today. (which I am sure you will point out are still mediocre, autonomous cars now drive themselves but only when all the conditions are just right). Or in 2010.
So I am just going to take any past hype as the 'rantings of an uncredible madman', regardless of which Ivy League lab they were working out of, and go with recent results as my barometer for when AI is going to really take off. Which are getting rather good.
Anyways, the other piece of this is that all the trends you are linking are observing a process where:
a. technology keeps getting more complicated
b. the human beings trying to improve it are not getting smarter very fast (if at all), and they live finite lives.
So it's perfectly reasonable for real progress to slow over time, except in fields where the technology can help you develop itself, which in some domains of computers has clearly happened. (succinct example: frameworks have made highly sophisticated apps and websites, that would have required an entire studio months of effort 20 years ago to create, doable by one person in a week).
No doubt very significant advances in AI have occurred within the past decade or so. AlphaFold practically "solving" the problem of protein folding, for example, is a hopeful glimmer of technological progress and the promise of artificial intelligence.
However, it remains an open question how far AI will advance before it runs out of track, because the field does appear to be approaching a wall. OpenAI observes that the rate at which added computational power is being supplied to create ever more advanced AI models is far outstripping Moore's Law: it doubles every 3.4 months. This can't be sustained for much longer.
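To see how extreme that pace is, a quick comparison of the two doubling periods (both taken as stated above; the two-year horizon is an arbitrary choice for illustration):

```python
# Compare compute growth under two doubling periods:
# ~3.4 months (OpenAI's observed AI-training-compute trend) vs.
# ~24 months (a Moore's-Law-style baseline).

def growth_factor(months, doubling_period_months):
    """Total multiplicative growth after `months` at the given doubling period."""
    return 2 ** (months / doubling_period_months)

years = 2
ai_trend = growth_factor(years * 12, 3.4)
moore = growth_factor(years * 12, 24.0)
print(f"AI compute over {years} years: ~{ai_trend:.0f}x")   # ~133x
print(f"Moore's Law over {years} years: ~{moore:.0f}x")     # ~2x
```

A roughly 133x increase every two years, against 2x for hardware cost-efficiency, is why the trend reads as brute-force spending rather than sustainable progress.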
Meanwhile, many of the advancements in the quality of the actual algorithms utilized seem to be ephemeral. Numerous studies have discovered that many of the actual models used by AI today aren't objectively better than those which already existed years ago.
Given that progress in the quality of models seems to be progressing relatively slowly and the brute force method of adding more computational power isn't sustainable, another AI winter is well within the cards.
To drag civilization out of a technological stagnation, AI doesn't need to reach human level, but it does need to be able to do much more than it can today. Enabling level 5 autonomous vehicles would probably be a feat on the same scale as the triumphs of the 20th century, but so far AI has continued to fail to deliver complete self-driving, and it isn't guaranteed that it manages to before hitting the winter.
Solving protein folding and related problems might be enough to create a new era of progress. Look at the issues we have with vaccines. If we could run computer simulations that tell us what kind of antibodies the body creates when given different antigens we would gain a lot in vaccine design. That means we could make a lot of progress on a universal flu vaccine and an AIDS vaccine.
Designing proteins to catalyze various chemical processes would also be a huge win.
If AlphaFold 2 is as accurate as its creators have claimed it undoubtedly represents an enormous technical leap. However, it remains to be seen how regulatory constraints and IP laws will erode its value. mRNA vaccines also represent a huge advancement, but were it not for COVID-19 lowering normal regulatory barriers (and providing a deluge of capital), they would still be a decade away, despite most of the fundamental technology already being ready.
With or without advanced protein folding simulations, so long as we remain in an environment where it costs billions to get a novel medical treatment to mass market, there's little doubt the full potential of this breakthrough will not be realized anytime soon. The question is: how much progress will remain possible working within these constraints? I still expect it will aid in numerous future medical breakthroughs, but I dunno about it unilaterally ushering in a new era of progress.
I don't think the regulatory barriers against mRNA vaccines were completely unreasonable. Look, for example, at a recent paper from 2019 that describes the problems with PEGylated anything (including mRNA):
The administration of PEGylated drugs can lead to the production of anti-PEG antibodies (anti-PEG immunoglobulin M (IgM)) and immune response (Figure 1). Due to these phenomena, the PEG-conjugation of drugs/NPs often only provides a biological advantage during the first dose of a treatment course. By the second dose, the PEGylated agents have been recognized by the mononuclear phagocyte system in the spleen and liver and are rapidly cleared from circulation.
We now have a way to deliver mRNA a few times per person, but outside of pandemic conditions, it's unclear why you would want to use up one of the few times you can effectively give mRNA to a person on a vaccine that likely could be made just as well via established methods, at the cost of maybe not being able to give that person an mRNA cancer treatment later because of too much PEG immunogenicity.
The side effects of the second dose of the mRNA vaccines we see currently are higher than the side effects we see from our well-tested vaccine formulations.
I think globalization is actually detrimental to progress. In a globalized world, technological innovations spread around so quickly that whoever fronts the initial investment in capital and effort will be the sucker left standing in the rain. China's entire success story is based on this.
You're looking at the fate of individuals (both people and companies). The overall system seems to be flourishing. China has made more growth in absolute terms, and very high numbers in relative terms, than any nation in history. You might notice how Amazon and Alibaba are now flooded with innovations coming back from China. Yes, these products are frequently of questionable quality, but even that is a form of experimentation. (How cheap can we make it and still get sales?)
Not necessarily. Globalization has had many negative second-order effects. For example: As much as air connectivity has helped us travel across the world, it has also increased the risk of infections travelling longer distances quicker than if we had a localism-based model. If we are epistemically humble enough, it is not difficult to see how many COVID-like events might have happened in the past in various isolated parts of the world, that we do not know of, but never ravaged the entire world.
Globalization has benefits, not saying that it is not useful, but describing progress as a function of globalization is what I take issue with. Progress is a multiscale phenomenon. You need a strong localism-based core for innovation and you also need decentralization to accentuate the process, and then you can use globalization to scale. And then there is also the part where you need a lot of wisdom to know what should be scaled and what shouldn't be.
I can't see any structured reasoning steps in your argument.
I stated that real global progress had been made. China has not "taken" business away from the USA in a sort of mercantile "zero sum game". They have taken a lot, the USA has gained some, and the global economy is bigger than ever.
At this point good faith has broken in this argument, we should stop.
The damage is irreversible; once the bureaucracy takes on a life of its own, the incentives are aligned to drive us down this spiral of madness. Without some drastic event like the creation of AGI or World War 3, the only way I see humanity coming out of this age of stagnation is starting over on a different planet. Sure, colonizing Mars is hard, but dismantling a bureaucratic nightmare without violence is impossible (I'd love to learn about any historic examples to the contrary). That's what I believe Elon Musk really means when he says we must back up our civilization.