I appreciate the work you are doing – you are asking why the idea of progress fell out of favor, and I would like to add my two cents, looking at it through one particular lens: human nature.
Let’s start with progress itself – progress towards what telos? I assume most people mean utopia, and folks here at LW tend to envision it in a similar way to Bostrom’s Letter from Utopia.
If that’s the case, maybe it fell out of favor because technology hasn’t delivered on its utopian promise in a reasonable timeframe, and progress by itself cannot ensure a good outcome for average folks – on the contrary, it turns them into cogs in the machine (hello Ted Kaczynski). Take the Industrial Revolution, which promised to eliminate poverty and make everyone happy – these promises have not come to fruition even 200 years later, and especially not for the folks living in the meantime.
What does this tell us about progress, then? It means that human-centered technological progress is not self-catalyzing at a societal level – there is no moral imperative there, because progress hasn’t proven itself. Sure, we get progress stemming from fear or nationalism (a great promoter of technology), but these tech trees lead to material advancements, not qualia advancements per se – nor advancements in human nature, AKA applied morality. As an example, should an average Joe look forward to longevity escape velocity and immortality if it meant remaining a cog in the machine for longer? The answer seems clear to me.
Sure, some individuals see it as a moral imperative (hello folks), but in the “Mind-at-large” sense, society as a whole does not view it as a desirable telos, and individuals go on to play their individualistic games.
Also, because I am sure it will draw critique – many people will point out that progress is happening, accelerating, etc. Sure, but progress towards what end? Will it lead to the median Joe having a great time, or rather to a minority, or even to machines? It simply doesn’t seem to be centered on humans; rather, it has bootstrapped itself using humans as an energy source (hello Matrix).
I am not saying it is a lost cause – who knows, maybe human-centered technological progress could be viewed positively by society if its rate of change, defined at the qualia level, were greater? That is not the case, however, at least here in the West in 2021 – but it might have been the reason back in the day.
What we have had for quite some time is basically a system that promises utopia at some point in the future as only one of the possible outcomes – and at the same time pretends that things are improving and going in the right direction, which stands contrary to how most people feel.
Many people know who Ted Kaczynski is and have a rough mental model of his point of view, but not so many have read his manifesto, Industrial Society and Its Future, which I recommend to every technophile as well as every technophobe. Personally, I could not find a good, self-contained rebuttal of his work that would erase most of the doubts he raises – and I have searched hard, as I would wish to be more optimistic. Basically, I haven’t found anything that would make me ‘structurally’ optimistic about technology and its future as a member of Homo sapiens. The aforementioned Letter from Utopia doesn’t cut it – there is no framework there, only a glimmer of hope in one of the possible futures, without addressing fundamental questions about human nature, treating it instead as a blob that can fit anywhere. Moreover, guess what – it is written by the same author who worries about AGI (among other things) making humanity go extinct. Kurzweil, for instance, does not ground his work in anything related to human nature, as he views technology as a value in itself.
Basically, I am looking forward to reading an anarcho-singularitarian manifesto that would make me look forward to that future (anarcho in the sense that its utopia addresses Kaczynski’s ‘power process’, is not based on humans as a manufactured product, but rather is aligned with the best of human nature).
Does Pearce’s Hedonistic Imperative cut it, though? It was published in the same year as Kaczynski’s manifesto, which I find a remarkable coincidence. To me it feels like an engineered way to make us feel good about being part of the system rather than being the center of it – basically an extreme form of today’s antidepressants, and it does not address Kaczynski’s need for the ‘power process’ either.
These are just my brief thoughts on the subject, and at the moment I am quite tired, but I wanted to get them out there before the post disappears into the void – thus, apologies for any inconsistencies and lack of flow.
PS. I am looking forward to reading that manifesto from you. 😉
Anything related to biotech is not included here – care to explain why?
There is nothing magical about it, so yes – AI will help, but humans are enough. I would expect a sustained, $100s-of-millions-to-$1bn, 10-year effort to bring us a lot closer to this hardware technological 'maturity' (we are not even trying hard now – a mole of carbon-12 has a mass of 12 g and contains 6*10^23 atoms, yet there is nothing stopping us from building small machines with mere thousands or millions of atoms). Obviously you could say that this sort of money would help with anything, but I believe it would be one of the best value-per-dollar projects.
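To make the scale gap concrete, here is a back-of-the-envelope sketch of the Avogadro arithmetic above (the "million-atom machine" is just an illustrative size, not a real design):

```python
# Back-of-the-envelope: how far bulk matter is from atom-count machines.
AVOGADRO = 6.022e23      # atoms per mole
MOLAR_MASS_C12 = 12.0    # grams per mole of carbon-12

atoms_per_gram = AVOGADRO / MOLAR_MASS_C12
print(f"{atoms_per_gram:.2e} atoms in 1 g of carbon-12")   # ~5.02e22

# Mass of a hypothetical machine built from a million carbon atoms:
machine_atoms = 1e6
machine_mass_g = machine_atoms / atoms_per_gram
print(f"{machine_mass_g:.2e} g per million-atom machine")  # ~1.99e-17 g

# A single gram of feedstock could, in principle, supply atoms
# for ~5e16 such machines.
print(f"{atoms_per_gram / machine_atoms:.2e} machines per gram")
```

The point is simply that even a "large" molecular machine of a million atoms is seventeen orders of magnitude lighter than a gram of feedstock.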
With regards to molecular manufacturing, one could imagine a multitude of ideas that are perfectly feasible from a classical-physics standpoint yet nowadays sit in the realm of sci-fi – examples include humanoid robots built bottom-up (with millions of tiny motors, moving more majestically than a human), mechanical computers the size of a sugar cube delivering 10^21 FLOPS and approaching Landauer's principle (~computronium – https://youtu.be/yVX9Ob4SjGA), or, as I mentioned previously, an autogenous replicator that could replicate any object, including itself (you know what that curve looks like...)
The second example ties in with AI/AGI – you do not have to worry about Moore's law in the narrow sense, even in terms of FLOPS/$, as the current semiconductor substrate is simply a local maximum. Regardless of whether you are in the 'scaling' camp or not, more FLOPS would surely help test this hypothesis as well as many others... No one knows the hurdles in front of us, but it would surely 'help' to build AGI – in quotation marks, obviously, given the risks.
Digital matter, in other words, molecular manufacturing.
With a Star Trekian autogenous home synthesizer, one could expect Moore's Law-like growth.
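The "curve" a self-replicating synthesizer implies is plain exponential doubling; a toy sketch (the cycle count and the single starting unit are arbitrary illustrations):

```python
# Toy doubling model: each replicator builds one copy of itself per cycle.
replicators = 1
for cycle in range(30):
    replicators *= 2

print(replicators)  # 2**30 = 1073741824 – over a billion copies in 30 cycles
```

Thirty replication cycles take you from one machine to a billion – the same doubling dynamic that drove Moore's Law, but applied to physical production capacity.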