For what it's worth, I still use an MP3 player. I've had people look at it a bit bemused. It's still the most useful way of listening to music I can imagine: super simple, doesn't stop working when my train is inside a tunnel, and isn't as desperately battery-hungry as my phone (which I generally need charged for more useful reasons anyway). That's an obvious case of a piece of tech that was really convenient and useful and only got phased out because it was a pain to support from the seller's side (it would have been easy if anyone had said "fuck it" and started selling DRM-free music, but of course they couldn't do that, what if you copied it).
Sure, I didn't want to discount that - and in theory the ideal for automation was always "sandpaper off the edges of the most dangerous, tiring, and mind-crushing work, and allow all humanity to reach its best potential doing whatever they find fun". In a perfect world we would have robots that harvest vegetables and excavate minerals, while every human would be a scientist, an artist, or at least a manager of said robots, planning at a higher strategic level rather than just toiling.
But in practice that seems hard to square with both our economic system and its incentives, and with the harsh material reality of which tasks are easier and harder for robots. At this rate we'll get AI scientists way before we get robotic fruit pickers.
I think if someone put the argument succinctly as "would you be OK, then, living in a world in which you suffer no disease but also matter nothing and are just kept pampered and ineffectual in a gilded cage", then views would rightfully be a lot more split. While playing cancer's advocate on this is the logically sound endpoint - yes, some more people dying of cancer is an acceptable lesser evil compared to humans simply losing meaning - it may help to step back from that particular motte and direct the assault from a different perspective. We have plenty of dystopian stories in which the dystopia is "things are really good except there's no freedom".
That said, there's also another angle to this: a lot of people don't get to do cancer research, or any other intellectually meaningful activity. They just do menial, often crushing work. To them cancer is just a danger, not an enemy they can fight on even ground. Anyone who already feels like they have no control or meaning can only gain from a world in which they still have no control or meaning, but at least have their material needs met.
(of course realistically that is also a ridiculously optimistic view of where AI-mediated disempowerment leads us...)
At this point the only meaningful difference between quasi-belief and belief seems to be "has an inner life and is able to reflect upon its own mind and beliefs". Other than that, if it quacks like a duck, swims like a duck, and knows exactly in which direction to fly to reach its winter migration destination like a duck...
I think it's just a matter of what's more technologically achievable. Building LLMs turned out to be a lot easier than understanding neuroscience to a level even remotely close to what's necessary to achieve 1 or 2. And both of those also require huge political capital due to needing (likely dangerous) human experimentation that would currently be considered unacceptable.
It would be slower, at least, being bound to human dynamics. But "same problems, but slower" isn't the same as a solution or an alternative. Admittedly it's better in the limited sense that it's less likely to end with straight-up extinction, but it's a rather grim world either way.
I feel like intelligence amplification is plenty destabilising. Consider how toxic the discourse around intelligence already is, and has been for a while:
And what would you do with your intelligence amplification method? Sell it? Then richer people, and richer countries, are the first to reap the benefits, widening inequality, which again has destabilising effects.
If you only consider the political side of it, a lot of this ends up in similar places as aligned ASI, with similar issues.
This is why, in a much more real and also famous case, President Truman was validly angered and told "that son of a bitch", Oppenheimer, to fuck off, after Oppenheimer decided to be a drama queen in front of Truman. Oppenheimer was trying to make nuclear weapons about Oppenheimer's remorse at having helped create nuclear weapons. This feels obviously icky to me; I would not be surprised if Truman felt very nearly the same.
I did sympathise with Truman in the way that scene is portrayed in Nolan's movie, more than most seem to have (or even more than the movie intended). But I am not sure that wasn't just Truman making the bombs about himself instead - he made the call after all, it was his burden to bear. Which again sort of shifts it away from being about, you know, the approximately 200k civilians they killed and stuff.
I think they are, because in practice they just didn't produce the same amount of economic growth. And for most people, the direct impact of these things is entertainment applications, or using them at work (where they sometimes feel like they make things worse). Meanwhile, I remember hearing a story of a woman (someone's grandma) who was in awe of the washing machine they had just bought because, well, it had saved her hours of daily gruelling work. And that's more impactful to one's life than almost anything computers or the internet have done.
<insert Austin Powers 'RUN, IT'S GODZILLA!' scene>