I think if someone put the argument succinctly as "would you be ok then living in a world in which you suffer no disease but also matter nothing and are just kept pampered and ineffectual in a gilded cage", then the views would rightfully be a lot more split. While playing cancer's advocate on this is the logically sound endpoint of the argument - yes, some more people dying of cancer is an acceptable lesser evil compared to humans simply losing meaning - it may help to step back from that particular motte and direct the assault from a different perspective. We have plenty of dystopian stories in which the dystopia is "things are really good, except there's no freedom".
That said, there's also another angle to this: a lot of people don't get to do cancer research, or any other intellectually meaningful activity. They just do menial, often crushing work. To them, cancer is just a danger, not an enemy they can fight on even ground. Anyone who already feels they have no control or meaning stands only to gain from a world in which they still have no control or meaning, but at least have their material needs met.
(of course realistically that is also a ridiculously optimistic view of where AI-mediated disempowerment leads us...)
At this point the only meaningful difference between quasi-belief and belief seems to be "has an inner life and is able to reflect upon its own mind and beliefs". Other than that, if it quacks like a duck, swims like a duck, and knows exactly in which direction to fly to reach its winter migration destination like a duck...
I think it's just a matter of what's more technologically achievable. Building LLMs turned out to be a lot easier than understanding neuroscience to a level even remotely close to what's necessary to achieve 1 or 2. And both of those also require huge political capital due to needing (likely dangerous) human experimentation that would currently be considered unacceptable.
It would be slower for sure, at least, being bound to human dynamics. But "same problems but slower" isn't the same as a solution/alternative. Admittedly better in the limited sense that it's less likely to end with straight up extinction, but it's a rather grim world either way.
I feel like intelligence amplification is plenty destabilising. Consider how toxic the discourse around intelligence already is, and has been for a while now.
And what would you do with your intelligence amplification method? Sell it? So now richer people, and richer countries, are the first to reap the benefits, widening inequality, which again has destabilising effects.
If you only consider the political side of it, a lot of this ends up in similar places as aligned ASI, with similar issues.
This is why, in a much more real and also famous case, President Truman was validly angered and told "that son of a bitch", Oppenheimer, to fuck off, after Oppenheimer decided to be a drama queen at Truman. Oppenheimer was trying to have nuclear weapons be about Oppenheimer's remorse at having helped create nuclear weapons. This feels obviously icky to me; I would not be surprised if Truman felt very nearly the same.
I did sympathise with Truman in the way that scene is portrayed in Nolan's movie more than most seem to have (or even, than the movie intended). But I am not sure that wasn't just Truman making the bombs about him instead - he made the call after all, it was his burden to bear. Which again sort of shifts it away from being about, you know, the approximately 200k civilians they killed and stuff.
I think they are, because in practice they just didn't produce the same amount of economic growth. And for most people, the direct impact of these things is entertainment applications, or tools they use at work (where sometimes they feel like they make things worse). Meanwhile I remember hearing a story of a woman (someone's grandma) who was in awe of the washing machine they had just bought because, well, it had saved her hours of daily gruelling work. And that's more impactful to one's life than almost anything computers or the internet have done.
One last thing: I misunderstood the point you were making when you were talking about black holes. The point you were making was '"What maximizes entropy" is a bad morality'; what I thought I was reading was 'dissipative adaptation does not work because it predicts that we will collapse into a black hole, and that Earth developed complex life because that complex life went on to do some nuclear fission after it was developed'.
My point was a bit more complex. Yes, there's absolutely the morality argument - obviously something that prescribes "thou shalt make black holes" is a dumb morality and I will not follow it. But there's also a predictive power argument. At a planetary scale, putting aside all the complexity issues you rightly bring up, it may be possible that life truly maximises entropy production given certain constraints. The Earth would have more entropy as a black hole, but the potential barrier to reaching that state is enormous, and so we're stuck in the local maximum of a planet teeming with life instead. But Beff and e-acc carry the argument all the way to the universal scale, and that's where it breaks down, because at the universal scale, black holes absolutely do dominate entropy production, and everything else is a rounding error, so life becomes inconsequential for the ledger.
To give a practical example: suppose future humanity becomes a Kardashev 3 civilization, using up all the energy output of the Milky Way (roughly $5 \times 10^{36}\,\mathrm{W}$) and dissipating it at cosmic background temperature via radiation. That makes for an entropy production of approximately $L/T_{\mathrm{CMB}} \approx 2 \times 10^{36}\,\mathrm{J/K}$ per second, or about $10^{59}\,k_B$ per second. Now suppose that this powerful civilization at some point predicts that two stellar black holes, each of 3 solar masses, will eventually merge near an inhabited system, and that this will cause trouble. With their immense power, this civilization finds a way to change the trajectory of one of those black holes, avoiding the merger and saving the system. Well, with that single change this civilization has averted the creation of roughly $2 \times 10^{78}\,k_B$ of entropy (the Bekenstein-Hawking entropy of the merged hole minus that of the two original ones), that is, hundreds of billions of years' worth of their current entropy production! The civilization that does this will forever be a net negative in entropy creation for its whole existence, regardless of how much it splurges on using energy otherwise.
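A quick back-of-envelope script for that arithmetic, assuming a Milky Way luminosity of about $5 \times 10^{36}\,\mathrm{W}$ and the standard Bekenstein-Hawking formula (both ballpark figures, and the merger is treated as lossless for simplicity):

```python
import math

# Physical constants (SI units)
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.055e-34   # reduced Planck constant, J s
C     = 2.998e8     # speed of light, m/s
K_B   = 1.381e-23   # Boltzmann constant, J/K
M_SUN = 1.989e30    # solar mass, kg
YEAR  = 3.156e7     # seconds per year

L_GALAXY = 5e36     # assumed total Milky Way luminosity, W
T_CMB    = 2.725    # cosmic microwave background temperature, K

def bh_entropy(mass_kg: float) -> float:
    """Bekenstein-Hawking entropy of a Schwarzschild black hole, in J/K."""
    return 4 * math.pi * G * mass_kg**2 * K_B / (HBAR * C)

# Kardashev-3 entropy production: all galactic power dissipated at T_CMB.
s_rate = L_GALAXY / T_CMB  # J/K per second

# Entropy created by merging two 3 M_sun holes into one 6 M_sun hole
# (ignoring the few percent of mass radiated away as gravitational waves).
ds_merger = bh_entropy(6 * M_SUN) - 2 * bh_entropy(3 * M_SUN)

print(f"civilization: {s_rate:.1e} J/K/s ({s_rate / K_B:.1e} k_B/s)")
print(f"merger:       {ds_merger:.1e} J/K ({ds_merger / K_B:.1e} k_B)")
print(f"ratio:        {ds_merger / s_rate / YEAR:.1e} years of output")
```

On these numbers the ratio comes out to about $4.5 \times 10^{11}$ years: averting a single stellar merger cancels hundreds of billions of years of the civilization's entire entropy output.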
So, entropy production itself does not predict life at universal scales. It can't. Life is just a tiny rounding error several digits down on that balance sheet. And even if on some local scales it may be possible that life is an avenue to maximizing entropy, overall those goals don't stay aligned all the way to life taking over the universe.
the only point I disagree on is that I think that a tree is in fact a more efficient dissipator than no tree
I think that genuinely depends on details like the precise colour of the soil and efficiency of the plant. We know photosynthesis is not very efficient at energy conversion (IIRC the top efficiency belongs to the sugar cane and is a meager 8%). Also, you could probably make a more dissipative surface by putting up a very dark, very efficient solar panel and then using it to power a heater. I suppose there's an argument that solar panels are created by life but that seems like a very tortuous way for thermodynamics to work.
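As a toy illustration of why it comes down to surface colour (all the albedo and temperature numbers here are rough assumptions for illustration, and this ignores the 4/3 factor for radiation entropy): the entropy produced per square metre of sunlit surface is roughly the absorbed power times $(1/T_{\mathrm{ambient}} - 1/T_{\mathrm{sun}})$, so it's dominated by how dark the surface is, not by whether it's alive.

```python
# Toy comparison: entropy production per m^2 for sunlit surfaces of
# different albedo, absorbing ~5800 K sunlight and re-radiating at ~300 K.
# Albedo values are illustrative guesses, not measurements.

I_SUN = 1000.0  # surface insolation, W/m^2 (clear-sky ballpark)
T_SUN = 5800.0  # effective solar emission temperature, K
T_AMB = 300.0   # ambient re-emission temperature, K

surfaces = {
    "bright soil":      0.40,  # assumed albedo
    "forest canopy":    0.15,  # assumed albedo
    "dark solar panel": 0.05,  # assumed albedo
}

for name, albedo in surfaces.items():
    absorbed = (1 - albedo) * I_SUN  # W/m^2 actually thermalised
    # Heat leaves a ~5800 K source and enters a ~300 K bath.
    ds_dt = absorbed * (1 / T_AMB - 1 / T_SUN)
    print(f"{name:16}: {ds_dt:.2f} W/K per m^2")
```

On these made-up numbers the dark panel out-dissipates the canopy by about 12%, and whether vegetation beats bare ground depends entirely on the soil's albedo.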
Yeah, it's not like the point of outreach is to mobilise citizen science on alignment (though that may happen). It's that in a democracy the public is an important force. You can pick the option of focusing on converting a few powerful people and hoping they can get shit done via non-political avenues, but that hasn't worked spectacularly so far either: such people are still subject to classic race-to-the-bottom dynamics, and then you get cases like Altman and Musk, who all in all may have ended up net negative for the AI safety cause.
Sure, I didn't want to discount that - and in theory the ideal for automation was always "sandpaper off the edges of the most dangerous, tiring, and mind-crushing work, and allow all of humanity to reach its best potential doing whatever it finds fun". In a perfect world we would have robots that harvest vegetables and excavate minerals while every human is a scientist, an artist, or at least a manager of said robots, planning at a higher strategic level rather than just toiling.
But in practice that seems hard to square with both our economic system and its incentives, and the harsh material reality of which tasks are easier and harder for robots. At this rate AI scientists will happen way before we get robotic fruit pickers.