I think that's better called simply a coordination or cooperation problem. "Alignment" has the unfortunate implication of one party wanting to forcefully change the other. With AI it's fine, because if you're creating a mind from scratch it would be the height of stupidity to create an enemy.
In the same way, "AI Alignment" excludes e.g. people who are inclined to believe superintelligences will know better than us what is good, and who don't want to hamstring them. You can think we're well rid of these people. But you're still excluding people and thereby reducing the amount of thinking that will be applied to the problem.
I'm not sure what someone who essentially thinks there is no problem can contribute to its solution. That said, I get the gist of the argument, and you do have a point IMO about stressing the two complementary aspects of a mind. Maybe Artificial Volition? "Intention" feels to me like it alliterates so much with "Intelligence" that it circles back from catchy to confusing.
Running water doesn't create the conditions to permanently disempower almost everyone, AGI does. What I'm talking about isn't a situation in which initially only the rich benefit but then the tech gets cheaper and trickles down. It's a permanent trap that destroys democracy and capitalism as we know them.
The wealthy are not powerful enough to "hoard" treatments, because Medicare et al represent the government, which has a monopoly on violence and incentives to not allow such hoarding.
That's naive. If a private actor has an obedient ASI, they also have a monopoly on violence now. And if labour has become superfluous, states lose all incentive to care about the opinion of the people.
Bigfoot is arguably less plausible, a priori.
Is it? A priori, Bigfoot is just some unknown small population of a large mammal living in a remote forest, possibly a living fossil of, e.g., a giant ground sloth species. That's more possible than alien craft. Not alien life, mind you, but craft require interstellar travel to be plausible, and we have reason to doubt that. Even unmanned Von Neumann probes would have a very hard time arriving at their destination still functioning (never mind braking...), and non-inertial engines presume a violation of known physics so deep that it's unbelievable we've missed all signs of it being possible until now.
I agree technological unemployment is a huge potential problem. Though, as always, the actual problem is aging.
I think this is a typical LW bias. No, I don't enjoy the idea of death. But I would rather live a long and reasonably happy life in a human-friendly world and then die when I am old than starve to death as one of the 7.9 billion casualties of the AGI Wars. The idea that there's some sliver of a chance that, in some future, immortality is on the table for you, personally, is a delusion. I think life extension is very possible, and true immortality is not. But as things are, either would only be on the table for, like, the CEOs of the big AI companies who got their biomarkers registered as part of the alignment protocol so that their product obeys them. Not for you. You're the peasant whose blood, if necessary, cyber-Elizabeth Bathory will use for her rejuvenation rituals.
I think the doom narrative is still worth bringing up because this is what these people are risking for all of us in the pursuit of essentially conquering the world and/or personal immortality. That's the level of insane supervillainy that this whole situation actually translates to. Just because they don't think they'll fail doesn't mean they're not likely to.
I'm also disappointed that the political left is dropping the ball so hard on opposing AI, turning to either contradictory "it's really stupid, just a stochastic parrot, and also threatens our jobs somehow" statements, or focusing on details of its behaviour. There's probably something deeper to say about capitalists openly making a bid to turn labour itself into capital.
Yup. These precise points were also the main argument of my other post on a post-AGI world, the benevolence of the butcher.
Also, thanks to the AI discourse, I've actually ended up learning more about the original Luddites, and, lo and behold, they weren't the fanatical, reactionary, anti-technology ignorant peasants that popular history mostly portrays them as. They were mostly workers who were angry about the way the machines were being used: not to make labour easier and safer, but to squeeze more profit out of less skilled workers making lower quality products, which in the end left almost everyone involved worse off except for the ones who owned the factories. That's, I think, something we can relate to even now, and I'd say it's even more important in the case of AGI. The risk that it simply ends up being owned by the few who create it, thus leading to a total concentration of the productive power of humanity, isn't immaterial; in fact, it looks like the default outcome.
In addition to all you've said, this line of reasoning ALSO puts an unreasonable degree of expectation on ASI's potential and makes it into a magical infinite wish-granting genie that would thus be worth any risk to have at our beck and call. And that just doesn't feel backed by reality to me. ASI would be smarter than us, but even assuming we can keep it aligned (big if), it would still be limited by the physical laws of reality. If some things are impossible, maybe they're just impossible. It would really suck ass if you risked the whole future lightcone, ended up living in a bunker in that nuclear-blasted world, and THEN the ASI, when you ask it for immortality, laughs in your face and goes "what, you believe in those fairy tales? Everything must die. Not even I can reverse entropy".
One bit of timeline arguing: I think the odds aren't zero that we're on a path that leads to AGI fairly quickly but then ends there and never pushes forward to ASI, not because ASI would be impossible in general, but because we couldn't reach it this specific way. Our current paradigm isn't to understand how intelligence works and build it intentionally; it's to show a big dumb optimizer human-solved tasks and tell it "see? We want you to do that". There are decent odds that this caps out at human potential, simply because such a system can imitate but not surpass its training data, and going further would require a completely different approach.