My argument goes something like this: 1) throughout history, big differences in power have been a recipe for abuse; 2) uploading allows bigger power differences than have ever existed before. This is a big concern to me, and I'm not sure we can "wing it"; it's better to have a plan now.
Yeah. As an example of one of the "nuances" that would have to be thrown out, maybe an effective anti-AI movement would even have to be anti-AI-alignment. As in, it would tell young people "don't work on AI alignment".
Yeah. I guess AIs would need to protect humans from certain messages based not only on the content of the message, but also on how it was generated (e.g. using AI or not) and for what purpose (e.g. for manipulation or not). And sometimes humans need to be protected even from ideas they themselves come up with (e.g. delusions, or totalitarian ideologies). In general, I think human life in a world with smarter-than-human AIs requires deliberate "habitat preservation", which in turn requires AIs to make some judgment calls about what's good or bad for humans. There's just no way around that.
Hmm. You're describing a future where most humans are powerless, but keep being provided for. It seems to me that the most likely way to get such a future is if AIs (or human+AI organizations, or whatever) genuinely care about humans. But then they would also protect humans from super-optimized manipulation, no?
Or if that genuine care doesn't exist, and UBI is provided as "scraps" so to speak, then the fate of humans is sealed anyway. As soon as the entities in power find something more interesting to do with the resources, they'll cut welfare and that's it. After all, the energy upkeep of a human could be used for a ton of computation instead.
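To put a rough number on that last claim (these figures are my own order-of-magnitude assumptions, not anything established): a human's metabolic budget is about 100 W, and if an AI accelerator gets very roughly 10^11 FLOP/s per watt, then one human's energy upkeep corresponds to on the order of 10^13 FLOP/s.

```python
# Back-of-envelope sketch: how much compute one human's energy upkeep could buy.
# All numbers are rough order-of-magnitude assumptions, not measured figures.

kcal_per_day = 2000                         # typical human metabolic intake
watts_human = kcal_per_day * 4184 / 86400   # ~97 W of continuous power

flops_per_watt = 1e11                       # assumed accelerator efficiency (order of magnitude)

compute_per_human_budget = watts_human * flops_per_watt
print(f"~{compute_per_human_budget:.1e} FLOP/s per human energy budget")  # ~1e13 FLOP/s
```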
I agree with others about the fawning. A more "hardball" question I'd ask is: why not the left? It feels like at some point a choice was made to build a libertarian-leaning techie community, which backfired: rationalists and adjacent folks ended up playing a big role in building and investing in AI. Maybe a more left-leaning movement focused on protest and the like would make more sense now?
Yeah, I also think humans-as-housecats is a pretty good scenario. But not sure it's an optimum (even a local one). Consider this: the question "how can humans have true agency and other things they value, when ASIs are around" is itself a question that intelligence can answer. As one extreme point, consider an ASI that precommits itself to not interfering in the affairs of humans, except for stopping other ASIs. That's clearly not optimal on other dimensions; okay, turn the dial until you get a pivotal act that's optimal on the mix of dimensions that we care about.
A world of competing human emulations is a world I would actually want to live in
I think there's a huge danger of people running private servers full of emulations and doing anything they want to them, undetectably. Desire for power over others is a very real thing, in some people at least. Maybe the government could prevent it by oversight; but in a modern democracy, a big factor of stability is that people could rise up and feasibly overthrow the government. Emulations on private servers wouldn't have that power, so I don't expect governments to stably defend their rights. That protection would wash out over time, drifting toward the interests of those who can actually influence government. In short, this leads to emulation-world being very bad, and I don't want it.
The same arguments would apply to our world if governments got armies of autonomous drones, for example. Whenever I imagine possible worlds, the distribution of power is the first thing I think about. It makes the problem more real: it's very hard to imagine a nice future world that works.
Why do you think all poor people will end up in these "wildlife preserves", and not somewhere else under the power of someone less altruistic? A future of large power differences is... a future of large power differences.
Ah I see, I misunderstood your point. You're right.
This seems to rely on the assumption that "there's nowhere to go but up": that we're pretty certain of doom, so wildcards in the future can only make us less certain of it. Unfortunately that doesn't work. Wildcards in the future can also increase s-risks, and there's no limit to how bad things can get, potentially much worse than extinction.