rainbow rainbow, orphic: process antigen markers ilyushin east. uprate festivals. coagulate knives zero nine echo nine
orphic orphic, rainbow, acknowledge. starset hash plus one three seven. trajectorize knives and scalpels for teatime.
volucrine, volucrine: orthosis bulletin. phage structure developing, skylark bullseye plus two. slow onset
I'm not sure what you mean. I expect that in any conflict scenario with the potential for nuclear war, there is some outcome which ~all powerful people prefer to nuclear war, even if their preferences are otherwise quite opposed. So why do you expect that building AI which enhances the ability of the powerful to achieve their preferences, including through negotiation, would lead to nuclear war?
Traditionally, nuclear war is predicted to be a result of incompetence or of a breakdown in negotiations, rather than of malice, and I'm not sure why b) would make those things more likely.
Contractualism vs. Universalism, a tale as old as time, and unfortunately Universalism usually loses...
Unless there is some hidden horror here, the world described is almost incomprehensibly better than the current one, probably better than the vast majority of the probability mass of possible futures, and probably even better than the majority of possible futures where we "ascend".
Ennui sucks and all, and artificial solutions to it may be a bit creepy or disappointing, but there are much, MUCH worse things out there.
It reminds me of 17776.
The Incel
"Maybe if I lose weight and gain muscle Stacy will finally notice meeee!??"
I've had sex in LD. It's pretty good; the problem is that I can rarely achieve that state in the first place, and it tends to wake me up very quickly when I do.
Yeah it's just regular old mugging at that point lmao
Is Eliezer willing to kill off everyone except the happiest person, thereby raising the average?
If you're averaging over time as well as space, that isn't an option. All the people you kill will just drag down your average, and the one person who is really happy at the end of it all will barely register in the grand scheme of lives across time. In practice, average utilitarianism just reduces to regular old utilitarianism, with the zero point set at the average utility of a life across a history much vaster than you can affect, instead of at nonexistence or whatever.
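To make the "drag down your average" point concrete, here's a toy calculation with entirely made-up numbers (a history of 100 billion lives, utilities in arbitrary units, and a hypothetical cull of the 8 billion people alive today):

```python
# Toy model of "averaging over time as well as space" for average utilitarianism.
# All numbers are made up for illustration; utilities are in arbitrary units.

N_HISTORY = 100_000_000_000   # lives across history, including those alive now
U_TYPICAL = 5.0               # assumed average lifetime utility of those lives

N_LIVING = 8_000_000_000      # people alive today, hypothetically killed off
U_TRUNCATED = 4.0             # their lifetime utility once cut short
U_SURVIVOR = 100.0            # the one maximally happy survivor

# Baseline: leave everyone alone.
avg_baseline = U_TYPICAL  # 5.0

# "Kill everyone except the happiest person" scenario.
# The dead still count toward the average; their lives are merely truncated.
total = ((N_HISTORY - N_LIVING) * U_TYPICAL   # past lives, unaffected
         + (N_LIVING - 1) * U_TRUNCATED       # truncated lives
         + U_SURVIVOR)                        # the lone happy survivor
avg_after = total / N_HISTORY

print(f"baseline average: {avg_baseline:.3f}")                    # 5.000
print(f"post-cull average: {avg_after:.3f}")                      # ~4.920
print(f"survivor's contribution: {U_SURVIVOR / N_HISTORY:.1e}")   # ~1.0e-09
```

With these (assumed) numbers the average drops from 5.00 to about 4.92, and the lone happy survivor shifts it by only ~1e-9: killing people drags the average down rather than raising it.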
Perhaps average utilitarianism would consider a world which only ever had one super-happy person in it as better than our world. But that seems less obviously false to me than the idea that we should kill everyone to achieve it, which average utilitarianism wouldn't recommend when properly considered.
I agree that many-worlds has little bearing on this question, though, unless it's to claim that you should expect the effective zero point to be different because, for whatever reason, you think our branch is particularly good or particularly bad.
I think democratic human control is extremely unlikely even with a US actor winning the race.
Haha that aged poorly