Do you guys actually know why Greenpeace is against some of these market solutions? I didn't either, but five minutes of googling turned up some arguments. Here's one: the world contains poor countries and rich countries. Poor countries are not always ruled in their people's best interest; and rich countries and corporations don't always act in poor countries' best interest, either. So what would happen if a rich country paid the dictator of a poor country a billion dollars to irrevocably mess up that country's environment?
Maybe with more than five minutes of searching you could find other arguments too. Anyway, fast-tracking your readers straight to "Greenpeace is your enemy" doesn't feel right.
Because I'm not indifferent between "I get 1 utility and Bob gets 0" and "I get 0 utility and Bob gets 1". I'm bargaining with Bob to choose a specific point on that segment, maybe (0.5, 0.5).
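To make that concrete with one standard (though not uniquely correct) rule: the Nash bargaining solution picks the point maximizing the product of each player's gain over the disagreement point. A minimal sketch, where the segment from (1,0) to (0,1) and the disagreement point (0,0) are my own illustrative assumptions:

```python
# Toy illustration: Nash bargaining over the segment from (1,0) to (0,1).
# The Nash solution maximizes the product of each player's gain over the
# disagreement point, here assumed to be (0,0).

def nash_product(t, disagreement=(0.0, 0.0)):
    """Product of gains at the point (1-t, t) on the segment."""
    my_utility, bobs_utility = 1.0 - t, t
    return (my_utility - disagreement[0]) * (bobs_utility - disagreement[1])

# Grid search over the segment.
best_t = max((i / 1000 for i in range(1001)), key=nash_product)
print((1.0 - best_t, best_t))  # -> (0.5, 0.5)
```

On a symmetric segment with a symmetric disagreement point, the Nash product is maximized exactly at the even split.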
If there are multiple tangent lines at the point - i.e. the frontier has a corner there - then there's a range of possible weight ratios, and the AIs will agree to merge at any of them, because they all lead to the same point anyway. So there's no need for coinflips in this case.
I was thinking that the need for coinflips arises if the frontier has a flat region. For example, say the frontier contains a straight line segment AB, and the AIs have negotiated a merge that leads to some particular point C on that segment. (This happens, for example, if the AIs are in an ultimatum game situation, where each side's gain is the other's loss, so the frontier is a straight line and they're bargaining to pick one particular point on it.) Then they can merge into the following AI: "upon waking up, first make a weighted coin toss with weights according to where C lies on AB; then either become an EUM agent that forever optimizes toward A, or an EUM agent that forever optimizes toward B, according to the coin result." According to both AIs, the expected utility of that is exactly the same as aiming toward point C, so they'll agree to the merge.
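Here's a quick sanity check of that coin-toss claim - a toy Monte Carlo sketch with made-up utility values for A and B, not anything from the actual setup. Since expected utility is linear in probability, the lottery matches C exactly in expectation for both agents:

```python
import random

# Sanity check: a lottery between "optimize toward A forever" and
# "optimize toward B forever" with weight p matches C = (1-p)*A + p*B
# in expectation, for both agents. Utility values are made up.

A = (1.0, 0.0)   # (agent1's utility, agent2's utility) if the merged AI aims at A
B = (0.0, 1.0)   # same if it aims at B
p = 0.3          # probability the coin says "aim at B"
C = tuple((1 - p) * a + p * b for a, b in zip(A, B))  # the negotiated point

random.seed(0)
samples = [B if random.random() < p else A for _ in range(100_000)]
empirical = tuple(sum(s[i] for s in samples) / len(samples) for i in range(2))

print("target C:    ", C)          # (0.7, 0.3)
print("coin-toss EU:", empirical)  # close to (0.7, 0.3) for both agents
```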
But yeah, it's even more subtle than that: for example, if the segment AB ends not with corners but with smooth arcs, then there's no way to make an EUM agent that optimizes toward A or B in particular. Then there needs to be a limit procedure, I guess.
Well, we can see that corporations owned by everyone (public utilities) mostly don't behave as sociopathically. They have other pathologies, but not so much this one. So, because everything is a matter of degree, I would assume that making ownership more distributed does make corporations less nasty. And the obvious explanation is that if you're high above almost all people, that in itself makes you behave sociopathically. Power disparity between people is a kind of evil-in-itself, or a cause of so many evils that it might as well be. So I stand by the view that "no billionaires" is reasonable.
I think the main reason people move to cities isn't that cities are charming. It's that cities are objectively better places economically; you could say it's a kind of Keynesian beauty contest. If you're a business, you want to be located somewhere with many job seekers and potential clients nearby. So people who want jobs and services will also want to live nearby, and so on.
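To illustrate that feedback loop, here's a toy simulation - entirely my own made-up numbers, nothing rigorous: if a location's pull grows faster than its population share (jobs and clients compound), even a 51/49 split snowballs into one big city.

```python
import random

# Toy model of the agglomeration feedback loop: each period everyone
# re-chooses a location, and the probability of picking the city rises
# superlinearly with its current population share. Parameters are made up.

random.seed(0)
N, ROUNDS = 10_000, 20
share = 0.51  # the city starts with only a slight edge

for _ in range(ROUNDS):
    # Superlinear pull: small advantages get amplified each round.
    pull = share**2 / (share**2 + (1 - share) ** 2)
    share = sum(random.random() < pull for _ in range(N)) / N

print(f"city share after {ROUNDS} rounds: {share:.0%}")  # snowballs toward ~100%
```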
If teleportation were invented tomorrow and people could blink around at low cost, I expect people would quickly spread out to live on their own patches of land, and cities would become mostly places to visit and maybe work in. City charm wouldn't keep anyone living in a cramped apartment with neighbors above and below once the economic reason for it disappeared. When I first imagined this scenario, I thought to myself that maybe we're lucky teleportation hasn't been invented yet :-)
I think it's more than just "irresponsible". A big part of the argument is that billionaires have both the capacity and the incentive to do bad things. Like Facebook trying to get everyone hooked on its feed, or Walmart having tons of employees on food stamps. (Or the East India Company fighting a war to keep selling opium to the Chinese...) Big power motivated by greed doesn't always lead to good things. Sometimes it does, but it's not obvious that it's the best way.
Your argument seems to be that it'll be hard for the CEO to align the AI to themselves and screw the rest of the company. Sure, maybe. But will it be equally hard for the company as a whole to align the AI to its interests and screw the rest of the world? That's less outlandish, isn't it? But equally catastrophic. After all, companies have been known to do very bad things when they had impunity; and if you say "but the spec is published to the world", recall that companies have been known to lie when it benefited them, too.
"all of this has become possible thanks to the dominance of offensive technology"
I used to think that way too, but now I think it's the other way around. The strong can always hurt the weak somehow; that's just a fact of life, and the offense/defense ratio doesn't change it much either way. But for the strong to hurt the weak with impunity, the weak must be unable to hurt the strong right back. In other words, impunity depends mainly on the strong's defensive tech, or on the weak's lack of offensive tech.
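As a back-of-the-envelope illustration (all numbers made up by me): the strong actor oppresses exactly when expected retaliation is cheaper than the gain, and what enters that calculation is the weak's offense net of the strong's defense.

```python
# Toy deterrence calculation (illustrative numbers only): hurting the weak
# pays for the strong whenever expected retaliation costs less than the
# gain. What matters is the weak's offense *getting through* the defense.

def oppression_pays(gain, weak_offense, strong_defense, retaliation_damage):
    """Does hurting the weak have positive expected value for the strong?"""
    p_retaliation_lands = max(0.0, weak_offense - strong_defense)
    return gain > p_retaliation_lands * retaliation_damage

# Peasants with guns vs. a knight: retaliation usually gets through.
print(oppression_pays(gain=10, weak_offense=0.8, strong_defense=0.1,
                      retaliation_damage=100))  # False: deterred

# Same knight, but the weak have no effective offense:
print(oppression_pays(gain=10, weak_offense=0.1, strong_defense=0.1,
                      retaliation_damage=100))  # True: impunity
```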
I've been making a similar point in a distributed fashion for many months, spread across many comments: the key factor in world niceness is decentralization - but not decentralization of economic power or productivity (a slave in a diamond mine can be very productive). Rather, the key is decentralization of military potential, and more specifically offensive potential, not defensive: the spreading-out of threat. Democracy is downstream of "it's easy to teach a peasant to shoot a gun and kill a knight". You reach that point by the end of the post, and I think it's the most important thing in it.
On a related note, I'd like to push back against your idealism about cryptography, and defensive tech in general. In my eyes, defensive tech simply isn't as strong a force for good. Cryptography in particular suffers, up to a point, from the five-dollar-wrench problem: it doesn't actually let you keep a secret from someone stronger, because they'll just beat the secret out of you. You could say citizens can hide the fact that they have a secret at all - but to be immune to things like statistical analysis of behavior, a citizen needs a truly ridiculous amount of opsec. Can you install Tor on your machine and say confidently that nobody knows you did it? Have you thought through what that truly takes? And at the upper levels, we run into the ultimate problem with defensive tech: it can defend bad things, too. Imagine mass torture of simulated beings happening under homomorphic encryption, and the vision of a cryptographically happy world won't look so happy.
To break through such things, what we really need is dominance of offensive tech that makes it militarily useful to co-opt little guys instead of oppressing them. But then we run smack into the problem that future AI weapons aren't such a tech; they're the opposite, empowering the bigger guy beyond all reason. And there the wheels fall off: I have no idea how to continue thinking optimistically past that point. The future just looks like tyranny no matter what.
I agree with the point about acknowledging enmity in general; I'm not shy about doing so myself. But the post didn't convince me that Greenpeace in particular is my enemy. For that I'd need more detailed arguments.