You're right. I'm updating towards illusionism being orthogonal to anthropics in terms of betting behavior, though the upshot is still obscure to me.
I agree realism is underrated. Or at least the term is underrated. It's the best way to frame ideas about sentientism (in the sense of hedonic utilitarianism). On the other hand, you seem to be talking more about the rhetorical benefits of normative realism about laws.
Most people seem to think phenomenal valence is subjective, but that conflates two senses of the word "subjective": it can mean either arbitrary or bound to a first-person subject. All observations (including valenced states like suffering) are subjective in the second sense, but not in the first. We have good evidence for believing that our qualities of experience are correlated across a great many sentient beings, rather than being some kind of private, uncorrelated noise.
"Moral realism" is a good way to describe this situation that we're in as observers of such correlated valences, even if God-decreed rules of conduct isn't what we mean by that term.
it is easy to cooperate on the shared goal of not dying
Were you here for Petrov Day? /snark
But I'm confused about what you mean when you say a Pivotal Act is unnecessary. Although both you and a megacorp want to survive, you each have very different priors about what is risky. Even if the megacorp believes your alignment program will work as advertised, that only compels them to cooperate with you if they (1) are genuinely concerned about risk in the first place, (2) believe alignment is so hard that they will need your solution, and (3) actually possess the institutional coordination abilities needed.
And this is just for one org.
World B has a chance of 1, maybe minus epsilon, of solving alignment, since the solution is already there.
That is totum pro parte. It's not World B which has a solution at hand. It's you who have a solution at hand, and a world that you have to convince to come to a screeching halt. Meanwhile people are raising millions of dollars to build AGI and don't believe it's a risk in the first place. The solution you have in hand has no significance for them. In fact, you are a threat to them, since there's very little chance that your utopian vision will match up with theirs.
You say World B has chance 1 minus epsilon. I would say epsilon is a better ballpark, unless the whole world is already at your mercy for some reason.
Okay, let's operationalize this.
Button A: The state of alignment technology is unchanged, but all the world's governments develop a strong commitment to coordinate on AGI. Solving the alignment problem becomes the number one focus of human civilization, and everyone just groks how important it is and sets aside their differences to work together.
Button B: The minds and norms of humans are unchanged, but you are given a program by an alien that, if combined with an AGI, will align that AGI in a way you would ultimately find satisfying.
World B may sound like LW's dream come true, but the question looms: "Now what?" Wait for Magma Corp to build their superintelligent profit maximizer, and then kindly ask them to let you walk in and take control of it?
I would rather live in World A. If I were a billionaire or dictator, I would consider B more seriously. Perhaps the question lurking in the background is this: do you want an unrealistic Long Reflection or a tiny chance to commit a Pivotal Act? I don't believe there's a third option, but I hope I'm wrong.
I agree that the political problem of globally coordinating non-abuse is more ominous than the technical problem of alignment. If I had the option to solve one magically, I would definitely choose the political problem.
What it looks like right now is that we're scrambling to build alignment tech that corporations will simply ignore, because it will conflict with optimizing for (short-term) profits. In a word: Moloch.
It's happened before, though. Despite being one of those two friends, I've already been forced to change my habits and regard video calls as a valid form of communication.
none of this requires a separate privileged existence different from the environment around us; it is our access consciousness that makes us special, not our hard consciousness.
That sounds like a plausible theory. But if we reject that there is a separate first-person perspective, doesn't that entail that we should be Halfers in the Sleeping Beauty problem? Not saying it's wrong. But it does seem to me like illusionism/eliminativism has anthropic consequences.
I can see how a computer could simulate any anthropic reasoner's thought process. But if you ran the Sleeping Beauty problem as a computer simulation (i.e., implemented it under the illusionist paradigm), aren't the Halfers going to win on average?
Imagine the problem run as a genetic algorithm with one parameter: the credence. Wouldn't the whole population converge to 0.5?
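For what it's worth, here is a minimal Python sketch of that genetic-algorithm framing (the function names, parameters, and Brier scoring rule are my own assumptions, not part of the original problem statement): each agent carries a single credence-in-heads parameter and is selected on its average penalty across simulated runs. The sketch forces one modelling choice into the open, namely whether the penalty is tallied once per coin flip or once per awakening; the population drifts toward roughly 0.5 in the first case and roughly 1/3 in the second, which is essentially the Halfer/Thirder crux restated as a fitness function.

```python
import random


def brier_penalty(credence_heads, coin_is_heads):
    """Squared-error penalty for reporting credence_heads against the actual outcome."""
    truth = 1.0 if coin_is_heads else 0.0
    return (credence_heads - truth) ** 2


def trial_penalty(credence, per_awakening):
    """One Sleeping Beauty run: heads -> 1 awakening, tails -> 2 awakenings."""
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    if per_awakening:
        # Beauty is penalized every time she is woken and asked for her credence.
        return awakenings * brier_penalty(credence, heads)
    # Beauty is penalized once per coin flip, no matter how often she is woken.
    return brier_penalty(credence, heads)


def evolve(per_awakening, pop_size=100, generations=150, trials=100):
    """Toy genetic algorithm over a single parameter: the credence in heads."""
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Lower accumulated penalty = higher fitness; keep the better half.
        ranked = sorted(
            population,
            key=lambda c: sum(trial_penalty(c, per_awakening) for _ in range(trials)),
        )
        survivors = ranked[: pop_size // 2]
        # Refill the population with slightly mutated copies of the survivors.
        offspring = [min(1.0, max(0.0, c + random.gauss(0, 0.02))) for c in survivors]
        population = survivors + offspring
    return sum(population) / len(population)


if __name__ == "__main__":
    print("scored once per coin flip :", round(evolve(per_awakening=False), 3))  # ~0.5
    print("scored once per awakening :", round(evolve(per_awakening=True), 3))   # ~1/3
```

So whether "winning on average" vindicates the Halfers depends on which of those two scoring conventions you think the simulation should use.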
Exactly. I wish the economic alignment issue was brought up more often.