I'd be interested in hearing more about Ryan's proposal to do better generalization science (or, if you don't have much more to say in the podcast format, I'd be interested in seeing the draft about it).
He's talking about the stuff around the simulation hypothesis and acausal trade in the preceding section.
Yeah "esoteric" perhaps isn't the best word. What I had in mind is that they're relatively more esoteric than "AI could kill us all" and yet it's pretty hard to get people to take even that seriously! "Low-propensity-to-persuade-people" maybe?
but “extremely unlikely” seems like an overstatement[...]
Yes this is fair.
Interesting. I'd wondered why you wrote so many pieces advising people to be cautious about more esoteric problems arising from AI, when that advice seemed extremely unlikely to be implemented in the real world; but the chance that simulators are listening to your arguments does provide an alternative avenue for influence.
I don't think we have much reason to think of all non-human-values-having entities as natural allies of one another, relative to human-valuers, who plausibly have a plurality of local control. I think you might be lumping non-human-valuers together in 'far mode' since we know little about them, but a priori they are likely about as different from each other as from human-valuers. There may also be a sizable moral-realist or welfare-valuing contingent even if they don't value humans per se. There may also be a general acausal norm against extortion, since extortion moves away from the Pareto frontier of everyone's values.
OK, so then so would whatever other entity is counterfactually getting more eventual control. But now we're going in circles.
A very slightly perturbed superintelligence would probably conceive of itself as almost the same being it was before.
OK, but if all you can do is slightly perturb it, then it has no reason to threaten you either.
Do you not think that causing their existence is something they are likely to want?
But who is "they"? There are a bunch of possible different future SIs (or, if there aren't, they have no reason to extort us). Making one more likely makes another less likely.
Re: 4, I dunno about simple, but it seems to me that you most robustly reduce the amount of bad stuff that will happen to you in the future by just not acting on any particular threats you can envision. As I mentioned, there's a bit of a "once you pay the danegeld" effect where giving in to the most extortion-happy agents incentivizes other agents to start counter-extorting you. Intuitively the most extortion-happy agents seem likely to be a minority in the greater cosmos for acausal-normalcy reasons, so I think this effect dominates. And I note that you seem to have conceded that, even in the mainline scenario you envision, there will be some complicated bargaining process among multiple possible future SIs, which seems to increase the odds of acausal-normalcy-type arguments applying. But again, I think an even more important argument is that we have little insight into possible extorters, what they would want us to do, and how much of our measure is in various simulations, etc. (Bonus argument: maybe most of our measure is in ~human-aligned simulations, since people who like humans can increase their utility and bargain by running us, whereas extorters would rather use the resources for something else.) Anyway, I feel like we have gone over our main cruxes by now. Eliezer's argument is probably an "acausal normalcy" type one; he's written about acausal coalitions against utility-function-inverters in planecrash.
[Epistemic status: vague speculation] I like the idea of consciousness being allocated based on projecting influence backwards into the past. Humans are currently the most conscious beings because we are the densest nexus of influence over the future, but this will eventually change. This seems to have the requisite self-consistency properties: e.g., if you are aiming to have a large influence over the future, it's probably important to track other beings with the same property.
ETA: another, perhaps better, possibility is that consciousness is about being a bottleneck of information between the past and the future.