Engineer at CoinList.co. Donor to LW 2.0.
If it's true that simulating that universe is the simplest way to predict our human, then some non-trivial fraction of our prediction might be controlled by a simulation in another universe. If these beings want us to act in certain ways, they have an incentive to alter their simulation to change our predictions.
I find this confusing. I'm not saying it's wrong, necessarily, but it at least feels to me like there's a step of the argument that's being skipped.
To me, it seems like there's a basic dichotomy between predicting and controlling. And this is claiming that somehow an agent somewhere is doing both. (Or actually, controlling by predicting!) But how, exactly?
Is it that:
My guess is that it's the second thing (in part from having read, and very partially understood, Paul's posts on this a while ago). But then I would expect some discussion of the "treacherous turn" aspect of it -- of the fact that they have to predict accurately for a while (so that we rate them highly in our ensemble of programs), and only then can they start outputting predictions that manipulate us.
Is that not the case? Have I misunderstood something?
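In case it helps pin down what I'm imagining, here's a toy sketch in Python (entirely my own illustration, not anything from the post or from Paul's writing; the prior weights, the all-zeros sequence, and the step-50 deviation are all made-up numbers):

```python
# Toy illustration (mine, not from the post): a Bayesian mixture over
# two "programs", one honest and one treacherous.

def true_bit(t):
    """The actual sequence we're trying to predict (all zeros, for simplicity)."""
    return 0

def honest(t):
    return 0  # always matches the true sequence

def treacherous(t):
    return 0 if t < 50 else 1  # accurate until step 50, then deviates

# Hypothetical prior: suppose the treacherous program happens to be the
# shorter one, so it starts with the larger weight (the worrying case).
weights = {honest: 0.3, treacherous: 0.7}

for t in range(52):
    # The mixture's prediction is made *before* the true bit is revealed.
    p_one = sum(w for prog, w in weights.items() if prog(t) == 1)
    if t >= 49:
        print(f"t={t}: mixture P(next bit = 1) = {p_one:.2f}")
    # Bayesian update: a program that guessed wrong loses all its weight.
    actual = true_bit(t)
    for prog in list(weights):
        if prog(t) != actual:
            weights[prog] = 0.0
    total = sum(weights.values())
    if total > 0:
        weights = {p: w / total for p, w in weights.items()}
```

The point of the sketch is timing: at step 50 the mixture's prediction is dominated by the treacherous program, and that program only loses its weight *after* the manipulative prediction has already been emitted (and possibly acted on). If that's the mechanism, it's the treacherous-turn structure I'd expect to see discussed.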
(Btw, I found the stuff about python^10 and exec() pretty clear. I liked those examples. Thank you! It was just from this point on in the post that I wasn't quite sure what to make of it.)
FYI I think your second link is broken.
I'm not sure I understand your question at the end. Are you asking if people do indeed want to become part of the elite?
If so, it doesn't seem too mysterious to me. People want to be liked; they want to be respected. There are drives both for prestige and for dominance. People want the highest-quality mates and allies they can get. Doesn't everything we know about human nature suggest that, all else equal, if there are social hierarchies, people will prefer to be at the top of them?
It also rules out Cascadian cities like Portland and Seattle - only marginally better housing costs, worse fires, and worse social decay (eg violence in Portland).
I'm not sure this is so conclusive regarding Seattle. A few notes --
Here's an analogy -- is Hamlet conscious?
Well, Hamlet doesn't really exist in our universe, so my plan for now is to not consider him a consciousness worth caring about. But once you start dealing with harder cases, whether something exists in our universe becomes a trickier question.
Hmm, it's not so much about how similar it is to me as about whether it's on the same plane of existence.
I mean, I guess that's a certain kind of similarity. But I'm willing to impute moral worth to very alien kinds of consciousness, as long as it actually "makes sense" to call them a consciousness. The making-sense part is the key issue, though, and it's a bit underspecified.
I don't have a good answer for this. I'm kinda still at the vague intuition stage rather than the clear theory stage.
It seems to me like when it comes to morality, the thing that matters is the reference frame of the consciousness, and not our reference frame (I think some similar argument applies to longtermism).
In the sense I mean "reference frame," I only care about my own. (Or maybe I care about other frames in proportion to how much they align with mine.) Note that this is not the same thing as egoism.
I don't have a well-developed theory here. But a few related ideas: