How would one "prove" that? I'm struggling to understand what having such a "proof" would even mean, let alone how it could be "proved".
Maybe the word "simulation" is confusing us here? I don't think "the world is a simulation" requires that a "base reality" exists and has the same physics, or anything like that. And if it did mean that, then I think we should all be talking about some other hypothesis, not that version of the "simulation hypothesis". If we were to run a huge Conway's Life, and it had intelligent creatures in it, and they said the simulation hypothesis (in this weaker sense) was true of their universe, they should be straightforwardly correct. Being computed inside some other universe is what causes all the ontological and epistemic weirdness, whether or not the base reality shares our physics.
I mean, one could try to argue that certain kinds of computed-in-a-base-universe universes are more likely than others. But most of the ordinary sources of evidence would be, at best, of seriously doubtful reliability if we're systematically deluded about the nature of the world in which we live. I'm curious what you have in mind here.
I doubt this reading was intended, but the whole article makes a great joke if the very last thing in it, item 8, "We are not going to create a paperclip maximizer.", is the punch line. Like: I have just presented all this utterly absurd nonsense, and now I have given you an alternative: abandon a belief you hold even though you almost certainly hate holding it, hating the feeling that it's true but believing it anyway. It's like a gong.
It doesn't actually land, on me. I hold out hope for a way of addressing the simulation hypothesis that actually makes sense, rather than these silly assumptions that of course the simulation doesn't distort ANYTHING really IMPORTANT, that everything we'd guess about metaphysics (and metaphilosophy, for that matter) is free of fatal flaws, all at one go, EVEN THOUGH WE'RE IN A SIMULATION! But still, if "#8 is just true" was even a back-of-mind idea - the thing that made you put that reason, specifically, dead last in the list, perhaps - well played.
EDITED TO ADD: But I see the serious case for it too, that if I personally held these ideas that I currently consider utterly batty, this would in fact be the best thing to do, and if doing that is cheap enough, I think we're all so freaking confused about this that maybe doing something so silly but cheap and probably-good-in-expectation is laudable. Not trying to de-platform you (I hate that we need a word for that concept). Just found an entertaining alternate reading.
What You Don't Understand Can Hurt You (many variations possible, with varied effects)
Improve Your (Metaphorical) Handwriting
Make Other People's Goodharting Work For You (tongue in cheek, probably too biting)
Make Surviving ASI Look As Hard As It Is
Unsolved Illegible Problems + Solved Legible Problems = Doom
"As you can see", "serious adults", "really" and "fine" all (mildly) demonstrate a sense of incredulity. "Look what these people actually believe! Just in case you thought it was a strawman." It's admittedly subtle, not stated, and I can see how someone could miss it. (I'll feel pretty stupid if I'm wrong.)
Don't book publication deals typically involve an exclusive license, not copyright assignment? (The effect is roughly the same, for the purposes of the question being answered here, of course.)
I would guess this is about "getting the right things into context", not "being able to usefully process what is in context". (AI already seems pretty good at the latter, for a broad though not universal set of tasks.)
It doesn't sound quite right to me that there are different possible cultures for any given number of echoes. I think it's more like... you memoize (compute on first use, and also store for future use) what will happen, or is likely to happen, in a conversation as a result of saying a certain kind of thing. The thrust, or flavor, or whatever metaphor you prefer, of saying that kind of thing starts to be associated with however the following conversation (or lack thereof) seems likely to go.
People don't actually have to be aware of all the levels at any one time. Precomputed results can themselves derive from other precomputed results. Someone doesn't have to be able to unpack one of these chains at all in order to use it. Sometimes some of the earlier judgments were actually made by someone else, and the speaker is just parroting opinions he or she can't justify! (This is not necessarily a criticism. Each human does not figure everything out from scratch for himself or herself. In the good cases, I think the chain probably could be unpacked through analysis and research, if needed.)
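For what it's worth, and purely as an analogy (the function names here are mine, not anything from the post), the chained memoization I have in mind is the ordinary programming pattern:

```python
from functools import lru_cache

# Toy analogy only: each "judgment" is computed once, cached, and then reused
# by later judgments without ever being re-derived by whoever relies on it.

@lru_cache(maxsize=None)
def likely_outcome(kind_of_remark: str) -> str:
    """First-level judgment: how the following conversation tends to go."""
    # Stand-in for slow, experience-based evaluation done on first use.
    return "goes badly" if kind_of_remark == "blunt request" else "goes fine"

@lru_cache(maxsize=None)
def flavor(kind_of_remark: str) -> str:
    """Second-level judgment, built on top of the cached first-level one."""
    return "risky" if likely_outcome(kind_of_remark) == "goes badly" else "safe"

print(flavor("blunt request"))  # "risky" -- neither judgment is recomputed later
```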
But there remains something like the "parity" (evenness or oddness) of the process, in addition to its depth. (More depth is of course good, as long as it's accurate. It often isn't, and more levels means more chances to diverge. I would guess this is the main reason some people, often including me, prefer lower depth: they don't expect the higher-depth inferences to be accurate enough to guide action. As they often aren't.) The parity manifests as whether we look for fault in the speaker or in the listener. This too is of course not a single value, but it's an apportionment, not a number of echoes. There is (I think) a tendency to look more towards the speaker or the listener(s) for fault (or credit, if communication goes well!), and THAT is what I think ask and guess culture are about. It ends up being something like the sum of a series in which the terms have a factor of (-1)^n.
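To spell that out (this is just my own toy formalization, not anything from the original post): let $w_n \ge 0$ be the weight given to the $n$-th level of inference about how the conversation will go. Then the net apportionment might look like

$$\text{fault assigned to the speaker} \;\propto\; \sum_{n=0}^{N} (-1)^n\, w_n ,$$

where the even-$n$ terms push responsibility toward the speaker and the odd-$n$ terms push it toward the listener (or vice versa, depending on where you start counting). The parity of where the series effectively stops, not the raw depth $N$, is what flips the answer.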
(I agree with the overall thrust of this post: "you could just not respond!" references an action that, while available, is not so free of cost that one can simply assume leaving a comment will consume none of the author's time and attention unless he or she wants it to.)
I am saying you do not literally have to be a cog in the machine. You have other options. The other options may sometimes be very unappealing; I don't mean to sugarcoat them.
Organizations have choices of how they relate to line employees. They can try to explain why things are done a certain way, or not. They can punish line employees for "violating policy" irrespective of why they acted that way or the consequences for the org, or not.
Organizations can change these choices (at the margin), and organizations can rise and fall because of these choices. This is, of course, very slow, and from an individual's perspective maybe rarely relevant, but it is real.
I am not saying it's reasonable for line employees to be making detailed evaluations of the total impact of particular policies. I'm saying that sometimes, line employees can see a policy-caused disaster brewing right in front of their faces. And they can prevent it by violating policy. And they should! It's good to do that! Don't throw the squirrels in the shredder!
I don't think my view is affluent, specifically, but it does come from a place where one has at least some slack, and works better in that case. As do most other things, IMO.
(I think what you say is probably an important part of how we end up with the dynamics we do at the line employee level. That wasn't what I was trying to talk about, and I don't think it changes my conclusions, but maybe I'm wrong; do you think it does?)
I have trouble understanding what's going on in people's heads when they choose to follow policy when that's visibly going to lead to horrific consequences that no one wants. Who would punish them for failing to comply with the policy in such cases? Or do people think of "violating policy" as somehow bad in itself, irrespective of consequences?
Of course, those are only a small minority of relevant cases. Often distrust of individual discretion is explicitly on the mind of those setting policies. So, rather than just publishing a policy, they may choose to give someone the job of enforcing it, and evaluate that person by policy compliance levels (whether or not complying made sense in any particular case); or they may try to make the policy self-enforcing (e.g., put things behind a locked door and tightly control who has the key).
And usually the consequences look nowhere close to horrific. "Inconvenient" is probably the right word, most of the time. Although very policy-driven organizations seem to have a way of building miserable experiences out of parts any one of which might be best described as inconvenient.
I'm not sure I agree about who's good and who's bad in the gate attendant scenario. Surely getting angry at the gate attendant is unlikely to accomplish anything, but as long as organizations need humans to carry out their policies (for now; maybe not much longer, unfortunately), the humans don't have to do that. They can violate the policy and hope they don't get fired; or they can just quit. The passenger can tell them that. If they're unable to listen to and consider the argument that they don't have to participate in enforcing the policy, I guess at that point they're pretty much NPCs.
I don't know whether we know anything about how to teach this, other than just telling (and showing, if the opportunity arises), or about what works and what doesn't, but I think this is also what I'd consider the most important goal for education to pursue. I definitely intend to tell my kids, as strongly as possible, "You always can and should ignore the rules to do the right thing, no matter what situation you're in, no matter what anyone tells you. You have to know what the right thing is, and that can be very hard, and good rules will help you figure out what the right thing is much better than you could on your own; but ultimately, it's up to you. There is nothing that can force you to do something you know is wrong."
Thank you. I see where you're coming from, now, and I'll think about it.
One thought is that, in addition to whatever point(s) of departure is/are selected for a piece of fiction or a video game, such works basically always have some degree of "plot logic" / "game logic": common elements that are "unrealistic" (i.e., unlikely in a world that runs on physics) but are convergently helpful for making an entertaining or aesthetically valuable story or game. I don't know what "simulation logic" would be. We can't look at the existing simulations, unless we want to call fiction and games low-fidelity simulations.
I also never feel that great about generalizing from imaginary examples. We don't actually have any ancestor simulations. There's been speculation about them, but I don't think it's clear we (or any intelligent creatures) would actually do such a thing. We do have games, but overall I don't think they have much resemblance to our reality, although of course they usually have some resemblance to certain parts of our reality, and "realism" is sometimes (but far from always!) considered desirable in games.
It does, of course, make sense to say that thought (or much of it; I'm not sure it's 100%) is itself a highly abstracted simulation of some aspects of reality, and that relating to reality in a straightforward way is necessary for those simulations (usually we'd call them "models") to be useful. So, if we assume that a simulation serves a functional purpose for a rational utility maximizer, then yes, verisimilitude is to be expected.