Wiki Contributions


The Validity of Self-Locating Probabilities

To make it slightly more concrete, we could say: one copy is put in a red room, and the other in a green room; but at first the lights are off, so both rooms are pitch black. I wake up in the darkness and ask myself: when I turn on the light, will I see red or green?

There’s something odd about this question. “Standard LessWrong Reductionism” must regard it as meaningless, because otherwise it would be a question about the scenario that remains unanswered even after all physical facts about it are known, thus refuting reductionism. But from the perspective of the test subject, it certainly seems like a real question.

Can we bite this bullet? I think so. The key is the word “I” - when the question is asked, the asker doesn’t know which physical entity “I” refers to, so it’s unsurprising that the question seems open even though all the physical facts are known. By analogy, if you were given detailed physical data of the two moons of Mars, and then you were asked “Which one is Phobos and which one is Deimos?”, you might not know the answer, but not because there’s some mysterious extra-physical fact about them.

So far so good, but now we face an even tougher bullet: If we accept quantum many-worlds and/or modal realism (as many LWers do), then we must accept that all probability questions are of this same kind, because there are versions of me elsewhere in the multiverse that experience all possible outcomes.

Unless we want to throw out the notion of probabilities altogether, we’ll need some way of understanding self-location problems besides dismissing them as meaningless. But I think the key is in recognizing that probability is ultimately in the map, not the territory, however real it may seem to us - i.e. it is a tool for a rational agent to achieve its goals, and nothing more.

The Schelling Game (a.k.a. the Coordination Game)

Thinking more about this:

  1. Is it possible to get good at this game?
  2. Does this game teach any useful skills?

I don't think there's a generalized skill of being good at this game as such, but you can get good at it when playing with a particular group, as you become more familiar with their thought processes. Playing the game might not develop any individual's skills, but it can help the group as a whole develop camaraderie by encouraging people to make mental models of each other.

The Schelling Game (a.k.a. the Coordination Game)

I've played a variant like this before, except that only one clue would be active at once - if the clue is neither defeated nor contacted within some amount of time, then we'd move on to another clue, but the first clue can be re-asked later. The amount of state seemed manageable for roadtrips/hikes/etc.

Inconvenient consequences of the logic behind the second law of thermodynamics

Maybe we are anthropically more likely to find ourselves in places with low Kolmogorov-complexity descriptions. ("All possible bitstrings, in order" is not a good law of physics merely because it contains us somewhere.)

Another way of thinking about this, which amounts to the same thing: Holding the laws of physics constant, the Solomonoff prior will assign much more probability to a universe that evolves from a minimal-entropy initial state, than to one that starts off in thermal equilibrium. In other words:

  • Description 1: The laws of physics + The Big Bang
  • Description 2: The laws of physics + some arbitrary configuration of particles

Description 1 is much shorter than Description 2, because the Big Bang is much simpler to describe than some arbitrary configuration of particles. Even after the heat-death of the universe, it's still simpler to describe it as "the Big Bang, 10^zillion years on" rather than by exhaustive enumeration of all the particles.
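As a rough illustration of this asymmetry, we can use compressed size as a stand-in for description length. (This is only a crude proxy: true Kolmogorov complexity is uncomputable, and a general-purpose compressor only gives an upper bound. The specific numbers here are illustrative, not part of the original argument.)

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Compressed size in bytes, as a crude upper bound on Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

n = 10_000

# Analogue of "the Big Bang": a minimal, highly regular initial state.
low_entropy_state = b"\x00" * n

# Analogue of "some arbitrary configuration of particles":
# incompressible-looking noise that must be listed bit by bit.
random.seed(0)
high_entropy_state = bytes(random.getrandbits(8) for _ in range(n))

print(description_length(low_entropy_state))   # tiny: a short program regenerates it
print(description_length(high_entropy_state))  # close to n: no shortcut exists
```

The regular state compresses to a few dozen bytes, while the noise stays near its full length, mirroring why "laws + Big Bang" is a much shorter description than "laws + arbitrary configuration".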

This dispenses with the "paradox" of Boltzmann Brains, and with Roger Penrose's puzzle about why the Big Bang was in such a low-entropy (and therefore, by the usual counting argument, overwhelmingly improbable) state.

Inconvenient consequences of the logic behind the second law of thermodynamics

Here's the way I understand it: A low-entropy state takes fewer bits to describe, and a high-entropy state takes more. Therefore, a high-entropy state can contain a description of a low-entropy state, but not vice versa. This means that memories of the state of the universe can only point in the direction of decreasing entropy, i.e. into the past.
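A toy model makes the "fewer bits" claim concrete: take n two-state particles and define the macrostate by how many are excited. Picking out one microstate compatible with the macrostate costs log2(number of microstates) bits. (The two-state-particle setup is my own illustrative assumption, not something from the comment above.)

```python
import math

def bits_to_describe(n: int, k: int) -> float:
    """Bits needed to single out one microstate among the C(n, k)
    microstates compatible with the macrostate 'k of n particles excited'."""
    return math.log2(math.comb(n, k))

n = 100
print(bits_to_describe(n, 0))    # minimal entropy: one microstate, 0 bits
print(bits_to_describe(n, 50))   # maximal entropy: ~96 bits
```

The low-entropy macrostate pins down its microstate almost for free, while the high-entropy one needs nearly 100 bits, so a record of the low-entropy state fits inside the high-entropy state but not the reverse.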

Texas Freeze Retrospective: meetup notes

I think the "normal items that helped" category is especially important, because it's costly in terms of money, time, and space to get prepper gear specifically for the whole long tail of possible disasters. If resources are limited, then it's best to focus on buying things that are both useful in everyday life and also are the general kind-of-thing that's useful in disaster scenarios, even if you can't specifically anticipate how.

Texas Freeze Retrospective: meetup notes

Good to know that this was useful. I hadn't thought of this meetup as "journalism," but I suppose it was in a sense.

Interest survey: Forming an MIT Mystery Hunt team (Jan. 15-18, 2021)

You may be right... I just need a rough headcount now, so if you want to take time to ponder the team name feel free to leave it blank now and then submit the form again later with your suggestion. (Edited the form to say so.)

The Solomonoff Prior is Malign

I'm trying to wrap my head around this. Would the following be an accurate restatement of the argument?

  1. Start with the Dr. Evil thought experiment, which shows that it's possible to be coerced into doing something by an agent who has no physical access to you, other than communication.
  2. We can extend this to the case where the agents are in two separate universes, if we suppose that (a) the communication can be replaced with an acausal negotiation, with each agent deducing the existence and motives of the other; and that (b) the Earthlings (the ones coercing Dr. Evil) care about what goes on in Dr. Evil's universe.
    • Argument for (a): With sufficient computing power, one can run simulations of another universe to figure out what agents live within that universe.
    • Argument for (b): For example, the Earthlings might want Dr. Evil to create embodied replicas of them in his own universe, thus increasing the measure of their own consciousness. This is not different in kind from you wanting to increase the probability of your own survival - in both cases, the goal is to increase the measure of worlds in which you live.
  3. To promote their goal, when the Earthlings run their simulation of Dr. Evil, they will intervene in the simulation to punish/reward the simulated Dr. Evil depending on whether he does what they (the Earthlings) want.
  4. For his own part, Dr. Evil, if he is using the Solomonoff prior to predict what happens next in his universe, must give some probability to the hypothesis that him being in such a simulation is in fact what explains all of his experiences up till that point (rather than him being a ground-level being). And if that hypothesis is true, then Dr. Evil will expect to be rewarded/punished based on whether he carries out the wishes of the Earthlings. So, he will modify his actions accordingly.
  5. The probability of the simulation hypothesis may be non-negligible, because the Solomonoff prior considers only the complexity of the hypothesis and not that of the computation unfolding from it. In fact, the hypothesis "There is a universe with laws A+B+C, which produces Earthlings who run a simulation with laws X+Y+Z which produces Dr. Evil, but then intervene in the simulation as described in #3" may actually be simpler (and thus more probable) than "There is a universe with laws X+Y+Z which produces Dr. Evil, and those laws hold forever".
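Point #5 can be sketched numerically. Under the Solomonoff prior, a hypothesis described by a k-bit program gets weight proportional to 2^-k, regardless of how much computation that program implies. The description lengths below are made-up numbers purely for illustration; the only point is that a slightly shorter description wins by a large factor.

```python
def prior_weight(program_bits: int) -> float:
    """Solomonoff-style prior weight: 2^(-description length in bits).
    Note: the weight ignores how costly the described computation is."""
    return 2.0 ** (-program_bits)

# Hypothetical description lengths (illustrative assumptions, not real estimates):
direct_bits = 1000       # "laws X+Y+Z hold forever"
simulation_bits = 990    # "laws A+B+C -> Earthlings -> simulate X+Y+Z, then intervene"

odds = prior_weight(simulation_bits) / prior_weight(direct_bits)
print(odds)  # 2**10 = 1024: the simulation hypothesis gets ~1000x more weight
```

So if the intervened-upon simulation really were even ten bits simpler to specify than the direct laws, the prior would favor it by three orders of magnitude, which is why the simulation hypothesis need not be negligible.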