2) "Agent simulates predictor"

This basically says that the predictor is a rock: it doesn't depend on the agent's decision.

True, it doesn't "depend" on the agent's decision in the specific sense of "dependency" defined by currently-formulated UDT. The question (as with any proposed DT) is whether that's in fact the right sense of "dependency" (between action and utility) to use for making decisions. Maybe it is, but the fact that UDT itself says so is insufficient reason to agree.

The arguments behind UDT's choice of dependence could prove strong enough to resolve this case as well. The fact that we are arguing about UDT's answer in no way disqualifies UDT's arguments.

My current position on ASP is that the reasoning used to motivate it exhibits "explicit dependence bias". I'll need to (and probably will) write another top-level post on this topic to improve on what I've already written here and on the decision-theory list.

Another attempt to explain UDT

by cousin_it · 2 min read · 14th Nov 2010 · 54 comments


(Attention conservation notice: this post contains no new results, and will be obvious and redundant to many.)

Not everyone on LW understands Wei Dai's updateless decision theory. I didn't understand it completely until two days ago. Now that I've had the final flash of realization, I'll try to explain the idea to the community and hope my attempt fares better than previous ones.

It's probably best to avoid talking about "decision theory" at the start, because the term is hopelessly muddled. A better way to approach the idea is by examining what we mean by "truth" and "probability" in the first place. For example, is it meaningful for Sleeping Beauty to ask whether it's Monday or Tuesday? Phrased like this, the question sounds stupid. Of course there's a fact of the matter as to what day of the week it is! Likewise, in all problems involving simulations, there seems to be a fact of the matter whether you're the "real you" or the simulation, which leads us to talk about probabilities and "indexical uncertainty" as to which one is you.

At the core, Wei Dai's idea is to boldly proclaim that, counterintuitively, you can act as if there were no fact of the matter whether it's Monday or Tuesday when you wake up. Until you learn which it is, you think it's both. You're all your copies at once.

More formally, you have an initial distribution of "weights" on possible universes (in the currently most general case it's the Solomonoff prior) that you never update at all. In each individual universe you have a utility function over what happens. When you're faced with a decision, you find all copies of you in the entire "multiverse" that are faced with the same decision (the "information set"), and choose the decision that logically implies the maximum sum of resulting utilities, weighted by universe-weight. If you possess some useful information about the universe you're in, it's magically taken into account by the choice of "information set": logically, your decision cannot affect the universes that contain copies of you with different states of knowledge, so they only add a constant term to the utility maximization.
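A minimal sketch of that decision rule in Python; the universe labels, weights, and payouts below are illustrative assumptions, not anything from the post:

```python
def udt_decide(weighted_universes, actions, utility):
    """Pick the action that logically implies the maximum sum of
    utilities over the information set, weighted by universe-weight.
    Universes where your copies have different knowledge are left out:
    your choice here cannot affect them, so they'd only add a constant."""
    def total(action):
        # One logical choice binds all your copies at once; you can't
        # pick differently in different universes.
        return sum(w * utility(u, action) for u, w in weighted_universes)
    return max(actions, key=total)

# Toy information set: two universes with weights 2/3 and 1/3.
universes = [("simple", 2 / 3), ("complex", 1 / 3)]
payout = {("simple", "A"): 1, ("simple", "B"): 0,
          ("complex", "A"): 0, ("complex", "B"): 1}
best = udt_decide(universes, ["A", "B"], lambda u, a: payout[(u, a)])
# best == "A": weight 2/3 times payout 1 beats weight 1/3 times payout 1
```

Note that the function never updates the weights; all the "taking evidence into account" happens through which universes appear in `weighted_universes` at all.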

Note that the theory, as described above, has no notion of "truth" and "probability" divorced from decision-making. That's how I arrived at understanding it: in The Strong Occam's Razor I asked whether it makes sense to "believe" one physical theory over another that makes the same predictions. For example, is hurting a human in a sealed box morally equivalent to not hurting him? After all, the laws of physics could make a localized exception to save the human from harm. UDT gives a very definite answer: there's no fact of the matter as to which physical theory is "correct", but you refrain from pushing the button anyway, because it hurts the human more in universes with simpler physical laws, which have more weight according to our "initial" distribution. This is an attractive solution to the problem of the "implied invisible" - possibly even more attractive than Eliezer's own answer.
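To make the weighting concrete, here is a toy version of that argument; the bit-counts (100 vs. 120) and the Solomonoff-style weights 2^-K are illustrative assumptions:

```python
# Two candidate physical theories making identical predictions so far.
w_simple = 2.0 ** -100     # physics as usual: pushing the button hurts the human
w_exception = 2.0 ** -120  # same laws plus a localized exception saving him

def expected_harm(action):
    # Harm only occurs in the simple-physics universes when you push;
    # in the exception-universes the human is saved either way.
    return w_simple * (1 if action == "push" else 0)

# No fact of the matter about which theory is "correct" is needed:
# pushing scores worse simply because the simpler universes carry
# w_simple / w_exception = 2**20 times more weight.
```

The point is that the argument goes through without ever assigning a truth value to either theory; only the weights do any work.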

As you probably realize by now, UDT is a very sharp tool that can give simple-minded answers to all our decision-theory puzzles so far - even if they involve copying, amnesia, simulations, predictions and other tricks that throw off our approximate intuitions of "truth" and "probability". Wei Dai gave a detailed example in The Absent-Minded Driver, and the method carries over almost mechanically to other problems. For example, Counterfactual Mugging: by assumption, your decision logically affects both heads-universe and tails-universe, which (also by assumption) have equal weight, so by agreeing to pay you win more cookies overall. Note that updating on the knowledge that you are in tails-universe (because Omega showed up) doesn't affect anything, because the theory is "updateless".
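The Counterfactual Mugging arithmetic can be spelled out in a few lines; the dollar amounts ($10000 reward, $100 payment) and the equal weights are the usual illustrative assumptions:

```python
# Two universes of equal weight, fixed before any coin flip is observed.
weights = {"heads": 0.5, "tails": 0.5}

# Assumed payoffs: in heads-universe Omega pays you $10000 if it predicts
# you'd pay; in tails-universe paying actually costs you $100.
payoff = {
    ("heads", "pay"): 10000,
    ("heads", "refuse"): 0,
    ("tails", "pay"): -100,
    ("tails", "refuse"): 0,
}

def expected(action):
    # One logical decision covers both universes: you cannot pay in
    # heads-universe while refusing in tails-universe.
    return sum(weights[u] * payoff[(u, action)] for u in weights)

best = max(["pay", "refuse"], key=expected)
# best == "pay": 0.5 * 10000 + 0.5 * (-100) = 4950 beats 0
```

Updating on Omega's appearance would zero out the heads-universe term; the updateless sum keeps it, which is exactly why paying wins.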

At this point some may be tempted to switch to True Believer mode. Please don't. Just like Bayesianism, utilitarianism, MWI or the Tegmark multiverse, UDT is an idea that's irresistibly delicious to a certain type of person who puts a high value on clarity. And they all play so well together that it can't be an accident! But what does it even mean to consider a theory "true" when it says that our primitive notion of "truth" isn't "true"? :-) Me, I just consider the idea very fruitful; I've been contributing new math to it and plan to do so in the future.