Subjective anticipation as a decision process

Comments by cousin_it, Wei Dai, orthonormal, Vladimir_Nesov, Stuart_Armstrong, Manfred, and endoself.


I like your post, but someone who doesn't already know UDT would find it unparseable. The problematic part is this:

> subjective anticipation is encoded in the utility function

This assumes the reader already has the non-standard understanding of the term "utility function" that is required by UDT. Namely, up to now I have naively imagined that I have a utility function over my *personal futures*: I enjoy having a high chance of chocolate. But UDT's "utility functions" are very different beasts! UDT demands that you express a preference over futures of the *entire universe*. It requires you to make up your mind whether an additional copy of you getting a slightly inferior piece of chocolate makes you better off, worse off, or indifferent. In effect, this mixes probability and utility into one huge mathematical object that may be much harder to construct or infer by inspecting yourself, though it's certainly easier to reason about afterward.

It's ironic that Wei originally invented UDT in an attempt to figure out anthropic reasoning, but now we can see that it mostly pushes the problem under the rug. No offense to him, UDT is a very solid advance, but we need even more novel insights to figure this stuff out.

> It's ironic that Wei originally invented UDT in an attempt to figure out anthropic reasoning, but now we can see that it mostly pushes the problem under the rug.

I did point out

> In general, Updateless Decision Theory converts anthropic reasoning problems into ethical problems.

in Torture vs. Dust vs. the Presumptuous Philosopher: Anthropic Reasoning in UDT. So I'm not trying to hide it or anything. :)

Thanks for that link, I'd read it but then forgot about it for some reason. It definitely belongs here :-)

The problem is, I just can't accept the idea that anthropic/ethical problems are "indeterminate" and that we should rely on our "intuitions". It seems that some anthropic problems must have well-defined solutions. If you take part in many successive experiments with observer-splitting, you will observe some limiting frequencies, but we have no clue what these frequencies will be. Do they depend on the thickness of the silicon wafers used to build our computers? Or do they depend on the algorithms running within our brains?

> If you take part in many successive experiments with observer-splitting, you will observe some limiting frequencies, but we have no clue what these frequencies will be. Do they depend on the thickness of the silicon wafers used to build our computers? Or do they depend on the algorithms running within our brains?

I'm confused by this comment. From the outside, I can tell you which observers will have remembered which limiting frequencies in their histories. What's an example of an experiment that would tell you something besides this?

> The problem is, I just can't accept the idea that anthropic/ethical problems are "indeterminate" and that we should rely on our "intuitions".

I'm not saying that we should *rely* on our intuitions in this matter (which we can't anyway since they are contradictory, or at least mine are, as I pointed out in the above linked post), but they do perhaps offer some starting points for thinking about the problem.

> It seems that some anthropic problems must have well-defined solutions.

Well, that's *your* intuition. :)

I'm not sure what you mean by "limiting frequencies" here, but here's a thought experiment attempting to show that they may not be very relevant.

Suppose we take a subject and label him A0. In each round, we take the copy that was previously labeled A0, make two more copies of him, which we'll label B0 and C0, then ask the three of them to guess A or Not-A (without letting them observe their labels). After that, we tell them their labels, and make one more copy of each of the three, to be labeled A1, B1, C1. If A0 guessed correctly, we give A1 a nice experience, for example eating a cake. Do the same for B1 if B0 guessed correctly, and similarly for C1/C0. (The reason for making extra copies after the guess is so A0 doesn't get sick of eating cakes.) At the end of the round, delete B0, C0, A1, B1, C1.

Again, I'm not sure how you define "observed limiting frequency", but it seems that F(A)=1. However, you're better off always betting on Not-A, since that results in twice as many of your copies eating cake.
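A quick simulation (my own sketch of the setup above; the round count is made up) confirms both halves of this: the surviving A0 lineage observes F(A)=1 under either policy, while the Not-A policy produces twice the cake.

```python
def run(policy, rounds=1000):
    """policy: the guess every copy makes ('A' or 'Not-A'), since the
    three copies are identical and cannot see their labels."""
    cakes = 0
    a_lineage_sees_A = 0
    for _ in range(rounds):
        for label in ('A', 'B', 'C'):
            guessed_right = (policy == 'A') if label == 'A' else (policy == 'Not-A')
            if guessed_right:
                cakes += 1          # the fresh copy (A1/B1/C1) eats cake
        a_lineage_sees_A += 1       # the surviving A0 is always told 'A'
    return cakes, a_lineage_sees_A / rounds

assert run('A') == (1000, 1.0)      # one cake per round, F(A) = 1
assert run('Not-A') == (2000, 1.0)  # two cakes per round, F(A) still 1
```

Whatever "observed limiting frequency" means here, the policy that maximizes cake is the one that bets against the label the surviving lineage always sees.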

Killing observers to change the limiting frequency is cheating :-)

Consider a simpler example: I flip a coin and show you the result. Then if it came up heads, I kill you, otherwise I repeat the experiment. I think you'd be correct (in some yet-undiscovered sense) to have a "subjective anticipation" of 50% heads and 50% tails before the flip, but counting the surviving branches after many trials gives a "limiting frequency" of mostly tails. This doesn't look to me like a fair interpretation of "limiting frequency", because it arbitrarily throws away all observations made by those of you who ended up dying. If I could resurrect them and include them in the poll, I'd get a different result.
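The survivorship effect can be sketched in code (my own toy parameters: each subject faces at most five flips, and I average each subject's observed tail-frequency). Survivors unanimously remember nothing but tails, while polling everyone, dead included, gives a much lower number.

```python
import random

def poll(trials=100_000, flips=5, rng=random.Random(0)):
    """Return (avg tail-frequency among survivors, avg among everyone)."""
    survivor_fracs, everyone_fracs = [], []
    for _ in range(trials):
        observed = []
        for _ in range(flips):
            observed.append(rng.choice('HT'))
            if observed[-1] == 'H':
                break                      # this subject is killed
        frac_tails = observed.count('T') / len(observed)
        everyone_fracs.append(frac_tails)
        if observed[-1] == 'T':            # survived all five flips
            survivor_fracs.append(frac_tails)
    return (sum(survivor_fracs) / len(survivor_fracs),
            sum(everyone_fracs) / len(everyone_fracs))

survivors, everyone = poll()
assert survivors == 1.0       # survivors saw only tails, by construction
assert everyone < 0.5         # resurrecting the dead drags the average down
```

With these parameters the all-observers average lands around 0.31, nowhere near the survivors' unanimous 1.0.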

If you resurrect them and include them in the poll, and assuming you average their observed frequencies, don't you still get F(A)=1? As I said, I'm not sure what you mean by "limiting frequency", but I don't see how you can get something other than F(A)=1 in my example.

Averaging observed frequencies sounds weird...

If I count all observer-moments that get told their labels, the fraction of observer-moments that get told A is 1/3. If each observer has a fixed amount of "reality fluid" that gets split in equal parts when copies are made and disappears when copies die, the total fraction of "reality fluid" in observer-moments that get told A is also 1/3, but by a different calculation. Maybe both these methods of counting are wrong, but the answer 1 is still far from certain.

> If you take part in many successive experiments with observer-splitting, you will observe some limiting frequencies

No. Different instances of the agent that processed different sequences of observations will observe different limiting frequencies. Some will even observe secret messages from Lords of the Matrix encoded in Morse.

And once again, I cannot understand whether your comment expresses some deep idea or you're missing some obvious point. Flipping a coin is also observer-splitting (in a sense), and we do observe something pretty damn similar to "limiting frequencies" instead of the utter chaos that you seem to predict. Yeah it's true that different instances of you will see different sequences of heads and tails. But it's *not the whole truth*.

An observer-splitting setup that did *not* give rise to subjective limiting frequencies would be something quite new under the sun. I have no idea if it's even possible, yet you sound so certain...

Observing certain frequencies is *probable* and *anticipated*. Normative probability is given by prior, and normative anticipation is calculated from the prior, possibly along the lines of what I described here. The resulting probabilities *explain* the observations we're *likely* to see in the trivial sense of being the theories which hold these observations *probable*. It is an example of a circular justification, an answer to a heuristic "why" question that short-circuits in this particular case where you ask not about a phenomenon with non-trivial definition, but the whole of your experience.

I think you'll agree that there are other versions of yourself that observed all the chaos allowed by the laws of physics. What is the sense in which you're special, compared to them, what is the *regularity* that wants explaining? You're much, much more probable and hence more relevant for decision-making heuristics. You remember *expecting* normality and not chaos, and remember having that expectation met. That expectation was formed under the same considerations that define the corresponding past experiences as probable, even if that probability is logically non-transparent in mere psychological expectation, and becomes apparent mostly in retrospect and on reflection. But there are other instances of yourself out there, unimportant in their measure, that have had some strange experiences not explained by their normative anticipation.

Your explanation still doesn't work for me, I'm afraid.

Do you mean "prior" as part of my mind's software, or "prior" as something ethereal and universal? If the former, how can my tiny brain have beliefs about all elementary particles in the universe, why did evolution build such a thing if robots using ordinary software can survive just fine, and where should I tweak my mind if I want to win the lottery? If the latter, what makes you believe that there *is* such a prior, and isn't this "measure" just reality-fluid by another name, which is a well-known antipattern? Or is there some third alternative that I missed?

The disparity between the level of detail in reality/prior and the imprecision and mutability of psychological anticipation was an open problem for the attack on the problem that I made in autumn (and discussed previously here).

This problem is solved by identifying the prior (the notion of reality) not with explicit data given by psychological anticipation, but with *normative* anticipation. That is, reality is explained as that which we *should* expect, where the shouldness of expectation is not a line from the Litany of Tarski, suggesting how one ought to keep an accurate map of reality, but literally an explanation of what reality is.

The multi-level conceptual models that humans build are models of uncertainty, expressing logical uncertainty about the conclusions that should be drawn from past observations. There is only one level of reality in the same sense there is only one mathematical structure behind the many axiomatic definitions that specify it. Reality is, in a sense, what a Bayesian superintelligence would conclude given the knowledge and observations that humans have. But as with morality, we don't have that definition explicitly anywhere, and can only learn more and more detail, and as with morality, the notion is normative, so you can't solve any problems by changing the question ("where should I tweak my mind if I want to win the lottery").

A big question remaining is how do we learn from observations, in what sense do the observations confer knowledge, what is the difference between such knowledge and other kinds of knowledge. This requires facing some problems that UDT avoided by refusing to treat observations as knowledge.

> This problem is solved by identifying the prior (the notion of reality) not with explicit data given by psychological anticipation, but with *normative* anticipation. That is, reality is explained as that which we *should* expect, where the shouldness of expectation is not a line from the Litany of Tarski, suggesting how one ought to keep an accurate map of reality, but literally an explanation of what reality is.

I don't understand how this is different from believing in reality-fluid. If it's the same thing, I cannot accept that. If it's different, could you explain how?

This is an explanation of reality in terms of decision-theoretic heuristics we carry in our heads, as a notion similar to morality and platonic truth. This is of course a mere conceptual step, it doesn't hand you much explanatory power, but I hope it can make reality a bit less mysterious. Like saying that a Boeing 747 is made out of atoms, without pointing out any specific details about its systems.

I don't understand what exactly you refer to by reality-fluid, in what sense you see an analogy, and what problem that points out. The errors and confusions of evaluating one's anticipation in practice have little bearing on how anticipation *should* be evaluated.


The inspiration I had was when I was thinking: suppose I am one of a pair of identical copies, and one was going to be killed. Would I trade a chocolate now for two chocolates for the survivor after one of us was killed? Certainly. Would I do the same with a stranger rather than an identical copy? No. These seemed like subjective anticipation type questions, but they can be solved using the appropriate utility function instead.
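Stuart's trade can be written down as a tiny utility calculation (the weighting parameter is my own hypothetical device, not anything from the comment): weight the survivor's chocolate by how much "that's me" it feels.

```python
# Hypothetical toy model: utility = your chocolate now, plus the
# survivor's chocolate scaled by an identification weight.
def u(chocolate_now, survivor_chocolate, weight):
    return chocolate_now + weight * survivor_chocolate

# Identical copy (weight 1): trading 1 bar now for 2 bars later is a gain.
assert u(0, 2, 1.0) > u(1, 0, 1.0)
# Stranger (weight 0): the same trade is a loss.
assert u(0, 2, 0.0) < u(1, 0, 0.0)
```

The "would I trade?" question then reduces to comparing two utility values, with no subjective anticipation left over.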

I'll try and see if all indexical and anticipation problems can be solved in that way...

With the UDT utility function, you still have a separate object representing the probability distribution over possible worlds; it's not part of the utility function. What subjective anticipation is in that context is anyone's guess, but I'd use something like the total measure of the possible worlds that you expect could possibly be controlled by the you-that-receives-certain-observations. This quantity can be used to estimate the importance of making optimized decisions from those control sites, compared to other control sites resulting from receiving alternative observations, which is important for scheduling computational resources to plan for alternative possibilities in advance and coordinate later.

This sense of subjective anticipation also has nothing to do with the UDT utility function, although it refers to more than the probability distribution: it also needs to establish which you-with-observations can control which possible worlds.

> With UDT utility function, you still have a separate object representing the probability distribution over possible worlds, it's not part of the utility function.

No, in the formalism of Wei's original post it's all one giant object which is not necessarily decomposable in the way you suggest. But this is probably splitting hairs.

Tentatively agree with your last paragraph, but need to understand more.

Nah, in the formalism of Wei's original post it's all one giant object.

It doesn't read this way to me. From the post:

> More generally, we can always represent your preferences as a utility function on vectors of the form ⟨E1, E2, E3, ...⟩ where E1 is an execution history of P1, E2 is an execution history of P2, and so on. [...]
>
> When it receives an input X, it looks inside the programs P1, P2, P3, ..., and uses its "mathematical intuition" to form a probability distribution P_Y over the set of vectors ⟨E1, E2, E3, ...⟩ for each choice of output string Y. Finally, it outputs a string Y* that maximizes the expected utility Sum P_Y(⟨E1, E2, E3, ...⟩) U(⟨E1, E2, E3, ...⟩).

U is still utility without probability, and the probabilities come from "mathematical intuition", which is separate from the utility assignment. That is what I said:

> you still have a separate object representing the probability distribution over possible worlds, it's not part of the utility function

Wha? The probability distribution given by math intuition isn't part of the problem statement, it's part of the solution. We already know how to infer it from the utility function in simple cases, and the idea is that it should be inferrable in principle.

When I read your comments, I often don't understand what you understand and what you don't. For the benefit of onlookers I'll try to explain the idea again anyway.

A utility function defined on *vectors of execution histories* may be a weighted sum of utility functions on *execution histories*, or it may be something more complex. For example, you may care about the total amount of chocolate you get in world-programs P1 and P2 combined. This corresponds to a "prior probability distribution" of 50/50 between the two possible worlds, if you look at the situation through indexical-uncertainty-goggles instead of UDT-goggles. Alternatively you may care about the *product* of the amounts of chocolate you get in P1 and P2, which isn't so easy to interpret as indexical uncertainty.
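Here is the same point as a toy calculation (the chocolate counts are made up): the additive utility decomposes into a weighted sum over worlds, and the multiplicative one provably does not.

```python
from itertools import product

# Hypothetical: each execution history of world-programs P1 and P2 is
# summarized by how many chocolate bars you get there (0, 1, or 2).
histories = [0, 1, 2]

def additive(e1, e2):        # care about the total across both worlds
    return e1 + e2

def multiplicative(e1, e2):  # care about the product instead
    return e1 * e2

# The additive utility equals 2 * (0.5*e1 + 0.5*e2): expected utility
# under a 50/50 "prior" over the two worlds, up to an irrelevant scale.
for e1, e2 in product(histories, repeat=2):
    assert additive(e1, e2) == 2 * (0.5 * e1 + 0.5 * e2)

# No weights w1, w2 work for the product: (1,0) and (0,1) both map to 0,
# forcing w1 = w2 = 0, yet (1,1) maps to 1. So the product isn't
# interpretable as indexical uncertainty between P1 and P2.
```

The existence of utilities like the multiplicative one is exactly why the "prior" can't always be factored back out of a UDT utility function.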

When you expect almost complete logical transparency, mathematical intuition won't specify anything more than the logical axioms. But where you expect logical uncertainty, the probabilities given by mathematical intuition play a role analogous to that of a prior distribution, with the expected utilities associated with specific execution histories taken through another expectation according to the probabilities given by mathematical intuition. I agree that to the extent mathematical intuition doesn't play a role in decision-making, UDT utilities are analogous to expected utility; but in fact it does play that role, and it's more natural to draw the analogy between the informal notion of possible worlds and execution histories rather than between possible worlds and world-programs. See also this comment.

Any decision problem still has the probability hidden (or not so hidden) in it. This trick works for Sleeping Beauty not because of a general property, but because the problems are swept under the rug even better than the probabilities.

Since it's the same problem either way (at least if your solution to the decision problem is optimal), this technique is not useful except as a way to clarify things for yourself, and it doesn't always work.

> how many chocolate bars would your putative future self have to be offered, for you to forgo one now?

What do the words "you" and "now" mean in this sentence? I think it should be rephrased but I'm not sure how.

This is more a way to motivate your mind to use decision-problem thinking rather than subjective-anticipation thinking than it is a different solution to these problems. The general idea can perhaps be made even broader: if you can get your mind to treat other people's futures as your subjective anticipation, you will have more motivation to be altruistic.

As argued here, debates about probability can profitably be replaced with decision problems. This often dissolves the debate: there is far more agreement on what decision Sleeping Beauty should take than on what probabilities she should use.
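The Sleeping Beauty point can be made concrete with a small simulation (my own toy version, with made-up payoffs): heads means one awakening, tails means two, and at each awakening Beauty may buy, for a price p, a ticket paying 1 if the coin landed tails.

```python
import random

def profit(price, trials=100_000, rng=random.Random(0)):
    """Average profit per coin-toss experiment if Beauty buys at every awakening."""
    total = 0.0
    for _ in range(trials):
        tails = rng.random() < 0.5
        awakenings = 2 if tails else 1      # tails: she is woken twice
        payoff = 1.0 if tails else 0.0
        total += awakenings * (payoff - price)
    return total / trials

# Expected profit is 1 - 1.5*p, so buying pays exactly when p < 2/3.
assert profit(0.6) > 0
assert profit(0.7) < 0
```

The 2/3 betting odds fall out of the decision problem alone; people who disagree about Beauty's "real" credence still agree she should accept bets priced below 2/3 per awakening.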

The concept of subjective anticipation or subjective probabilities that cause such difficulty here, can, I argue, be similarly replaced by a simple decision problem.

If you are going to be copied, uncopied, merged, killed, propagated through quantum branches, or have your brain tasered with amnesia pills while your parents are busy flipping coins before deciding to reproduce, and are hence unsure whether you should subjectively anticipate being you at a certain point, the relevant question should not be whether you feel vaguely connected to the putative future you in some ethereal sense.

Instead the question should be akin to: how many chocolate bars would your putative future self have to be offered, for you to forgo one now? What is the tradeoff between your utilities?

Now, altruism is of course a problem for this approach: you might just be very generous with copy #17 down the hallway, he's a thoroughly decent chap and all that, rather than anticipating being him. But humans can generally distinguish between selfish and altruistic decisions, and the setup can be tweaked to encourage the maximum urges towards winning, rather than letting others win. For me, a competitive game with chocolate as the reward would do the trick...

Unlike the Sleeping Beauty problem, this rephrasing does not instantly solve the problems, but it does locate them: subjective anticipation is encoded in the utility function. Indeed, I'd argue that subjective anticipation is the same problem as indexical utility, with a temporal twist thrown in.