All of Scott Garrabrant's Comments + Replies

So, I am trying to talk about the preferences of the couple, not the preferences of either individual. You might reject that the couple is capable of having preferences; if so, I am curious whether you think Bob is capable of having preferences but the couple is not, and if so, why?

I agree that if you can do arbitrary utility transfers between Alice and Bob at a given exchange rate, then they should maximize the sum of their utilities (at that exchange rate) and do a side transfer. However, I am assuming here that efficient compensation is not possible. I specifically made it a relatively big decision so that compensation would not obviously be possible.
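A minimal sketch of the transferable-utility case being agreed to here, with made-up option names and utilities:

```python
# Made-up numbers: with transferable utility at a 1:1 exchange rate, the
# couple picks the option with the highest total utility, then settles up.
options = {
    "move": {"alice": 10, "bob": -4},  # total: 6
    "stay": {"alice": 0, "bob": 3},    # total: 3
}

best = max(options, key=lambda o: sum(options[o].values()))
print(best)  # 'move'

# A side transfer of 8 from Alice to Bob makes "move" a strict Pareto
# improvement over "stay": Alice ends at 10 - 8 = 2 > 0, Bob at -4 + 8 = 4 > 3.
```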

1 · Closed Limelike Curves · 6mo
Whether the couple is capable of having preferences probably depends on your definition of "preferences." The more standard terminology for preferences held by a group of people is a "social choice function." The main problem we run into is that social choice functions don't behave like preferences. By Gibbard's theorem, we can guarantee that any social choice function is either Pareto-inefficient or unobservable (because it's not incentive-compatible). Sometimes, Pareto inefficiency is the price we must pay for people to volunteer information. (E.g., random dictatorship is Pareto-inefficient if we're risk-averse, but it encourages everyone to state their true preferences.) But I don't see what information we're getting here. Everyone's preferences were already known ahead of time; there was no need to choose the inefficient option.
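To make the random-dictatorship point concrete, here is a minimal sketch with made-up voter utilities; it shows the mechanism eliciting true favorites while producing a lottery that the compromise option Pareto-dominates:

```python
import random

# Hypothetical utilities: each voter loves their own favorite, hates the
# other's, and rates the compromise C at 0.7.
utilities = {
    "voter1": {"A": 1.0, "B": 0.0, "C": 0.7},
    "voter2": {"A": 0.0, "B": 1.0, "C": 0.7},
}

def random_dictatorship(ballots):
    """Implement a uniformly random voter's stated favorite (truthful:
    your ballot only matters when you are the dictator, so report honestly)."""
    dictator = random.choice(list(ballots))
    return max(ballots[dictator], key=ballots[dictator].get)

# Truthful ballots yield a 50/50 lottery over A and B: expected utility
# 0.5 for each voter.  Both would prefer C (0.7), but random dictatorship
# can never select it -- the ex-ante Pareto inefficiency described above.
for voter, u in utilities.items():
    print(voter, "lottery EU:", 0.5 * u["A"] + 0.5 * u["B"], "| C:", u["C"])
```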

Here are the most interesting things about these objects to me that I think this post does not capture. 

Given a distribution over non-negative, non-identically-zero infrafunctions (each defined up to a positive scalar multiple), the pointwise geometric expectation exists, and is an infrafunction (up to a positive scalar multiple).

(I am not going to give all the math and be careful here, but hopefully this comment will provide enough of a pointer if someone wants to investigate this.)

This is a bit of a miracle. Compare this with arithmetic expectation of utility fun... (read more)
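As a toy numerical check of the geometric-expectation claim (functions on an interval rather than actual infrafunctions; the specific f and g are arbitrary choices): the pointwise geometric mean of positive concave functions stays concave, and rescaling an input only rescales the output, so the operation descends to functions-up-to-positive-scalar.

```python
import numpy as np

x = np.linspace(0.01, 0.99, 199)
f = np.sqrt(x)             # concave, positive
g = 1.5 - (x - 0.3) ** 2   # concave, positive on this grid

geo = np.sqrt(f * g)       # pointwise geometric mean (a 50/50 "expectation")

def concave_on_grid(v):
    # Concavity on a uniform grid: second differences are nonpositive.
    return bool(np.all(np.diff(v, 2) <= 1e-12))

print(concave_on_grid(f), concave_on_grid(g), concave_on_grid(geo))  # True True True

# Scaling f by c > 0 scales the geometric mean by sqrt(c), so the result is
# well-defined up to positive scalar multiples of the inputs.
print(np.allclose(np.sqrt((2 * f) * g), np.sqrt(2) * geo))  # True
```

By contrast, the arithmetic mean (f + g) / 2 is also concave, but it is not well-defined if f and g are each only specified up to separate positive rescalings.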

I have been thinking about this same mathematical object (although with a different orientation/motivation) as where I want to go with a weaker replacement for utility functions.

I get the impression that for Diffractor/Vanessa, the heart of a concave-value-function-on-lotteries is that it represents the worst case utility over some set of possible utility functions. For me, on the other hand, a concave value function represents the capacity for compromise -- if I get at least half the good if I get what I want with 50% probability, then I have the capacity... (read more)

Then it is equivalent to the thing I call B2 in edit 2 in the post (assuming A1-A3).

In this case, your modified B2 is my B2, and your B3 is my A4, which follows from A5 assuming A1-A3 and B2, so your suspicion that these imply C4 is stronger than my Q6, which is false, as I argue here.

However, without A5, it is actually much easier to see that this doesn't work. The counterexample here satisfies my A1-A3, your weaker version of B2, your B3, and violates C4.

Your B3 is equivalent to A4 (assuming A1-3).

Your B2 is going to rule out a bunch of concave functions. I was hoping to only use axioms consistent with all (continuous) concave functions.

2 · Vanessa Kosoy · 10mo
Oops. What if instead of "for any p" we go with "there exists p"?

I am skeptical that it will be possible to salvage any nice VNM-like theorem here that makes it all the way to concavity. It seems like the jump necessary to fix this counterexample will be hard to express in terms of only a preference relation.

The answers to Q3, Q4 and Q6 are all no. I will give a sketchy argument here.

Consider the one-dimensional case, where the lotteries are represented by real numbers in the interval [0,1], and consider a continuous quasi-concave function f : [0,1] → ℝ with a unique argmax. Let ⪰ be the preference order given by A ⪰ B if and only if f(A) ≥ f(B).

f is continuous and quasi-concave, which means ⪰ is going to satisfy A1, A2, A3, A4, and B2. Further, since f is monotonically increasing up to the unique argmax, and ... (read more)

2 · Scott Garrabrant · 10mo
I am skeptical that it will be possible to salvage any nice VNM-like theorem here that makes it all the way to concavity. It seems like the jump necessary to fix this counterexample will be hard to express in terms of only a preference relation.

You can also think of A5 in terms of its contrapositive: For all A and B, if A ≻ B, then for all p ∈ (0,1), pA+(1−p)B ≻ B.

This is basically just the strict version of A4. I probably should have written it that way instead. I wanted to use ⪰ instead of ≻, because it is closer to the base definition, but that is not how I was natively thinking about it, and I probably should have written it the way I think about it.

Alex's counterexample as stated is not a counterexample to Q4, since it is in fact concave.
 

I believe your counterexample violates A5, for an appropriate choice of A, B, and p.

1 · James Payor · 10mo
Seems right, oops! A5 is here saying that if any part of my u is flat, it had better stay flat! I think I can repair my counterexample, but it looks like you've already found your own.

That does not rule out your counterexample. The condition is never met in your counterexample.

6 · AlexMennen · 10mo
Oh, derp. You're right.

The answer to Q1 is no, using the same counterexample here. However, the spirit of my original question lives on in Q4 (and Q6).

Claim: A1, A2, A3, A5, and B2 imply A4.

Proof: Assume we have a preference ordering that satisfies A1, A2, A3, A5, and B2, and consider lotteries A and B, with A ⪰ B. Let C = pA+(1−p)B for an arbitrary p ∈ (0,1). It suffices to show C ⪰ B. Assume not, for the purpose of contradiction. Then (by axiom A1), B ≻ C. Thus by axiom B2 there exists a D such that B ≻ D ≻ C. By axiom A3, we may assume D = qB+(1−q)C for some q ∈ (0,1). Observe that D = rA+(1−r)B, where r is positive, since otherwise... (read more)

Oh, nvm, that is fine, maybe it works.

Oh, no, I made a mistake: this counterexample violates A3. However, the proposed fix still doesn't work, because you just need a function that is decreasing in the probability of x, but does not hit 0, and then jumps to 0 when the probability of x is 1.

4 · Scott Garrabrant · 10mo
Oh, nvm, that is fine, maybe it works.

I haven't actually thought about whether A5 implies A4, though. It is plausible that it does (together with A1-A3, or some other simple axioms).

When A ≻ B, we get A4 from A5, so it suffices to replace A4 with the special case that A ∼ B. If A ∼ B, and C = pA+(1−p)B is a mixture of A and B, then all we need to do is have any Y such that ..., then we can get a lottery between Y and C by A3, and then it will also be a mixture of A and B, contradicting A5, since ...

A1,A2,A3,A5 do ... (read more)

(and everywhere you say "good" and "bad", they are the non-strict versions of the words)

1 · James Payor · 10mo
yep!

Your understanding of A4 is right. In A5, "good" should be replaced with "bad."

1 · James Payor · 10mo
Okay, I now think A5 implies: "if moving by Δ is good, then moving by any negative multiple −nΔ is bad". Which checks out to me re concavity.
2 · Scott Garrabrant · 10mo
(and everywhere you say "good" and "bad", they are the non-strict versions of the words)
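For reference, a short sketch of why the Δ claim "checks out re concavity" for a concave u, in the non-strict form just noted:

```latex
% x is a convex combination of x + Δ and x − nΔ:
\[
  x = \tfrac{n}{n+1}\,(x + \Delta) + \tfrac{1}{n+1}\,(x - n\Delta),
  \qquad\text{so concavity gives}\qquad
  u(x) \;\ge\; \tfrac{n}{n+1}\,u(x + \Delta) + \tfrac{1}{n+1}\,u(x - n\Delta).
\]
% If u(x + Δ) ≥ u(x), rearranging yields u(x − nΔ) ≤ u(x):
% if moving by Δ is (weakly) good, then moving by −nΔ is (weakly) bad.
```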

You have the inequality backwards. You can't apply A5 when the mixture is better than the endpoint, only when the mixture is worse than the endpoint.

1 · James Payor · 10mo
Got it, thanks!

That proposed axiom to add does not work. Consider the function on lotteries over {x,y,z} that gives utility 1 if z is in the support, and otherwise gives utility equal to the probability of x. This function is concave but not continuous, satisfies A1-A5 and the extra axiom I just proposed, and cannot be made continuous.
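A numerical spot-check of this counterexample (a sketch; lotteries are encoded as probability triples (P(x), P(y), P(z))):

```python
import numpy as np

# u = 1 if z is in the support, else the probability of x.
def u(p):                 # p = (P(x), P(y), P(z))
    return 1.0 if p[2] > 0 else p[0]

rng = np.random.default_rng(0)

def random_lottery():
    q = rng.random(3)
    if rng.random() < 0.5:
        q[2] = 0.0        # sometimes put no mass on z, to hit the boundary
    return q / q.sum()

# Concavity: u(t*p + (1-t)*q) >= t*u(p) + (1-t)*u(q) on random pairs.
ok = all(
    u(t * p + (1 - t) * q) >= t * u(p) + (1 - t) * u(q) - 1e-12
    for p, q, t in ((random_lottery(), random_lottery(), rng.random())
                    for _ in range(10_000))
)
print(ok)  # True

# Discontinuity: u = 1 on every lottery with P(z) > 0, but u = P(x) on the
# face P(z) = 0, so u jumps at that face and cannot be made continuous.
```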

4 · Scott Garrabrant · 10mo
Oh, no, I made a mistake: this counterexample violates A3. However, the proposed fix still doesn't work, because you just need a function that is decreasing in the probability of x, but does not hit 0, and then jumps to 0 when the probability of x is 1.

I edited the post to remove the continuity assumption from the main conclusion. However, my guess is that if we get a VNM-like result, we will want to add back in another axiom that gives us continuity.

I meant for each conclusion to add to the previous one, so this actually also answers the main question I stated (by violating continuity), but not the main question I care about. I will edit the post to say that I actually care about concavity, even without continuity.

4 · Scott Garrabrant · 10mo
I edited the post to remove the continuity assumption from the main conclusion. However, my guess is that if we get a VNM-like result, we will want to add back in another axiom that gives us continuity.

Nice! This, of course, seems like something we should salvage, by e.g. adding an axiom that if A is strictly preferred to B, there should be a lottery strictly between them.

4 · AlexMennen · 10mo
I think the way I would rule out my counterexample is by strengthening A3 to if A≻B and B≻C then there is p∈(0,1)...
4 · Scott Garrabrant · 10mo
That proposed axiom to add does not work. Consider the function on lotteries over {x,y,z} that gives utility 1 if z is in the support, and otherwise gives utility equal to the probability of x. This function is concave but not continuous, satisfies A1-A5 and the extra axiom I just proposed, and cannot be made continuous.

To see why A1-A4 are not enough to prove C4 on their own, consider the preference relation on the space of lotteries between two outcomes X and Y, parametrized by the probability p of X, such that all lotteries are equivalent if p ≤ 1/2, and if p > 1/2, higher values of p are preferred. This satisfies A1-A4, but cannot be expressed with a concave function, since we would have to have u(0) = u(1/2) < u(1), contradicting concavity (u(1/2) ≥ (u(0)+u(1))/2 would force u(1/2) ≥ u(1)). We can, however, express it with a quasi-concave function: u(p) = max(p, 1/2).
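A quick numerical check of this example (a sketch, using the u(p) = max(p, 1/2) representation above):

```python
import numpy as np

u = lambda p: max(p, 0.5)   # quasi-concave representation from above

rng = np.random.default_rng(1)
# Quasi-concavity: u(mixture) >= the min of the endpoints' values.
print(all(
    u(t * a + (1 - t) * b) >= min(u(a), u(b)) - 1e-12
    for a, b, t in rng.random((10_000, 3))
))  # True

# No concave representation: it would need u(0) == u(0.5) < u(1), but
# concavity forces u(0.5) >= (u(0) + u(1)) / 2 > u(0.5), a contradiction.
```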

I believe using A4 (and maybe also A5) in multiple places will be important to proving a positive result. This is because A1, A2, and A3 are extremely weak on their own.

A1-A3 are not even enough to prove C1. To see a counterexample, take any well ordering on [0,1], and consider the preference ordering over the space of lotteries on a two-element set of deterministic outcomes. If two lotteries have probabilities of the first outcome that differ by a rational number, they are equivalent; otherwise, you compare them according to your well ordering. Th... (read more)

Even if EUM doesn't get "utility", I think it at least gets "utility function", since "function" implies cardinal utility rather than ordinal utility and I think people almost always mean EUM when talking about cardinal utility.

I personally care about cardinal utility, where the magnitude of the utility is information about how to aggregate rather than information about how to take lotteries, but I think this is a very small minority usage of cardinal utility, so I don't think it should change the naming convention very much. 

I think UDT as you specified it has utility functions. What do you mean by doesn't have independence? I am advocating for an updateless agent model that might strictly prefer a mixture between outcomes A and B to either A or B deterministically. I think an agent model with this property should not be described as having a "utility." Maybe I am conflating "utility" with expected utility maximization/VNM and you mean something more general?

If you mean by utility something more general than utility as used in EUM, then I think it is mostly a term... (read more)
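One concrete (hypothetical) way to get an agent model that strictly prefers a mixture, in the spirit of the geometric-rationality framing elsewhere in these comments: aggregate two internal stakeholders by the geometric mean of their expected utilities.

```python
import math

# Outcome A favors stakeholder 1, outcome B favors stakeholder 2 (made-up payoffs).
u = {"A": (1.0, 0.0), "B": (0.0, 1.0)}

def score(lottery):   # lottery: {outcome: probability}
    eu1 = sum(p * u[o][0] for o, p in lottery.items())
    eu2 = sum(p * u[o][1] for o, p in lottery.items())
    return math.sqrt(eu1 * eu2)   # geometric mean of expected utilities

print(score({"A": 1.0}))             # 0.0
print(score({"B": 1.0}))             # 0.0
print(score({"A": 0.5, "B": 0.5}))   # 0.5 -- strictly prefers the coin flip
```

No expected-utility maximizer ranks lotteries this way, since the expected utility of a mixture is always a convex combination of the component expected utilities.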

2 · Scott Garrabrant · 1y
Even if EUM doesn't get "utility", I think it at least gets "utility function", since "function" implies cardinal utility rather than ordinal utility and I think people almost always mean EUM when talking about cardinal utility. I personally care about cardinal utility, where the magnitude of the utility is information about how to aggregate rather than information about how to take lotteries, but I think this is a very small minority usage of cardinal utility, so I don't think it should change the naming convention very much. 

Although I note that my flavor of rejecting utility functions is trying to replace them with something more general, not something incompatible.

I feel like reflective stability is what caused me to reject utility. Specifically, it seems like it is impossible to be reflectively stable if I am the kind of mind that would follow the style of argument given for the independence axiom. It seems like there is a conflict between reflective stability and Bayesian updating. 

I am choosing reflective stability, in spite of the fact that losing updating makes things very messy and confusing (especially in the logical setting), because reflective stability is that important.

When I lose updating, the independence axiom goes along with it, and thus so does utility.

4 · Wei Dai · 1y
UDT still has utility functions, even though it doesn't have independence... Is it just a terminological issue? Like you want to call the representation of value in whatever the correct decision theory turns out to be something besides "utility"? If so, why?
5 · Scott Garrabrant · 1y
Although I note that my flavor of rejecting utility functions is trying to replace them with something more general, not something incompatible.

I think the short statement would be a lot weaker (and better IMO) if "inability" were replaced with "inability or unwillingness". "Inability" implies a hierarchy where falsifiable statements are better than the poetry, since the only reason you would resort to poetry is that you are unable to turn it into falsifiable statements.

4 · [DEACTIVATED] Duncan Sabien · 1y
I changed it to say "aren't doing so (or can't)."

I would also love a more personalized/detailed description of how I made this list, and what I do poorly. 

I think I have imposter syndrome here. My top guess is that I do actually have some skill in communication/discourse, but my identity/inside view really wants to reject this possibility. I think this is because I (correctly) think of myself as very bad at some of the subskills related to passing people's ITTs.

From listening to that podcast, it seems like even she would not advocate for preferring a lottery between two outcomes to either of the pure components.

This underrated post is pretty good at explaining how to translate between FFSs and DAGs.

Hmm, examples are hard. Maybe the intuitions contribute to the concept of edge instantiation?

I note that EU maximization has this baggage of never strictly preferring a lottery over outcomes to the component outcomes, and your steelmen appear to me not to carry that baggage. I think that baggage is actually doing work in some people's reasoning and intuitions.

1 · rotatingpaguro · 10mo
I think you are referring to the case where an agent wishes to be unpredictable in an adversarial situation, right? (I genuinely do not feel confident I understand what you said.) If so, isn't this lottery on a different, let's say ontological, level, instead of the level of "lotteries" that define its utility?
2 · Wei Dai · 1y
Do you have any examples of this?

I am not sure if there is any disagreement in this comment. What you say sounds right to me. I agree that UDT does not really set us up to want to talk about "coherence" in the first place, which makes it weird to have it be formalized in terms of expected utility maximization.

This does not make me think intelligent/rational agents will/should converge to having utility.

4 · Vladimir_Nesov · 1y
I think coherence of unclear kind is an important principle that needs a place in any decision theory, and it motivates something other than pure updatelessness. I'm not sure how your argument should survive this. The perspective of expected utility and the perspective of updatelessness both have glaring flaws, respectively unwarranted updatefulness and lack of a coherence concept. They can't argue against each other in their incomplete forms. Expected utility is no more a mistake than updatelessness.

Yeah, I don't have a specific UDT proposal in mind. Maybe instead of "updateless" I should say "the kind of mind that might get counterfactually mugged" as in this example.

FDT and UDT are formulated in terms of expected utility. I am saying that they advocate for a way of thinking about the world that makes it so that you don't just Bayesian-update on your observations and forget about the other possible worlds.

Once you take on this worldview, the Dutch books that made you believe in expected utility in the first place are less convincing, so maybe we want to rethink utility.

I don't know what the FDT authors were thinking, but it seems like they did not propagate the consequences of the worldview into reevaluating what preferences over outcomes look like.

No, at least probably not at the time that we lose all control. 

However, I expect that systems that are self-transparent and can easily self-modify might quickly converge to reflective stability (and thus updatelessness). They might not, but I think the same arguments that might make you think they would develop a utility function can also be used to argue that they would develop updatelessness (and thus possibly also not develop a utility function).

Here is a situation where you make an "observation" and can still interact with the other possible worlds. Maybe you do not want to call this an observation, but if you don't call it an observation, then true observations probably never really happen in practice.

I was not trying to say that that is relevant to the coin flip directly. I was trying to say that the move used to justify the coin flip is the same move that is rejected in other contexts, and so we should be open to the idea of agents that refuse to make that move, and thus might not have utility.

1 · Optimization Process · 1y
  Ah, that's the crucial bit I was missing! Thanks for spelling it out.

I think UDT is as you say. I think it is also important to clarify that you are not updating on your observations when you decide on a policy. (If you did, it wouldn't really be a function from observations to actions; this is important to emphasize in UDT.)

Note that I am using "updateless" differently than "UDT". By updateless, I mostly mean anything that is not performing Bayesian updates and forgetting the other possible worlds when it makes observations. UDT is more of a specific proposal. "Updateless" is more of a negative property, defined by lack of... (read more)

6 · Eric Chen · 1y
Thanks, the clarification of UDT vs. "updateless" is helpful. But now I'm a bit confused as to why you would still regard UDT as "EU maximisation, where the thing you're choosing is policies". If I have a preference ordering over lotteries that violates independence, the vNM theorem implies that I cannot be represented as maximising EU. In fact, after reading Vladimir_Nesov's comment, it doesn't even seem fully accurate to view UDT as taking in a preference ordering over lotteries. Here's the way I'm thinking of UDT: your prior over possible worlds uniquely determines the probabilities of a single lottery L, and selecting a global policy is equivalent to choosing the outcomes of this lottery L. Now different UDT agents may prefer different lotteries, but this is in no sense expected utility maximisation. This is simply: some UDT agents think one lottery is the best, others might think another is the best. There is nothing in this story that resembles a cardinal utility function over outcomes that the agents are multiplying with their prior probabilities to maximise EU with respect to. It seems that to get an EU representation of UDT, you need to impose coherence on the preference ordering over lotteries (i.e. over different prior distributions), but since UDT agents come with some fixed prior over worlds which is not updated, it's not at all clear why rationality would demand coherence in your preference between lotteries (let alone coherence that satisfies independence).

You could take as an input parameter to UDT a preference ordering over lotteries that does not satisfy the independence axiom, but is a total order (or total preorder if you want ties). Each policy you can take results in a lottery over outcomes, and you take the policy that gives your favorite lottery. There is no need for the assumption that your preferences over lotteries are vNM.
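A minimal sketch of this (the worlds, payoffs, and the worst-case-then-expectation ranking are all hypothetical choices; that ranking is a total preorder that violates independence):

```python
import itertools

worlds = {"heads": 0.5, "tails": 0.5}          # the prior, never updated
observation = {"heads": "H", "tails": "T"}
actions = ["a", "b"]
payoff = {("heads", "a"): 0, ("heads", "b"): 3,
          ("tails", "a"): 2, ("tails", "b"): 1}

def lottery(policy):
    """The lottery over payoffs induced by following `policy` in every world."""
    return [(pr, payoff[w, policy[observation[w]]]) for w, pr in worlds.items()]

def rank(lot):  # not vNM: compare worst cases first, break ties by expectation
    return (min(x for _, x in lot), sum(pr * x for pr, x in lot))

policies = [dict(zip("HT", acts)) for acts in itertools.product(actions, repeat=2)]
best = max(policies, key=lambda pi: rank(lottery(pi)))
print(best)  # {'H': 'b', 'T': 'a'} -- worst case 2 beats every alternative
```

The agent is perfectly well-defined even though no cardinal utility function is being multiplied against the prior.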

Note that I don't think that we really understand decision theory, and have a coherent proposal. The only thing I feel like I can say confidently is that if you are convinced by... (read more)

2 · Eric Chen · 1y
Okay this is very clarifying, thanks!  If the preference ordering over lotteries violates independence, then it will not be representable as maximising EU with respect to the probabilities in the lotteries (by the vNM theorem). Do you think it's a mistake then to think of UDT as "EU maximisation, where the thing you're choosing is policies"? If so, I believe this is the most common way UDT is framed in LW discussions, and so this would be a pretty important point for you to make more visibly (unless you've already made this point before in a post, in which case I'd love to read it).

Also, if by "have a utility function" you mean something other than "try to maximize expected utility," I don't know what you mean. To me, the cardinal (as opposed to ordinal) structure of preferences that makes me want to call something a "utility function" is about how to choose between lotteries.

1 · Eric Chen · 1y
Yeah by "having a utility function" I just mean "being representable as trying to maximise expected utility".

Note that I am not saying here that rational agents can't have a utility function. I am only saying that they don't have to.

That depends on what you mean by "suitably coherent." If you mean they need to satisfy the vNM independence axiom, then yes. But the point is that I don't see any good argument why updateless agents should satisfy that axiom. The argument for that axiom passes through wanting to have a certain relationship with Bayesian updating.

7 · Scott Garrabrant · 1y
Also, if by "have a utility function" you mean something other than "try to maximize expected utility," I don't know what you mean. To me, the cardinal (as opposed to ordinal) structure of preferences that makes me want to call something a "utility function" is about how to choose between lotteries.
2 · Eric Chen · 1y
Ah okay, interesting. Do you think that updateless agents need not accept any separability axiom at all? And if not, what justifies using the EU framework for discussing UDT agents?  In many discussions on LW about UDT, it seems that a starting point is that agent is maximising some notion of expected utility, and the updatelessness comes in via the EU formula iterating over policies rather than actions. But if we give up on some separability axiom, it seems that this EU starting point is not warranted, since every major EU representation theorem needs some version of separability. 

My take is that the concept of expected utility maximization is a mistake. In Eliezer's Coherent decisions imply consistent utilities, you can see the mistake where he writes:

From your perspective, you are now in Scenario 1B. Having observed the coin and updated on its state, you now think you have a 90% chance of getting $5 million and a 10% chance of getting nothing.

Reflectively stable agents are updateless. When they make an observation, they do not limit their caring as though all the possible worlds where their observation differs do not exist.

As far ... (read more)

3 · Chris_Leong · 1y
Are you just referring to the VNM theorems, or are there other theorems you have in mind?

Note to self: It seems like the independence condition breaks for counterfactual mugging, assuming you think we should pay. Assume P is paying $50 and N is not paying; M is receiving $1 million if you would have paid in the counterfactual, and zero otherwise. We have N>P, but 0.5P+0.5M>0.5N+0.5M, in contradiction to independence. The issue is that the value of M is not independent of the choice between P and N.
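Those numbers, run through directly (a sketch of the note above):

```python
# Chris_Leong's numbers, made concrete.  M's payout depends on whether you
# would pay, which is exactly why independence gets no grip here.
pay_cost, prize = -50, 1_000_000

eu_pay     = 0.5 * pay_cost + 0.5 * prize  # would-pay: M pays out -> 499975.0
eu_not_pay = 0.5 * 0 + 0.5 * 0             # would-not-pay: M pays 0 -> 0.0

print(0 > pay_cost)          # True: N > P in isolation
print(eu_pay > eu_not_pay)   # True: yet 0.5P + 0.5M > 0.5N + 0.5M
```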
4 · Lauro Langosco · 1y
I'm confused about the example you give. In the paragraph, Eliezer is trying to show that you ought to accept the independence axiom, cause you can be Dutch booked if you don't. I'd think if you're updateless, that means you already accept the independence axiom (cause you wouldn't be time-consistent otherwise). And in that sense it seems reasonable to assume that someone who doesn't already accept the independence axiom is also not updateless.
4 · NicholasKross · 1y
I haven't followed this very closely, so I'm kinda out-of-the-loop... Which part of UDT/updatelessness says "don't go for the most utility" (no-maximization) and/or "utility cannot be measured / doesn't exist" (no-"foundation of utility", debatably no-consequentialism)? Or maybe "utility" here means something else?
1 · Optimization Process · 1y
This is very surprising to me! Perhaps I misunderstand what you mean by "caring," but: an agent who's made one observation is utterly unable[1] to interact with the other possible worlds where the observation differed; and it seems crazy[1] to choose your actions based on something they can't affect; and "not choosing my actions based on X" is how I would define "not caring about X."

[1] Aside from "my decisions might be logically correlated with decisions that agents in those worlds make (e.g., clone prisoner's dilemma)," or "I am locked into certain decisions that a CDT agent would call suboptimal, because of a precommitment I made (e.g., Newcomb)," or other fancy decision-theoretic stuff. But that doesn't seem relevant to Eliezer's lever-coin-flip scenario you link to?
4 · ESRogs · 1y
Do you expect learned ML systems to be updateless? It seems plausible to me that updatelessness of agents is just as "disconnected from reality" of actual systems as EU maximization. Would you disagree?

I notice that I'm confused. I've recently read the paper "Functional decision theory..." and it's formulated explicitly in terms of expected utility maximization.

What should I read to learn about propositions like "Reflectively stable agents are updateless" and "utility theory is a theoretical mistake"?

6 · Vladimir_Nesov · 1y
To ask for decisions to be coherent, there need to be multiple possible situations in which decisions could be made, coherently across these situations or not. A UDT agent that picks a policy faces a single decision in a single possible situation. There is nothing else out there for the decision in this situation to be coherent with. The options offered for the decision could be interpreted as lotteries over outcomes, but there is still only one decision to pick one lottery among them all, instead of many situations where the decision is to pick among a particular smaller selection of lotteries, different in each situation. So asking for coherence means asking what the updateless agent would do if most policies could be suddenly prohibited just before the decision (but after its preference is settled), if it were to update on the fact that only particular policies remained as options, which is not what actually happens.
3 · Scott Garrabrant · 1y
Note that I am not saying here that rational agents can't have a utility function. I am only saying that they don't have to.
7 · Eric Chen · 1y
Don't updateless agents with suitably coherent preferences still have utility functions?

Thanks, I must have not clicked "submit".

One thing you should be tracking is that I think Caltech and Mudd (and probably others, but I don't remember) require a very large amount of humanities as well as classes in science fields you might not be interested in. You can get almost all the Mudd opportunities with almost none of the requirements at either Pitzer or Pomona. Both these schools (as of 2011; my data might be out of date) have almost no gen-ed requirements, while Mudd has quite a lot. The degree looks worse (at least for Pitzer; Pomona might be comparable), but you can take almost all th... (read more)

I believe this post is (for the most part) accurate and demonstrates understanding of what is going on with logical induction. Thanks for writing (and coding) it!

4 · Alex Flint · 1y
Thanks Scott

Note that I tentatively think this will be the last post in the Geometric Rationality sequence.
