Comments

> In this example, he told you that you were not in one of the places you're not in (the Vulcan Desert). If he always does this, then the probability is 1/4; if you had been in the Vulcan Desert, he would have told you that you were not in one of the other three.

That can't be right -- if the probability of being in the Vulcan Mountain is 1/4 and the probability of being in the Vulcan Desert (per the guard) is 0, then the probability of being on Earth would have to be 3/4.
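For concreteness, here's a quick enumeration under one reading of the guard's policy -- that he names one of the three places you're not in, uniformly at random. The equal priors over the four locations and the two Earth place names are my own assumptions:

```python
from fractions import Fraction

# Four equally likely starting locations (the Earth names are made up).
locations = ["Vulcan Mountain", "Vulcan Desert", "Earth Mountain", "Earth Desert"]
prior = {loc: Fraction(1, 4) for loc in locations}

# P(guard says "you are not in the Vulcan Desert" | you are at loc),
# assuming he picks uniformly among the three places you are not in.
likelihood = {loc: (Fraction(0) if loc == "Vulcan Desert" else Fraction(1, 3))
              for loc in locations}

evidence = sum(prior[loc] * likelihood[loc] for loc in locations)
posterior = {loc: prior[loc] * likelihood[loc] / evidence for loc in locations}

for loc in locations:
    print(loc, posterior[loc])
# Vulcan Mountain 1/3, Vulcan Desert 0, Earth Mountain 1/3, Earth Desert 1/3
```

Under that reading the three remaining places come out at 1/3 each -- neither 1/4 for the Vulcan Mountain nor 3/4 for Earth. A different guard policy would of course give different numbers.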


I'm not sure about the first case:

> if you don't have a VNM utility function, you risk being mugged by wandering Bayesians

I don't see why this is true. While "VNM utility function => safe from wandering Bayesians" holds, it's not clear to me that "no VNM utility function => vulnerable to wandering Bayesians" does. I think the vulnerability to wandering Bayesians comes from failing to satisfy Transitivity rather than from failing to satisfy Completeness. I have not done the math on that.
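The Transitivity half, at least, is the standard money-pump argument. A toy version (my own construction -- it says nothing about the Completeness question):

```python
# An agent with cyclic preferences: it prefers A to B, B to C, and C to A,
# and will pay a small fee to swap what it holds for something it prefers.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y): x strictly preferred to y

holding, money = "A", 0.0
fee = 0.01

# A wandering Bayesian cycles through offers; the agent accepts every "upgrade".
for offered in ["C", "B", "A"] * 3:
    if (offered, holding) in prefers:
        holding, money = offered, money - fee

print(holding, round(money, 2))  # ends up holding "A" again, 0.09 poorer
```

Nothing in that pump relies on a gap in the ordering, only on the cycle.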

But the general point, about approximation, I like. Utility functions in game theory (decision theory?) problems normally involve only a small space. I think completeness is an entirely safe assumption when talking about humans deciding which route to take to their destination, or what bets to make in a specified game. My question comes from the use of VNM utility in AI papers like this one: http://intelligence.org/files/FormalizingConvergentGoals.pdf, where agents have a utility function over possible states of the universe (with the restriction that the space is finite).

Is the assumption that an AGI reasoning about universe-states has a utility function an example of reasonable use, for you?


Thanks for this response. On notation: I want world-states, $w$, to be specific outcomes rather than random variables. As such, $u(w)$ is a real number, and the expectation of a real number could only be defined as itself: $\mathbb{E}[u(w)] = u(w)$ in all cases. I left aside all the discussion of 'lotteries' in the VNM Wikipedia article, though maybe I ought not have done so.

I think your first two bullet points are wrong. We can't reasonably interpret ~ as 'the agent's thinking doesn't terminate'. ~ refers to indifference between two options, so if $A \succ B$ and $B \sim C$, then $A \succ C$. Equating 'unable to decide between two options' and 'two options are equally preferable' will lead to a contradiction or a trivial case when combined with transitivity. I can cook up something more explicit if you'd like?
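Roughly the shape of what I have in mind (only a sketch, with $A$, $B$, $C$ standing for arbitrary options): suppose the agent strictly prefers $A$ to $C$, but its deliberation fails to terminate on $\{A, B\}$ and on $\{B, C\}$. Reading non-termination as indifference gives

$$A \sim B \ \text{ and } \ B \sim C \implies A \sim C \quad \text{(transitivity)},$$

which contradicts $A \succ C$. Ruling such cases out in general would mean the agent never strictly prefers anything joined by a chain of stalled deliberations, which heads toward the trivial case.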

There's a similar problem with ~ meaning 'the agent chooses randomly', provided the random choice isn't prompted by equality of preferences.

This comment has sharpened my thinking, and it would be good for me to directly prove my claims above -- will edit if I get there.