If Omega offered to give you 2^n utils with probability 1/n, what n would you choose?

This problem was invented by Armok from #lesswrong. Discuss.
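(For anyone who wants the naive expected-value reading made concrete, here is a minimal sketch in Python, assuming utils simply add and that you maximize expected utils.)

```python
from fractions import Fraction

def expected_utils(n: int) -> Fraction:
    """Expected payoff of Omega's bet: 2^n utils with probability 1/n."""
    return Fraction(2 ** n, n)

for n in (1, 2, 3, 10, 100):
    print(f"n = {n:>3}: EV = 2^{n}/{n} = {float(expected_utils(n)):.4g} utils")
```

On this naive reading the expected value is unbounded in n, which is where most of the disagreement below comes from.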


Pascal's Mugging 2.0?

Muggers do not just randomly give you stuff!

Haha, you're right. I don't know why calling it Pascal's Mugging made sense to me.

Taboo utils. Tell me the offer in terms of my own preferences instead. A utility function is mathematically equivalent to a VNM-consistent agent's preferences, so this should be fine. And the preferences are the underlying physical thing, not "utils".

I think we throw the word "utils" around far too much here without actually intuiting what it means to be talking about a scale-invariant difference of two values of a function that translates from preferences over lotteries to the reals.

The question isn't well-defined. Utility is a measure of value for different states of the world. You can't just "give x utility", you have to actually alter some state of the world, so to be meaningful the question needs to be formulated in terms of concrete effects in the world - lives saved, dollars gained, or whatever.

Humans also seem to have bounded utility functions (as far as they can be said to have such at all), so the "1 utility" needs to be defined so that we know how to adjust for our bounds.

I think this kind of criticism makes sense only if you postulate that there's some kind of extra, physical restriction on utilities. Perhaps humans have bounded utility functions, but do all agents? It sure seems like decision theory should be able to handle agents with unbounded utility functions. If this is impossible for some reason, well, that's interesting in its own right. To figure out why it's impossible, we first have to notice our own confusion.

Sure, but the question was "what n would you choose", not "what n would an arbitrary decision-making agent choose".

Imagine you're a paperclipper; then it's how many paperclips will be created.

For something more prone to failure but easier for some to imagine: imagine they are sealed boxes, each containing a few thousand unique people having different and meaningful fun together for eternity.

Imagine you're a paperclipper; then it's how many paperclips will be created.

Not necessarily. The relationship between clips and utility is positive, not necessarily linear.

For something more prone to failure but easier for some to imagine: imagine they are sealed boxes, each containing a few thousand unique people having different and meaningful fun together for eternity.

Thanks, this is better.

One approach would be to figure out the magnitude of the implicit risks that I take all the time. E.g. if a friend offers me a car ride that will save me 15 minutes over taking a train, I tend to take the offer, even though death rates in car rides are higher than in regional trains. While I don't assign death infinite or even maximal negative value (there are obviously many things that would be worse than death), I would very much prefer to avoid it. Whatever the exact probability of dying when taking a car is, it's low enough that it meets some cognitive cutoff for "irrelevant". I would then pick the n that gives the highest expected value without having a probability so low that I would ignore it when assessing the risks of everyday life.

I'm not sure of how good this approach is, but at least it's consistent.
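A minimal sketch of that cutoff rule, assuming a hypothetical probability floor p_min (the 1-in-10,000 figure below is made up for illustration, not taken from the comment):

```python
from fractions import Fraction

def best_n_with_cutoff(p_min: Fraction) -> int:
    """Largest n whose win probability 1/n still clears the 'irrelevant risk' floor.

    Since the expected value 2^n / n is non-decreasing in n (for integer n >= 1),
    the best admissible choice is simply the largest n with 1/n >= p_min.
    """
    return int(1 / p_min)

# Hypothetical cutoff: risks below 1 in 10,000 get ignored in everyday life.
p_min = Fraction(1, 10_000)
n = best_n_with_cutoff(p_min)
print(f"choose n = {n}: a 1/{n} chance of 2^{n} utils")
```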


n=1 to maximize the probability I find out how utils are measured.

The problem here is that "learning how utils are measured" may be worth more than one util! So your actual reward would be less than one util because it would have to account for the utils from learning about utils.

On the other hand, if we do not know the value of learning about utils (we could get between 0 and 1 utils of information about utils), we end up with a variable number of extra utils between 0 and 1. But if we don't know how many utils it is, we will get very little information out of it, so its utility is likely to approach one, unless learning how utils work is worth so much that even a tiny fraction of that knowledge is worth almost one util.

Okay, so let's make this more concrete: say you opt for n=1. Omega gives you $1. How much would you pay to know that $1 = 1 util? I might pay $20 for that information. So if $1 = 1 util, Omega has actually given me 21 utils. But then Omega is giving me 21 utils, which is 20 more than he promised, a contradiction!

It might be possible to describe this sort of system using differential equations, find some equilibria, and decide where utility settles, but if what you receive ends up being something like "you decide to buy a pet dog" this really isn't that useful.


One non-contradictive way this could happen is that I pick n=1, and then Omega says: "The mere knowledge this outcome carries with it is worth 10 utils to you. I will therefore subject you to five seconds of torture to bring your total utility gained down to 1 util."

If the game only happens once, we might not want to do this. However, if this game is repeated, or if other people are likely to be faced with this decision, then it makes sense to do this the first time. Then we could try to figure out what the optimal value of n is.

To continue with the same example: suppose I found out that this knowledge is worth 10 utils to me. Then I get a second chance to bet. Since I'll never meet Omega again (and presumably never again need to use these units) this knowledge must boost my expected outcome from the bet by 10 utils. We already know that my actions in a state of ignorance are to pick n=1 which has an expected value of 1 util. So my optimal actions ought to be such that my expected outcome is 11 utils, which happens approximately when n=6 (if we can pick non-integer values for n, we can get this result more exactly).
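A quick numeric check of that n ≈ 6 figure, on the assumption that the relevant quantity is just the expected payoff 2^n / n and the target is 11 utils:

```python
def ev(n: float) -> float:
    """Expected payoff 2^n / n of Omega's bet."""
    return 2 ** n / n

print(ev(6))   # 2^6 / 6 ≈ 10.67, i.e. roughly the 11 utils targeted above

# Solving 2^n / n = 11 for real n by bisection between 6 and 7:
lo, hi = 6.0, 7.0
for _ in range(50):
    mid = (lo + hi) / 2
    if ev(mid) < 11:
        lo = mid
    else:
        hi = mid
print(lo)      # ≈ 6.06, the "more exact" non-integer answer
```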

I'm not really sure what's being calculated in that last paragraph there. Knowing the measurement of a single util seems to be valuable OUTSIDE of this problem. Inside the problem, the optimal actions (which is to say the actions with the highest expected value) continue to be writing out the busy beaver function as fast as possible, &c.

Also, if Omega balances out utils with positive and negative utils, why is e more likely to torture you for five seconds and tell you "this is -9 utils" than, say, torture you for 300 years then grant you an additional 300 years of life in which you have a safe nanofactory and an Iron Man suit?

It seems to me that the vast majority of actions Omega could take would be completely inscrutable, and give us very little knowledge about the actual value of utils.

A better example might be the case in which waiting for one second at a traffic light is worth one util, and after your encounter Omega disappears without a word. Omega then begins circulating a picture of a kitten on the internet. Three years later, a friend of yours links you the picture just before you leave for work. Having to tell them to stop sending you adorable pictures when you're about to leave cancels out the value of seeing the adorable picture, and the one second later that you get out the door turns out to be a second you do not have to spend waiting at a traffic light.

If this is how utils work, then I begin to understand why we have to break out the busy beaver function... in order to get an outcome akin to $1000 out of this game, you would need to win around 2^20 utils (by my rough and highly subjective estimate). A 5% chance of $1000 is MUCH MUCH better than a guarantee of one second less waiting at a traffic light.

I seem to have digressed.

n=1 to maximize the probability I find out how utils are measured.

You can't outthink a tautology. By definition, you don't care about maximising the probability of finding out how utils are measured more than you care about utils themselves.


True, I am being somewhat flippant. However, I'm being upvoted more than I am when I take my time thinking about a comment, so I must be doing something right.

True, I am being somewhat flippant.

Unless you meant that comma to be a period or perhaps a semicolon, you miss the point. (To agree with something that is different from what was said is to make an alliance straw man.)

Flippant or not, you are signalling (and worse, perpetuating) a common confusion about decision theory.

However, I'm being upvoted more than I am when I take my time thinking about a comment, so I must be doing something right.

"I may be wrong but I am approved of!" A troubling sentiment, albeit practical.


I probably meant that comma to be a period, then. I agree with what you said; I think that I was being flippant in ignoring that point in my first comment.

"I may be wrong but I am approved of!" A troubling sentiment, albeit practical.

I do think that I am approved of because I'm not entirely wrong. Measuring utility is complicated. I think it's possible that a half-serious comment that touches on the issue actually contributes more than a comment that worked out all the complications in depth. Maybe it starts more conversation and makes people think more.

I probably meant that comma to be a period, then.

Thank you. I value being comprehended. :)

My standard Omega response is Just Say No.

But... as people are saying, it depends entirely on how the utility function is structured. I like the idea of a 1/10th chance of getting 1024 utils, but I think that's because I have ten fingers.

My standard Omega response is Just Say No.

You are the first person I've seen advocate "zero boxing" on Newcomb's problem. ;)

Well, I attempted to three-box and people got mad.

Does this require carrying around a box with $1000 in it at all times, just in case Omega shows up?

While it's admittedly not that related to this problem, where it's mostly pretty clear what's meant, I'd like to also express my annoyance regarding the use of utils. Given a utility function U, the individual numbers U(A), U(B), etc., are not actually meaningful. Better would be [things that can be expressed in terms of] differences in utility, U(B)-U(A). However, really all that's meaningful are [things that can be expressed in terms of] things of the form (U(B)-U(A))/(U(C)-U(A)). We really need to be a lot more careful with our use of this.
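To spell out why: a VNM utility function is only defined up to a positive affine transformation U' = aU + b with a > 0, which changes the raw values and even the differences, but not ratios of differences:

\[
\frac{U'(B)-U'(A)}{U'(C)-U'(A)}
= \frac{\bigl(aU(B)+b\bigr)-\bigl(aU(A)+b\bigr)}{\bigl(aU(C)+b\bigr)-\bigl(aU(A)+b\bigr)}
= \frac{a\bigl(U(B)-U(A)\bigr)}{a\bigl(U(C)-U(A)\bigr)}
= \frac{U(B)-U(A)}{U(C)-U(A)}.
\]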

If I took the scenario at face value -- in other words, treating the question as one of pure mathematics -- the mathematical answer is of course BusyBeaver(me).

In reality, if I received some sensory input sequence that somehow led me to believe some relevant approximation to the offer was being truthfully made with probability greater than epsilon, then my answer would depend on the actual input sequence, because that would determine the model I had of the probability distribution of resources available to 'Omega' in the event of the offer being at least semi-truthful, and of the relationship between the offered utils and my actual utility function (which is of course itself not formally known). Depending on that input sequence, I could see myself saying "eh, 1" or "umm... the Bekenstein bound of the Hubble radius; if you would be so gracious as to correct my assumption that this is your lab account quota, I would be grateful" or a great many other things.

If I took the scenario at face value -- in other words, treating the question as one of pure mathematics -- the mathematical answer is of course BusyBeaver(me).

I presume that (me) is supposed to be some large number here. This leads to a separate question: is Omega limited to Turing-computable calculations? It isn't normally relevant to what Omega can do, but I'll be slightly annoyed if I say to Omega BusyBeaver(1000) and Omega gets embarrassed for having limits.

I think he's saying the largest number he could come up with.

Yes.

I assume that you have to pick n such that n >= 1, right? ;)


Wasted opportunity. Should have just answered. :)

If Omega offered to give you 2^n utils with probability 1/n, what n would you choose?

To the extent the question is meaningful it is mostly a prompt for the definition of 'utility'. With a sophisticated definition of 'utility' the answer is implicit in the question ("arbitrarily big or arbitrarily small"). If the 'utility' concept is taken to exclude preferences regarding risk then the question is arbitrary in the same sense that leaving out any other kind of preference from the utility function would be.

I should note that given my utility function I have a suspicion that particularly large values of n are not just physically impossible to achieve but conceptually impossible. That is, if '1 util' is taken to be "1 dust speck that I can't even feel removed from my eye right now", then even infinite universes could not satisfy me given an n of 3^^^3, a googolplex, or even a googol. 'Utility' is unbounded but there just isn't anything in preferences that scales that far.

I'd spend as long as I could trying to find the highest possible n.

I've considered this problem with just n utils. Basically, pick the highest finite number you could. And consider how much you missed out on because you didn't square it.

I've considered this problem with just n utils. Basically, pick the highest finite number you could.

If in doubt write really big numbers that look kind of like "9 -> 9 -> 9 -> 9 -> 9". Is there a standard operation notation available that is more ridiculously excessive than Conway Chained Arrow? Preferably one that is easy to express rapidly in a format accepted by Omega.

I suppose I am then faced with a new problem, that of maximising the number that I can express to Omega. Do I stand there chanting Conway chains at Omega in all my waking hours until I die? Do I engage in or fund mathematical research into, and popularization of, ways to more efficiently express stupid numbers? Perhaps I ought to dedicate my entire existence to creating an FAI, which would then necessarily dedicate all accessible entropy in the future light cone to answering Omega for me or through a post-human me.

Here I encounter a recursion. It seems that the rational response would be the "attempt to create an FAI to facilitate answering with a bigger n" for exactly the same reason that a big n is good in the first place. A stupidly excessively big payoff makes a tiny chance worthwhile.

If in doubt write really big numbers that look kind of like "9 -> 9 -> 9 -> 9 -> 9". Is there a standard operation notation available that is more ridiculously excessive than Conway Chained Arrow? Preferably one that is easy to express rapidly in a format accepted by Omega.

Pretty much anything involving the busy beaver function. See also Scott Aaronson's "Who can name the bigger number?"

Comment to upvote because I'm karma-greedy. And the dollar and util questions are really quite different; one is about the maths of infinities and the other about discounting large sums of money.

Also, I got the ideas for both questions; I just didn't think anyone could have missed them, and I was slightly more curious about the money one, so I asked that one first.

An isomorphic problem appeared here before. (Second half of the post.)

Similarly this thread and this Wondermark; and it's sort of implied, depending on the position you take on The Lifespan Dilemma.


Thanks for the reminder that the intention of the post is about doing math, and the problem of doing it.

Depends on what a util is. The probability of an event is a pretty well-defined concept, but what a util means to me is free-floating without something to compare it to. If one util is a slice of Wonder bread on an empty stomach for a well-nourished person, then let's go with n = 10.

Why does it matter? How exactly are you choosing n?

Well, imagine if a util were like ten years of constant joy. In that case, I'd rather have n = 1. Similarly, if a util is like finding a penny, I really don't care what n is, but I may as well go with a pretty large one so that if I do "win", I actually notice it. I chose n = 10 because a 10% chance for 1024 slices of Wonder bread on an empty stomach sounds much better than a sure shot for one slice of Wonder bread when I'm hungry (I'd barely even notice that), and also much better than a tiny chance for some ridiculously high number of utilons (I almost certainly won't be able to enjoy it). n = 9 and n = 11 would also be okay choices; I didn't arrive at 10 analytically.

The problem here is that a "util" (insofar as this is meaningful by itself) isn't like any one consistent thing, because by definition it isn't subject to diminishing marginal utility, etc.

I know 1024 slices of Wonder bread isn't 1024 times as useful to a regular hungry person as one slice of Wonder bread. The first slice is the one the util is defined by; then all the additional utils would be like "something else" that gives exactly as much enjoyment, or just that exact amount of enjoyment but 1023 more times.

I believe you meant to say "probability 1/n".

I suppose that in theory one should pick an n that approaches infinity. It's an affront to our instincts, because we don't have any psychological basis for rewards toward which we should not be risk-averse, but utility, by definition, isn't subject to diminishing marginal utility.

A high likelihood of some small utility isn't worth even a tiny chance of becoming this guy.

This would depend on whether Omega gave me some reference for what a util was worth, and in what currency it would give me utils. If Omega pays out in dollars, then I only need enough to reach escape velocity (meaning, be able to give up my day job). Then I can generate more utils myself.

This reasoning is like saying, "If Omega offers to give you 2^n additional IQ points with probability 1/n, what n would you choose?" n=6 would be enough for me to start giving myself IQ points.

From the definition of utility, I should prefer the maximum of the function (2^n)/n where n>=1 (the offer is not possible for n<1). That function keeps increasing with n, so I would choose n=infinity, or the maximum value if there were one. However, humans do not have well-defined utility functions, and there is probably a rough maximum to the amount of possible total utility (as in, there is probably nothing whose infinitesimal chance of happening I would prefer to some arbitrarily small but positive and guaranteed utility).
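As a quick check of the "keeps increasing" claim, treating n as a real variable:

\[
\frac{d}{dn}\,\frac{2^n}{n} \;=\; \frac{2^n\,(n\ln 2 - 1)}{n^2} \;>\; 0
\quad\text{for } n > \frac{1}{\ln 2} \approx 1.44,
\]

so over the integers the expected value is 2 utils at both n = 1 and n = 2 and strictly increasing from n = 2 onward, with no finite maximum.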