[ Question ]

How do bounded utility functions work if you are uncertain how close to the bound your utility is?

by Ghatanathoah · 3 min read · 6th Oct 2021 · 14 comments


Utility Functions · Pascal's Mugging · Bounded Rationality · World Modeling

If you are trying to calculate the value of a choice using a bounded utility function, how can you be sure whether you are close or far from the bound, whatever the bound is? How do you account for uncertainty about how much utility you already have? Does this question actually make sense?

Recently I have come across arguments against using a bounded utility function to avoid Pascal’s Mugging and similar “fanaticism” problems. These arguments, which appear in Section 6 of Hayden Wilkinson’s paper “In Defense of Fanaticism” and in the Less Wrong post “Pascal's Mugging for bounded utility functions”, make a novel case against bounded utility functions. If I understand them correctly, they argue that bounded utility functions cannot work because it is impossible to know how much utility one already has. This means one cannot know how close to the bound one's utility is, and therefore how much to discount future utility by.

Wilkinson’s paper uses the example of someone with an altruistic bounded utility function that is essentially total utilitarianism.  So they want to increase the total utility of the universe and, because they have a bounded utility function, the value of additional total utility decreases as it approaches some upper bound. If I understand his argument correctly, he is saying that because this agent has a bounded utility function, they cannot calculate how good an action is without knowing lots of details about past events that their actions cannot affect.  Otherwise, how will they know how close they are to the upper bound? 

Wilkinson analogizes this to the “Egyptology” objection to average utilitarianism, where an average utilitarian is compelled to study how happy the Ancient Egyptians were before having children.  Otherwise, they cannot know if having children increases or decreases average utility. Similarly, Wilkinson argues that a total utilitarian with a bounded utility function is compelled to study Ancient Egypt in order to know how close to the bound the total utility of the world is.  This seems implausible: even if information about Ancient Egypt were easy to come by, it seems counterintuitive that it would be relevant to what you should do today.
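To make the worry concrete, here is a toy numerical sketch (my own illustration; the bounded function U(T) = T/(1+T) and all the numbers are invented): with a bounded function of the world's total utility, the ranking of two present-day options can flip depending on the unknown past total.

```python
# Illustrative sketch (not from the paper): a bounded function of the world's
# total utility, U(T) = T / (1 + T). The ranking of two present-day options
# can flip depending on the unknown past total P.

def U(total):
    return total / (1 + total)  # bounded above by 1

def expected_U(past, option):
    # option: list of (probability, utility added by the action) pairs
    return sum(p * U(past + added) for p, added in option)

safe   = [(1.0, 10)]            # add 10 units of utility for sure
gamble = [(0.5, 30), (0.5, 0)]  # 50% chance of adding 30, 50% chance of nothing

for past in (0, 100):
    better = "safe" if expected_U(past, safe) > expected_U(past, gamble) else "gamble"
    print(past, expected_U(past, safe), expected_U(past, gamble), better)
```

With a past total of 0 the sure thing wins, but with a past total of 100 the gamble wins, so this agent apparently cannot rank the two options without an estimate of how much utility the past already contains.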

“Pascal's Mugging for bounded utility functions” by Benya introduces a related problem. In this scenario, a person with a bounded utility function has lived an immensely long time in a vast utopia. Because of this, their utility level is very close to the upper bound of their bounded utility function. Pascal’s Mugger approaches them and tells them that all their memories of this utopia are fake and that they have lived for a much shorter time than they believed they had.  The mugger then offers to massively extend their lifespan for $5. The idea is that by creating uncertainty about whether their utility is approaching the bound or not, the mugger can get around the bounded utility function that normally protects from mugging.
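Here is a rough numerical version of that scenario (my own toy utility function and numbers, not Benya's): the mugger only needs a sliver of probability that the agent is far from the bound to make paying look good in expectation.

```python
import math

# A rough numerical version of Benya's scenario (the utility function and all
# numbers below are my own toy assumptions, not taken from the post).
# Utility of a life containing L happy years, bounded above by 1:

def U(years):
    return 1 - math.exp(-years / 1000)

p_fake  = 1e-6   # mugger's claimed probability that your memories are fake
lived   = 1e6    # happy years you believe you have lived (essentially at the bound)
if_fake = 10     # happy years you have actually lived, if the mugger is right
bonus   = 1e9    # extra happy years the mugger promises in exchange for $5
cost    = 1e-9   # assumed utility cost of handing over the $5

gain_if_real = U(lived + bonus) - U(lived)      # ~0: you're already at the bound
gain_if_fake = U(if_fake + bonus) - U(if_fake)  # ~0.99: you're far from the bound

expected_gain = (1 - p_fake) * gain_if_real + p_fake * gain_if_fake - cost
print(expected_gain)  # ~1e-6 > 0, so the bounded agent pays up
```

Without the uncertainty about the past (p_fake = 0) the offer is worth nothing to this agent; the mugging only works because the agent cannot be sure how close to the bound they already are.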

One way around this dilemma that seems attractive to me is to use some version of Marc Colyvan’s Relative Expected Value theory. This theory, when looking at two options, compares the differences in utility, rather than the total utility of each option. This would seem to defeat the Egyptology objection: if you cannot change how much utility the events in Ancient Egypt were worth, then you don’t factor them into your calculations when considering how close you are to the bound. Similarly, when facing Pascal’s Mugger in the far future, the person does not need to include all their past utility when considering how to respond to the mugger.  There may be other approaches like this that discount utility that is unaffected in either choice; I am not sure what the best formulation would be.

However, I am worried that this approach might result in problems with transitivity, or change the ranking of options based on how they are bundled.  For example, if an agent with a bounded utility function using Relative Expected Value theory were given 1,000 separate offers to play a lottery for $x, they might take each one.  However, they might not pay a thousand times as much to enter a single lottery for $1,000x.  Am I mistaken, or is there a way to calibrate or refine this theory to avoid this transitivity problem?

I would love it if someone had any ideas on this topic. I am very confused and do not know if this is a serious problem or if I am just missing something important about how expected utility theory works.


14 comments

I don't see any difference in difficulty of evaluation between bounded and unbounded utilities. In practice both are prohibitively difficult, and worrying about utilities in ancient Egyptian culture is both irrelevant and futile regardless of which model you choose. In practice people do not, and even perfectly rational agent idealisations probably should not, have utilities that depend only on snapshot states of the world. Hence most of the relevant theorems about expected utility and decision theory go down in flames except for unrealistic toy problems.

That aside, relative expected value is purely a patch that works around some specific problems with infinite expected values, and gives exactly the same results in all cases with finite expected values.

That aside, relative expected value is purely a patch that works around some specific problems with infinite expected values, and gives exactly the same results in all cases with finite expected values.

That's what I thought as well.  But then it occurred to me that REA might not give exactly the same results in all cases with finite expected values if one has a bounded utility function.  If I am right, this could result in scenarios where someone could have circular values or end up the victim of a money pump.

For example, imagine there is a lottery that costs $1 for a ticket and generates x utility at odds of y. The value of x is very large and the value of y is quite small, as in Pascal's mugging. A person with a bounded utility function does not enter it.  However, imagine that there is another lottery that costs a penny for a ticket, and generates 0.01x utility at odds of y. Because this person's utility function is bounded, y odds of 0.01x utility is worth a penny to them, even though y odds of x utility is not worth a dollar to them.  The person buys a ticket for a penny.  Then they are offered a chance to buy another.  Because they are using REA, they only count the difference in utility from buying the new ticket, and do not count the ticket they already have, so they buy another.  Eventually they buy 100 tickets.  Then someone offers to buy the tickets from them for a dollar. Because they have a bounded utility function, y odds of winning x are less valuable than a dollar, so they take the trade.  They are now back where they started. 

Does that make sense? Or have I made some sort of error somewhere? (maybe pennies become more valuable the less you have, which would interrupt the cycle?)  It seems like with a bounded utility function, REA might have transitivity problems like that. Or have I made a mistake and misunderstood how to discount using REA?

I am really concerned about this, because REA seems like a nice way to address the Egyptology objection to bounded utility functions. You don't need to determine how much utility already exists in the world by studying Ancient Egypt, because you only take into account the difference in utility, not the total utility, when calculating how close you are to the bound of your utility function.  Ditto for the Pascal's mugging example.  So I really want there to be a way to discount the Egyptology stuff without also generating intransitive preferences. 

I think you're right that your pennies become more valuable the less you have. Suppose you start with money M and your utility function over money is U. Assuming the original lottery was not worth playing, then xy + U(M - 1) < U(M), which rearranges to U(M) - U(M - 1) > xy. This can be thought of as saying the average slope of the utility function from M - 1 to M is greater than the constant xy.

For the second lottery, each ticket you buy means you have less money. Then the utility cost of the first lottery ticket is U(M) - U(M - 0.01), the second U(M - 0.01) - U(M - 0.02), the third U(M - 0.02) - U(M - 0.03), and so on. If the first ticket is worth buying, then 0.01xy + U(M - 0.01) > U(M), so (U(M) - U(M - 0.01)) / 0.01 < xy. This means the average slope of the utility function from M - 0.01 to M is less than the average slope from M - 1 to M, so if the utility function is continuous, there must be some other penny-sized interval within [M - 1, M] where the average slope is greater than xy. This corresponds to a ticket that is no longer worth buying because it's an even worse deal than the single ticket from the original lottery.

Also note that the value of M is completely arbitrary and irrelevant to the argument, so I think this should still avoid the Egyptology objection.
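A concrete version of this (my own sketch; the bounded money-utility function and the values of x and y are invented, and the prize is treated as additive utility as in the example above):

```python
import math

# Numerical check of the argument above. U(m) = 1 - exp(-m) is a bounded
# utility of holding $m. A full $1 ticket pays x utils with probability y,
# a $0.01 ticket pays 0.01*x utils with the same probability.

def U(m):
    return 1 - math.exp(-m)

x, y = 500_000, 1e-6     # so x*y = 0.5
money = 1.0

# Full ticket: expected prize utility 0.5 < U(1) - U(0) ~= 0.63, so don't buy.
print(x * y, U(money) - U(money - 1.0))

# Penny tickets: keep buying while the marginal prize beats the marginal cost.
tickets = 0
while tickets < 100 and 0.01 * x * y > U(money) - U(money - 0.01):
    money -= 0.01
    tickets += 1

print(tickets)  # stops at 31 tickets: the cycle breaks long before 100
```

With these numbers the $1 ticket is rejected, penny tickets are attractive at first, but buying stops after about 31 tickets, well short of the 100 needed for the money pump.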

Thank you for your reply. That was extremely helpful to have someone crunch the numbers. I am always afraid of transitivity problems when considering ideas like this, and I am glad it might be possible to avoid the Egyptology objection without introducing any.

I just thought I'd also comment on this:

maybe pennies become more valuable the less you have, which would interrupt the cycle?

Under the conditions of this scenario and some simplifying assumptions (such as your marginal utility depending only on how much money you have in each outcome), they mathematically must become more valuable somewhere between spending $0.01 and spending $1.

Without the simplifying assumptions, you can get counterexamples like someone who gets a bit of a thrill from buying lottery tickets, and who legitimately does attain higher utility from buying 100 tickets for $1 than one big ticket.

Because this person's utility function is bounded, y odds of 0.01x utility is worth a penny to them, even though y odds of x utility is not worth a dollar to them.

So you're talking about cases where (for example) the utility of winning is 1000, the marginal utility of winning 1/100th as much is 11, and this makes it more worthwhile to buy a partial ticket for a penny when it's not worthwhile to buy a full ticket for a dollar?

To me this sounds more like any non-linear utility, not specifically bounded utility.

The person buys a ticket for a penny.  Then they are offered a chance to buy another. Because they are using REA, they only count the difference in utility from buying the new ticket, and do not count the ticket they already have, so they buy another.

No. REA still compares utilities of outcomes, it just does subtraction before averaging over outcomes instead of comparison after.

Specifically, the four outcomes being compared are: spend $0.01 then win 0.01x (with probability y), spend $0.01 then lose (probability 1-y), spend $0.02 then win 0.02x (y), spend $0.02 then lose (1-y).

The usual utility calculation is to buy another ticket when

y U(spend $0.02 then win 0.02x) + (1-y) U(spend $0.02 then lose) > y U(spend $0.01 then win 0.01x) + (1-y) U(spend $0.01 then lose).

REA changes this only very slightly. It says to buy another ticket when

y (U(spend $0.02 then win 0.02x) - U(spend $0.01 then win 0.01x)) + (1-y) (U(spend $0.02 then lose) - U(spend $0.01 then lose)) > 0.

In any finite example, it's easy to prove that they're identical. There is a difference only when there are infinitely many outcomes and the sums on the LHS and RHS of the usual computation don't converge. In some cases, the REArranged sum converges.

There is no difference at all with anyone who has a bounded utility function. The averaging over outcomes always produces a finite result in that case, so the two approaches are identical.
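For what it's worth, here's a quick numerical check of that equivalence (my own sketch; the bounded utility function and the random numbers are arbitrary):

```python
import math, random

# With finitely many outcomes and a bounded U, the ordinary expected-utility
# comparison and the REA comparison never disagree.

random.seed(0)

def U(w):
    return math.tanh(w)  # some bounded utility function of "wealth" w

disagreements = 0
for _ in range(10_000):
    n = random.randint(2, 6)                       # number of outcomes
    raw = [random.random() for _ in range(n)]
    probs = [r / sum(raw) for r in raw]            # a probability distribution
    A = [random.uniform(-5, 5) for _ in range(n)]  # wealth under option A in each outcome
    B = [random.uniform(-5, 5) for _ in range(n)]  # wealth under option B in each outcome

    standard = sum(p * U(a) for p, a in zip(probs, A)) > sum(p * U(b) for p, b in zip(probs, B))
    rea = sum(p * (U(a) - U(b)) for p, a, b in zip(probs, A, B)) > 0
    disagreements += standard != rea

print(disagreements)  # 0: the two criteria pick the same option every time
```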

Thanks a lot for the reply. That makes a lot of sense and puts my mind more at ease. 

To me this sounds more like any non-linear utility, not specifically bounded utility.

You're probably right; a lot of my math is shaky.  Let me try to explain the genesis of the example I used.  I was trying to test REA for transitivity problems because I thought that it might have some further advantages over conventional theories.  In particular, it seemed to me that by subtracting before averaging, REA could avoid the two examples from the articles I referenced: 

1. The total utilitarian with a bounded utility function who needs to research how many happy people lived in ancient Egypt to establish how "close to the bound" they were and therefore how much they should discount future utility.  

2. The very long lived egoist with a bounded utility function who is vulnerable to Pascal's mugging because they are unsure of how many happy years they have lived already (and therefore how "close to the bound" they were). 

It seemed like REA, by subtracting past utility that they cannot change before doing the calculation, could avoid both those problems. I do not know if those are real problems or if a non-linear/bounded utility function with a correctly calibrated discount rate could avoid them anyway, but it seemed worthwhile to find ways around them.  But I was really worried that REA might create intransitivity issues with bounded utility functions; the lottery example I used was an example of the kind of intransitivity problem I was thinking of.

It also occurred to me that REA might avoid another peril of bounded utility functions that I read about in this article. Here is the relevant quote:

"if you have a bounded utility function and were presented with the following scary situation: “Heads, 1 day of happiness for you, tails, everyone is tortured for a trillion days” you would (if given the opportunity) increase the stakes, preferring the following situation: “Heads, 2 days of happiness for you, tails, everyone is tortured forever. (This particular example wouldn’t work for all bounded utility functions, of course, but something of similar structure would.)”

It seems like REA might be able to avoid that. If we imagine that the person is given a choice between two coins, then since they have to pick one, the "one day of happiness + a trillion days of torture" part is subtracted beforehand, so all the person needs to do is weigh the difference.  Even if we get rid of the additional complications that "tortured forever" creates by involving infinity, by replacing it with some larger number like "2 trillion days", I think it might avoid it.

But I might be wrong about that, especially if REA always gives the same answers in finite situations. If that's the case it just might be better to find a formulation of an unbounded utility function that does its best to avoid Pascal's Mugging and also the "scary situations" from the article, even if it does it imperfectly.  

Unfortunately REA doesn't change anything at all for bounded utility functions. It only makes any difference for unbounded ones. I don't get the "long lived egoist" example at all. It looks like it drags in a whole bunch of other stuff like path-dependence and lived experience versus base reality to confound basic questions about bounded versus unbounded utility.

I suspect most of the "scary situations" in these sorts of theories are artefacts of trying to formulate simplified situations to test specific principles, but accidentally throw out all the things that make utility functions a reasonable approximation to preference ordering. The quoted example definitely fits that description.

REA doesn't help at all there, though. You're still computing U(2X days of torture) - U(X days of torture) which can be made as close to zero as you like for large enough X if your utility function is monotonic in X and bounded below.
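For a concrete version of this (my own example function, chosen purely for illustration):

```python
import math

# With a monotone utility of torture-duration that is bounded below, the REA
# difference U(2X days) - U(X days) shrinks toward zero as X grows.

def U(days):
    # bounded below by -1: more torture is always worse, but only boundedly so
    return -(1 - math.exp(-days / 1e12))

for X in (1e12, 1e13, 1e14, 1e15):
    print(X, U(2 * X) - U(X))   # the magnitude heads to 0 as X grows
```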

REA doesn't help at all there, though. You're still computing U(2X days of torture) - U(X days of torture)

I think I see my mistake now, I was treating a bounded utility function using REA as subtracting the "unbounded" utilities of the two choices and then comparing the post-subtraction results using the bounded utility function. It looks like you are supposed to judge each one's utility by the bounded function before subtracting them.

Unfortunately REA doesn't change anything at all for bounded utility functions. It only makes any difference for unbounded ones.

That's unfortunate. I was really hoping that it could deal with the Egyptology scenario by subtracting the unknown utility value of Ancient Egypt and only comparing the difference in utility between the two scenarios.  That way the total utilitarian (or some other type of altruist) with a bounded utility function would not need to research how much utility the people of Ancient Egypt had in order to know how good adding happy people to the present day world is.  That just seems insanely counterintuitive.

I suppose there might be some other way around the Egyptology issue. Maybe if you have a bounded or nonlinear utility function that is sloped at the correct rate it will give the same answer regardless of how happy the Ancient Egyptians were. If they were super happy then the value of whatever good you do in the present is in some sense reduced. But the value of whatever resources you would sacrifice in order to do good is reduced as well, so it all evens out.  Similarly, if they weren't that happy, the value of the good you do is increased, but the value of whatever you sacrifice in order to do that good is increased proportionately.  So a utilitarian can go ahead and ignore how happy the ancient Egyptians were when doing their calculations. 

It seems like this might work if the bounded function gives adding happy lives diminishing returns at a reasonably steady and proportional rate (but not so steady that it is effectively unbounded and can be Pascal's Mugged).

With the "long lived egoist" example I was trying to come up with a personal equivalent to the Egyptology problem. In the Egyptology problem, a utilitarian does not know how close they are to the "bound" of their bounded utility function because they do not know how happy the ancient Egyptians were.  In the long lived egoist example, they do not know how close to the bound they are because they don't know exactly how happy and long lived their past self was.  It also seems insanely counterintuitive to say that, if you have a bounded utility function, you need to figure out exactly how happy you were as a child in order to figure out how good it is for you to be happy in the future.  Again, I wonder if a solution might be to have a bounded utility function with returns that diminish at a steady and proportional rate.
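Here is a sketch of that "steady, proportional rate" idea (my own check, with an assumed utility shape, not something taken from the papers). If the bounded function of the total T is exponential, U(T) = 1 - exp(-k*T), then the unknown past total P rescales every option's expected utility by the same factor exp(-k*P), so the ranking of future options never depends on P.

```python
import math

K = 0.001

def U(total):
    # bounded above by 1 (assuming totals never go below zero)
    return 1 - math.exp(-K * total)

def expected_U(past, option):
    # option: list of (probability, utility added by your future actions)
    return sum(p * U(past + added) for p, added in option)

safe   = [(1.0, 500)]
gamble = [(0.5, 1200), (0.5, 0)]

for past in (0, 1_000, 10_000):
    better = "safe" if expected_U(past, safe) > expected_U(past, gamble) else "gamble"
    print(past, better)   # the same option wins no matter how big the past total is
```

This is just one family of bounded functions, and it is only bounded on the upside, but it at least suggests the "it all evens out" intuition is coherent: with this shape you can ignore how happy the ancient Egyptians were, because the past only rescales everything by a common factor.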

I really still don't know what you mean by "knowing how close to the bound you are". Utility functions are just abstractions over preferences that satisfy some particular consistency properties. If the happiness of Ancient Egyptians doesn't affect your future preferences, then they don't have any role in your utility function over future actions regardless of whether it's bounded or not.

I really still don't know what you mean by "knowing how close to the bound you are".

 

What I mean is, if I have a bounded utility function where there is some value, X, and (because the function is bounded) X diminishes in value the more of it there is, what if I don't know how much X there is? 

For example, suppose I have a strong altruistic preference that the universe have lots of happy people. This preference is not restricted by time and space; it counts the existence of happy people as a good thing regardless of where or when they exist.  This preference is also agent-neutral: it does not matter whether I, personally, am responsible for those people existing and being happy, it is good regardless. This preference is part of a bounded utility function, so adding more happy people starts to have diminishing returns the closer one gets to a certain bound. This allows me to avoid Pascal's Mugging.

However, if adding more people has diminishing returns because the function is bounded, and my preference is not restricted by time, space, or agency, that means that I have no way of knowing what those diminishing returns are unless I know how many happy people have ever existed in the universe.  If there are diminishing returns based on how many people there are, total, in the universe, then the value of adding more people in the future might change depending on how many people existed in the past.

That is what I mean by "knowing how close to the bound" I am. If I value some "X", what if it isn't possible to know how much X there is? (like I said before, a version of this for egoistic preferences might be if the X is happiness over your lifetime, and you don't know how much X there is because you have amnesia or something).

I was hoping that I might be able to fix this issue by making a bounded utility function where X diminishes in value smoothly and proportionately.  So whether there were a million happy people in ancient Egypt or a billion, the returns on adding more diminish in the same proportional way.  So when I am making choices about maximizing X in the present, the amount of X I get is diminished in value, but it is proportionately diminished, so the decisions that I make remain the same.  If there was a vast population in the past, the amount of X I can generate has very small value according to a bounded utility function. But that doesn't matter because it's all that I can do.

That way, even if X decreases in value the more of it there is, it will not affect any choices I make where I need to choose between different probabilities of getting different amounts of X in the future.  

I suppose I could also solve it by making all of my preferences agent-relative instead of agent-neutral, but I would like to avoid that. Like most people I have a strong moral intuition that my altruistic preferences should be agent-neutral.  I suppose it might also get me into conflict with other agents with bounded agent-relative utility functions if we value the same act differently.

If I am explaining this idea poorly, let me try directing you to some of the papers I am referencing. Besides the one I mentioned in the OP, there is this one by Beckstead and Thomas (pages 16-18 discuss it). 

This whole idea seems to be utterly divorced from what utility means. Fundamentally, utility is based on an ordering of preferences over outcomes. It makes sense to say that you don't know what the actual outcomes will be, that's part of decision under risk. It even makes sense to say that you don't know much about the distribution of outcomes, that's decision under uncertainty.

The phrasing here seems to be a confused form of decision making under uncertainty. Instead of the agent saying "I don't know what the distribution of outcomes will be", it's phrased as "I don't know what my utility function is".

I think things will be much clearer when phrased in terms of decision making under uncertainty: "I know what my utility function is, but I don't know what the probability distribution of outcomes is".

The phrasing here seems to be a confused form of decision making under uncertainty. Instead of the agent saying "I don't know what the distribution of outcomes will be", it's phrased as "I don't know what my utility function is".

I think part of it is that I am conflating two different parts of the Egyptology problem. One part is uncertainty: it isn't possible to know certain facts about the welfare of Ancient Egyptians that might affect how "close to the bound" you are. The other part is that most people have a strong intuition that those facts aren't relevant to our decisions, whether we are certain of them or not. But there's this argument that those facts are relevant if you have an altruistic bounded utility function because they affect how much diminishing returns your function has.

For example, I can imagine that if I were an altruistic immortal who was alive during ancient Egypt, I might be unwilling to trade a certainty of a good outcome in ancient Egypt for an uncertain amazingly terrific outcome in the far future, because of my bounded utility function. That's all good; it should help me avoid Pascal's Mugging.  But once I've lived until the present day, it feels like I should continue acting the same way I did in the past, continue to be altruistic, but in a bounded fashion.  It doesn't feel like I should conclude that, because of my achievements as an altruist in Ancient Egypt, there is less value to being an altruist in the present day.

In the case of the immortal, I do have all the facts about Ancient Egypt, but they don't seem relevant to what I am doing now.  But in the past, in Egypt, I was unwilling to trade certain good outcomes for uncertain terrific ones because my bounded utility function meant I didn't value the larger ones linearly.  Now that the events of Egypt are in the past and can't be changed, does that mean I value everything less?  Does it matter if I do, if the decrease in value is proportionate?  If I treat altruism in the present day as valuable, does that contradict the fact that I discounted that same value back in Ancient Egypt? 

I think that's why I'm phrasing it as being uncertain of what my utility function is. It feels like if I have a bounded utility function, I should be unwilling (within limits) to trade a sure thing for a small possibility of vast utility, thereby avoiding Pascal's Mugging and similar problems. But it also feels like, once I have that sure thing, and the fact that I have it cannot be changed, I should be able to continue seeking more utility, and how many sure things I have accumulated in the past should not change that.

Yes, splitting the confounding factors out does help. There still seem to be a few misconceptions and confounding things though.

One is that bounded doesn't mean small. On a scale where the welfare of the entire civilization of Ancient Egypt counts for 1 point of utility, the bound might still be more than 10^100.

Yes, this does imply that after 10^70 years of civilizations covering 10^30 planet-equivalents, the importance to the immortal of the welfare of one particular region of any randomly selected planet of those 10^30 might be less than that of Ancient Egypt. Even if they're very altruistic.