*Disclaimer: this started as a comment on Risk aversion vs. concave utility function, but it grew way too big, so I turned it into a full-blown article. I posted it to Main since I believe it to be useful enough, and since it replies to an article on Main.*

## Abstract

When you have to choose between two options, one with a certain (or almost certain) outcome and another that involves more risk, the gamble always carries a cost, even if it has the higher expectation in terms of utilons (paperclips, money, ...): between the moment you make your decision and the moment you learn whether your gamble failed or succeeded (between the time you buy your lottery ticket and the time the winning number is called), you have less precise information about the world than if you had taken the "safe" option. That uncertainty may force you into suboptimal choices during the period of doubt, which means that "risk aversion" is not totally irrational.

Even shorter: knowledge has value since it allows you to optimize; taking a risk temporarily lowers your knowledge, and that is a cost.

## Where does risk aversion come from?

In his (or her?) article, dvasya argues that risk aversion over raw quantities (money, paperclips, ...) is explained by a concave utility function: a sure gain can beat a gamble with a higher expected payoff simply because utility grows slower than the payoff itself.

But if you adjust the bets for utility, then, as a perfect utility maximizer, you should choose the option with the highest expectation, regardless of the risk involved. Between being sure of getting 10 utilons and having a 0.1 chance of getting 101 utilons (and a 0.9 chance of getting nothing), you should take the bet. Or you're not rational, says dvasya.
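The arithmetic behind that claim is a one-liner; a trivial check, using the numbers above:

```python
sure = 10                     # guaranteed utilons
gamble = 0.1 * 101 + 0.9 * 0  # expected utilons of the risky option: 10.1

print(gamble > sure)  # True: the gamble has the higher expectation
```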

My first objection is that we aren't perfect utility maximizers. We run on limited (and flawed) hardware, with limited computing power. The first problem with taking a risk is that it makes all further computations much harder. You buy a lottery ticket, and until you know whether you won, every decision forces you to ponder things like "if I win the lottery, I'll buy a new house, so is it really worth fixing that broken door now?" Asking yourself all those questions means you're less Free to Optimize: you'll burn your limited hardware on pondering those issues, leading to stress, fatigue and less efficient decision making.

For us humans, with our limited and buggy hardware, those problems are significant, and they are the main reason I am personally (slightly) risk-averse. I don't like uncertainty: it makes planning harder, and it wastes precious computing power on pondering what to do. But that doesn't seem to apply to a perfect utility maximizer with infinite computing power. So risk aversion seems to be a consequence of biases, if not a bias in itself. Is it really?

## The double-bet of Clippy

So, let's take Clippy. Clippy is a pet paperclip optimizer, using the utility function proposed by dvasya: *u* = sqrt(*p*), where *p* is the number of paperclips in the room he lives in. In addition to being cute and loving paperclips, our Clippy has lots of computing power, so much that he has no trouble tracking probabilities. Now, we'll offer Clippy some bets, and see what he should do.

### Timeless double-bet

At the beginning, we put 9 paperclips in the room. Clippy has 3 utilons. He purrs a bit to show us he's happy with those 9 paperclips, looks at us with his lovely eyes, and hopes we'll give him more.

But we offer him a bet: either we give him 7 paperclips, or we flip a coin. If the coin comes up heads, we give him 18 paperclips. If it comes up tails, we give him nothing.

If Clippy doesn't take the bet, he gets 16 paperclips in total, so *u=4*. If Clippy takes the bet, he ends up with 9 paperclips (*u=3*) with p=0.5, or 9+18=27 paperclips (*u=5.20*) with p=0.5. His expected utility is *u=4.10*, so he should take the bet.
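This step is easy to verify in a few lines of Python (`u` and the numbers come straight from the setup above):

```python
from math import sqrt

def u(paperclips):
    """Clippy's utility function, u = sqrt(p), as proposed by dvasya."""
    return sqrt(paperclips)

u_safe = u(9 + 7)                     # refuse: the 9 clips in the room plus 7 sure ones
u_bet = 0.5 * u(9 + 18) + 0.5 * u(9)  # take B1: a fair coin decides between +18 and +0

print(round(u_safe, 2), round(u_bet, 2))  # 4.0 4.1
```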

Now, regardless of whether he took the first bet (called B1 from now on), we offer him a second bet (B2): this time, he has to pay us 9 paperclips to enter. Then we roll a 10-sided die. If it shows 1 or 2, we give him a jackpot of 100 paperclips; otherwise, nothing. Clippy can be in three states when offered the second deal:

- He didn't take B1. Then he has 16 clips. If he doesn't take B2, he stays with 16 clips, and *u=4*. If he takes B2, he'll have 7 clips with p=0.8 or 107 clips with p=0.2, for an expected utility of *u=4.19*.
- He took B1, and lost it. He has 9 clips. If he doesn't take B2, he stays with 9 clips, and *u=3*. If he takes B2, he'll have 0 clips with p=0.8 or 100 clips with p=0.2, for an expected utility of *u=2*.
- He took B1, and won it. He has 27 clips. If he doesn't take B2, he stays with 27 clips, and *u=5.20*. If he takes B2, he'll have 18 clips with p=0.8 or 118 clips with p=0.2, for an expected utility of *u=5.57*.

So, if Clippy didn't take the first bet, or if he took it and won, he should take the second bet. If he took the first bet and lost it, he can't afford the second bet, since it risks a very bad outcome: no more paperclips, not even a single tiny one!
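The three cases can be checked with one small helper (same `u` and the bet parameters as above):

```python
from math import sqrt

def u(p):
    """Clippy's utility: u = sqrt(p)."""
    return sqrt(p)

def b2(clips):
    """Expected utility of paying 9 clips to enter B2 (win 100 with p = 0.2)."""
    return 0.8 * u(clips - 9) + 0.2 * u(clips - 9 + 100)

for clips in (16, 9, 27):  # no B1 / took B1 and lost / took B1 and won
    print(clips, round(u(clips), 2), round(b2(clips), 2))
```

The middle line shows the one state where refusing B2 (u=3) beats taking it (u=2).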

### And the devil "time" comes in...

Now, let's make things a bit more complicated, and more realistic. Before, we ran things fully sequentially: first we resolved B1, and then we offered and resolved B2. But let's change B1 a tiny bit. We no longer flip the coin and hand over the clips immediately. Clippy tells us whether he takes B1 or not, but we wait one day before giving him the clips if he didn't take the bet, or before flipping the coin and then giving him the clips if he did.

Clippy's utility function doesn't involve time, and we'll assume it doesn't change whether he gets the clips today or tomorrow. So for him, the new B1 is exactly like the old B1.

But now, we offer him B2 **after** Clippy has made his choice on B1 (taking the bet or not), but **before** flipping the coin for B1, if he did take the bet.

Now, for Clippy, there are only two situations: he took B1 or he didn't. If he didn't take B1, we are in the same situation as before, with an expected utility of *u=4.19*.

If he did take B1, we have to consider four possibilities:

- He loses both bets. He ends up with no paperclips (9+0-9), and is very unhappy. He has *u=0* utilons. That arises with p=0.4.
- He wins B1 and loses B2. He ends up with 9+18-9 = 18 paperclips, so *u=4.24*, with p=0.4.
- He loses B1 and wins B2. He ends up with 9-9+100 = 100 paperclips, so *u=10*, with p=0.1.
- He wins both bets. He ends up with 9+18-9+100 = 118 paperclips, so *u=10.86*, with p=0.1.

In the end, if he takes B2, his expected utility is *u=3.78*.

So, if Clippy takes B1, he then shouldn't take B2. Since he doesn't know whether he won or lost B1, he can't afford the risk of taking B2.

But should he take B1 in the first place? If, when offered B1, he knows he'll be offered B2 later on, then he should refuse B1 and take B2, for a utility of 4.19. If, when offered B1, he doesn't know about B2, then taking B1 seems the more rational choice. But once he has taken B1, **until he knows** whether he won or not, he cannot afford to take B2.
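The delayed-resolution calculation boils down to summing over the four joint outcomes (probabilities and clip counts as above):

```python
from math import sqrt

def u(p):
    """Clippy's utility: u = sqrt(p)."""
    return sqrt(p)

# Clippy has committed to B1 (the coin is not yet flipped) and must decide on B2.
take_b2 = (0.4 * u(9 + 0 - 9)            # lose B1, lose B2
           + 0.4 * u(9 + 18 - 9)         # win B1, lose B2
           + 0.1 * u(9 + 0 - 9 + 100)    # lose B1, win B2
           + 0.1 * u(9 + 18 - 9 + 100))  # win B1, win B2
refuse_b2 = 0.5 * u(9) + 0.5 * u(9 + 18)

print(round(take_b2, 2), round(refuse_b2, 2))  # 3.78 4.1
```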

### The Python code

For people interested in those issues, here is a simple Python script I used to fine-tune the numerical parameters of the double-bet problem, so that my numbers lead to the situation I was pointing at. Feel free to play with it ;)
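The script itself is not reproduced in this copy, so here is a minimal reconstruction of what it checks (the function and parameter names are mine; the numbers and the five conditions are exactly the ones worked out above):

```python
from math import sqrt

def u(p):
    """Clippy's utility: u = sqrt(p)."""
    return sqrt(p)

def check(start=9, safe=7, b1_win=18, b2_cost=9, b2_jackpot=100, p1=0.5, p2=0.2):
    """Return the five conditions the numbers must satisfy for the paradox."""
    u_safe = u(start + safe)
    u_b1 = p1 * u(start + b1_win) + (1 - p1) * u(start)

    def b2_gain(clips):
        # Expected utility gained (or lost) by entering B2 with `clips` clips.
        return ((1 - p2) * u(clips - b2_cost)
                + p2 * u(clips - b2_cost + b2_jackpot)
                - u(clips))

    # Expected utility of entering B2 *before* knowing B1's outcome:
    u_blind = sum(
        pa * pb * u(start + a - b2_cost + b)
        for pa, a in ((p1, b1_win), (1 - p1, 0))
        for pb, b in ((p2, b2_jackpot), (1 - p2, 0))
    )

    return {
        "B1 beats the safe option": u_b1 > u_safe,
        "B2 is good without B1": b2_gain(start + safe) > 0,
        "B2 is good after winning B1": b2_gain(start + b1_win) > 0,
        "B2 is bad after losing B1": b2_gain(start) < 0,
        "B2 is bad while B1 is unresolved": u_blind < u_b1,
    }

for condition, holds in check().items():
    print(condition, ":", holds)
```

With the default parameters, all five conditions hold; change any of them and you can see which part of the paradox breaks first.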

## A hunter-gatherer tale

If you didn't like my Clippy, despite him being cute and purring with happiness when he sees paperclips, let's switch to another tale.

Daneel is a young hunter-gatherer. He's smart, but his father committed a crime when Daneel was still a baby, and was exiled from the tribe. Daneel doesn't know much about the crime - no one speaks of it, and he doesn't dare bring up the topic himself. He has a low social status in the tribe because of that story. Nonetheless, he's attracted to Dors, the daughter of the chief. And he knows Dors likes him back, for she always smiles at him when she sees him, never makes fun of him, and gave him a nice knife after his coming-of-age ceremony.

According to the laws of the tribe, Dors can choose her husband freely, and the husband will become the new chief. But Dors also has to choose a husband who is accepted by the rest of the tribe: if the tribe doesn't accept his leadership, they could revolt, or fail to obey, and that could spell disaster for the whole tribe. Daneel knows he has to raise his status in the tribe if he wants Dors to be able to choose him.

So Daneel wanders further and further into the forest. He wants to find something new, to show the tribe his usefulness. That day, going a bit further than usual, he finds a place more humid than the forest the tribe usually roams. It has a new kind of tree, one he has never seen before. Lots of them. And they carry a yellow-red fruit that looks yummy. "I could tell the others about this place, and bring them a few fruits. But then, what if the fruit makes them sick? They'll blame me, I'll lose all my chances... they may even banish me. But I can do better. I'll eat one of the fruits myself. If I'm not sick tomorrow, I'll bring fruits to the tribe and show them where I found them. They'll praise me for it. And maybe Dors will then be able to take me more seriously... and if I get sick, well, everyone gets sick every now and then; just one fruit shouldn't kill me, it won't be a big deal." So Daneel makes his utility calculation (I told you he was smart!), and finds a positive expected outcome. So he takes the risk: he picks one fruit, and eats it. Sweet, a bit acid but not too much. Nice!

Now, Daneel goes back to the tribe. On the way back he gets a rabbit, and a few roots and plants for the shaman - an average day. But then he sees the tribe gathered around the central totem. In the middle of the crowd, Dors with... no... not him... Eto! Eto is the strongest lad of Daneel's age, and he wants Dors too. He's strong, and very skilled with the bow. The other hunters like him; he's a real man. And Eto's father died proudly, defending the tribe's stock of dried meat against hungry wolves two winters ago. But no! Not that! Eto is asking Dors to marry him. In public. Dors can refuse, but if she does so without a reason, she'll turn half the tribe against her, and she can't afford that. Eto is far too popular.

"Hey, Daneel! You want Dors? Challenge Eto! He's strong and good with the bow, but in unarmed combat, you can defeat him, I know it," whispers Hari, one of Daneel's few friends.

Daneel starts thinking faster than he ever did. "OK, I can challenge Eto to unarmed combat. If I lose, I'll be wounded; Eto won't be gentle with me. But he won't kill or cripple me - that would make half the tribe hate him. If I lose, it will confirm that I'm physically weak, but I'll also win prestige for daring to defy the strong Eto, so it shouldn't change much. And if I win, Dors will be able to refuse Eto, since he lost a fight against someone weaker than him - that's a huge win. So I should take that gamble... but then, there is the fruit. If the fruit makes me sick, on top of my wounds from Eto, I may die. Even if I win! And if I lose, get beaten, and then get sick... they'll probably let me die. They won't take care of a fatherless lad who loses a fight and then gets sick. Too weak to be worth it. So... should I take the gamble? If only Eto had waited one more day... Or **if only I knew whether I'll get sick or not...**"

## The key : information loss

Until Clippy knows? If only Daneel knew? That's the key to risk aversion, and the reason why even a perfect utility maximizer, with a utility function that is concave in at least some respects, should still show some risk aversion: risk comes with information loss. That's the difference between the timeless double-bet and the one with a day of delay for Clippy. And it's the problem Daneel got stuck in.

If you take a bet, then until you know its outcome, you have less information about the state of the world - and especially about the part of it that directly concerns you - than if you had chosen the safe option (the one with lower variance). Having less information means you're less free to optimize.

Even a perfect utility maximizer can't know what bets he'll be offered and what decisions he'll have to make, unless he's omniscient (and then he wouldn't take bets or risks at all, for he would know the future - probability only reflects lack of information). So he has to count the information loss as part of the cost of taking a bet.

In real life, the most common instance of this is the non-linearity of bad effects: you can lose 0.5L of blood without much in the way of side effects (drink lots of water, sleep well, and the next day you're fine - that's what happens when you donate blood), but if you lose 2L, you'll likely die. Or: if you lose some money, you'll be in trouble, but if you lose the same amount again, you may end up kicked out of your house because you can't pay the rent - and that is more than twice as bad as the initial loss.
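Any concave utility function captures this "the second loss hurts more" effect. A quick illustration, reusing u = sqrt and made-up numbers:

```python
from math import sqrt

u = sqrt  # a concave utility function over money (or blood, or paperclips)

wealth = 100
first_loss = u(wealth) - u(wealth - 50)         # utility cost of losing 50 once
second_loss = u(wealth - 50) - u(wealth - 100)  # cost of losing the same 50 again

print(round(first_loss, 2), round(second_loss, 2))  # 2.93 7.07
```

The second loss costs more than twice as many utilons as the first, even though it is the same amount of money.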

So once you've taken a bet that risks a bad outcome, you can't afford to take another bet (even one with, in absolute terms, a higher expected gain) until you know whether you won or lost the first one - because losing both means death, or eviction from your house, or the ultimate pain of not having a single paperclip.

Taking a bet always has a cost: it costs you part of your ability to predict, and therefore to optimize.

## A possible solution

A possible solution to this problem would be to consider all the decisions you may have to make during the period when you don't yet know whether you won or lost your first bet, weight them by the probability of facing them, and evaluate their possible outcomes both if you take the first bet and if you don't. But how do you compute "their possible outcomes"? That requires considering all the bets you could be offered during the time needed to resolve your second bet, and *their* possible outcomes. So you need to... *stack overflow: maximum recursion depth exceeded.*

Since taking a bet affects your ability to evaluate possible outcomes in the future, you get a "strange loop to the meta-level", an infinite recursion: your decision algorithm has to consider the impact the decision will have on future instances of your decision algorithm.

I don't know whether there is a mathematical trick that makes this infinite recursion converge (as there is in some cases). But the problem looks really hard, and may not even be computable.

So just factoring in an average "risk aversion" that penalizes outcomes involving risk (with a higher penalty the longer you have to wait to learn whether you won or lost) sounds more like a way to cope with that problem than like a bias.