

One way to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability of getting it), and define U(x,p) = 2x if p = 1 and U(x,p) = x otherwise. I doubt that this is a useful model of reality, but mathematically, it would do the trick. My stated opinion is that this special case should be looked at in the light of more general strategies/heuristics applied over a variety of situations, and this approach would still fall short of that.

I know Settlers of Catan, and own it. It's been a while since I last played it, though.

Your point about games made me aware of a crucial difference between real life and games (or other abstract problems of chance): in the latter, the chances are always known exactly, because we set up the game (or problem) to have certain chances. In real life, we predict events either via causality (100% chance, no guesswork involved, unless things come into play we forgot to consider) or via experience and statistics, which involves guesswork and margins of error. If there's a prediction with a 100% chance, there is usually a causal relationship at the bottom of it. With a chance of less than 100%, there is no such causal chain; there must be some factor that can thwart the favorable outcome, and there is a chance that this factor has been assessed wrongly, or that other factors were overlooked entirely. Worst case, a 33/34 chance might actually be only 30/34 or less, and then I'd be worse off taking the gamble. Comparing a .33 with a .34 chance makes me think that there has to be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there's usually a sizeable chance that the underlying probabilities are equal or even reversed, so going for the higher reward makes sense.
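A quick sketch of the error-margin worry in numbers, using the payoffs from these comments; the "true" probability is the hypothetical worst case mentioned above:

```python
# A modest estimation error can reverse which offer has the higher
# expected value.
certain_value = 24000      # offer 1A: $24,000 for sure
payoff = 27000             # offer 1B pays $27,000 ...
stated_p = 33 / 34         # ... with this estimated probability

assert stated_p * payoff > certain_value  # at face value (~26206), the gamble wins

true_p = 30 / 34           # hypothetical worst case from the text
assert true_p * payoff < certain_value    # with the error (~23824), certainty wins
```

So a shift of three chances in 34 is already enough to make the "irrational" certain choice the better one.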

That's a neat trick; however, I am not sure I understand you correctly. You seem to be saying that risk-avoidance does not explain the 1A/2B preference, because your assignment captures risk-avoidance, and it doesn't lead to that preference. (It does lead to your reading of the term, but the resulting preference isn't 1A/2B.)

Your assignment looks like "diminishing utility", i.e. a utility function where the utility scales up subproportionally with money (e.g. twice the money has less than twice the utility). Do you think diminishing utility is equivalent to risk-avoidance? And if so, can you explain why?
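As a sketch of why diminishing utility by itself need not produce the 1A/2B pattern, here is log utility (a standard concave choice, picked here purely for illustration) applied to the same payoffs: it prefers the certain 1A, but it also prefers 2A, so its preferences stay consistent:

```python
import math

def u(x):
    # Concave: twice the money gives less than twice the utility.
    return math.log(x)

eu_1a = 1.0 * u(24000)       # certain $24,000
eu_1b = (33 / 34) * u(27000) # 33/34 chance of $27,000
eu_2a = 0.34 * u(24000)      # 34% chance of $24,000
eu_2b = 0.33 * u(27000)      # 33% chance of $27,000

# Diminishing utility is risk-averse, yet consistent: 1A *and* 2A.
assert eu_1a > eu_1b
assert eu_2a > eu_2b
```

So a concave utility function can explain turning down the first gamble, but not the combined 1A/2B preference.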

You seem to have examples in mind?

The utility function has as its input only the monetary reward in this particular instance. Your idea that risk-avoidance can have utility (or that 1% chances are useless) cannot be modelled with the set of equations given to analyse the situation, because the probability is not an input to the U() function - the model falls short because the utility attaches only to the money and nothing else. (Another example of a group of individuals for whom the risk might outweigh the reward in utility are gambling addicts.) Security is, all other things being equal, preferred over insecurity, and we could probably devise some experimental setup to translate this into a monetary utility equivalent (i.e. how much is the test subject prepared to pay for security and predictability? That, by the way, is the margin of insurance companies). :-P

I wanted to suggest that a real-life utility function ought to consider even more: not just the single case, but the strategies used in this case - do these strategies or heuristics have better utility in my life than trying to figure out the best possible action for each problem separately? In that case, an optimal strategy may well be suboptimal in some cases, but work well over a realistic lifetime filled with probable events, even if you don't contrive a \$24,000 life-or-death operation. (Should I spend two years of my life studying more statistics, or work on my father's farm? The farm might profit me more in the long run, even if I would miss out if somebody made me the 1A/1B offer - which is very unlikely, making the farm the rational strategy in the larger context, though it appears irrational in the smaller one.)

The problem as stated is hypothetical: there is next to no context, and it is assumed that the utility scales with the monetary reward. Once you confront real people with this offer, the context expands, and the analysis of the hypothetical situation falls short of being an adequate representation of reality - not necessarily through any fault of the real people.

Many real people use a strategy of "don't gamble with money you cannot afford to lose"; this is, overall, a pretty successful strategy (and if I were looking to make some money, my mark would be the person who likes to take risks - just keep making him better offers until he eventually loses, and if he doesn't, hit him over the head, take the now substantial amount of money, and run). To abandon this strategy just because in this one case it looks somewhat less profitable might not be effective in the long run. (In other contexts, people on this site talk about self-modification to handle anticipated situations such as one-boxing vs. two-boxing; can we consider this strategy such a self-modification?)

Another useful real-life strategy is "stay away from stuff you don't understand" - \$24,000 free and clear is easier to grasp than the other offer, so that strategy favors 1A as well, and it doesn't apply to 2A vs. 2B because they're equally hard to understand. The framing of offer two also suggests that the two options might be compared by multiplying probabilities and payoffs, while offer 1 has no such suggestion in branch 1A.

We're looking at a hypothetical situation, analysed for an ideal agent with no past and no future - I'm not surprised the real world is more complex than that.

Why would I not hold them responsible? They are the ones who are trying to make us responsible by giving us an opportunity to act, but their own opportunities to act are much more direct - after all, they created the situation that exerts the pressure on us. This line of thought is mainly meant to be argued in Fred's terms: he has a problem with feeling responsible for this suffering (or non-pleasure), and it offers him a way out of the conundrum without relinquishing his compassion for humanity (i.e. I feel the ending as written is illogical, and I certainly think "Michael" is acting very unprofessionally for a psychoanalyst). ["Relinquish the compassion" is also the conclusion you seem to have drawn, hence my response here.]

Of course, the alien strategy might not be directed at our sense of responsibility, but at some sort of game-theoretic utility function that proposes the greater good for the greater number - these utility functions are always somewhat arbitrary (most of them on LessWrong center around money, with no indication of why money should be valuable), and the arbitrariness in this case consists in including the alien simulations, but not the aliens themselves. If the aliens are "rational agents", then not rewarding their behaviour will make them stop it if it has a cost, while rewarding it will make them continue. (Haven't you ever wondered how many non-rational entities are trying to pose conundrums to rational agents on here? ;)

I don't have a theory of quantifiable responsibility, and I don't have a definite answer for you. Let's just say there is only a limited amount of stuff we can do in the time that we have, so we have to make choices about what to do with our lives. I hope that Fred comes to feel that he can accomplish more with his life than to indirectly die for a tortured simulation that serves alien interests.

> The central problem in all of these thought experiments is the crazy notion that we should give a shit about the welfare of other minds simply because they exist and experience things analogously to the way we experience things.

Well, I see the central problem in the notion that we should care about something that happens to other people even if we're not the ones doing it to them. Clearly, the aliens are sentient; they are morally responsible for what happens to these humans. While we certainly should pursue possible avenues to end the suffering, we shouldn't act as if we were the ones responsible for it.

I don't see how your points apply: I would have paid had I lost - unless my hypothetical self is so deep in debt that it can't reasonably spend \$100 on an investment such as this, in which case Omega would have known in advance, and would understand my nonpayment.

I do not consider the future existence of Omega as a factor at all, so it doesn't matter whether it self-destructs or not. And it is also a given that Omega is absolutely trustworthy (more than I could say for myself).

My view is that this may well be one of the undecidable propositions that Gödel showed must exist in any sufficiently complex formal system. The only way to make it decidable is to think outside the box, and in this case that means I consider that someone else is somehow still "me" (at least in ethical respects) - there are other threads on here that involve splitting myself and still remaining the same person somehow, so it's not intrinsically irrational or anything. My reference to Buddhism was merely meant to show that the concept is mainstream enough to be part of a major world religion; most other religions, and the UN charter of human rights, have it as well, though less pronounced, as "brotherhood" - not a factual, but an ethical identity.

The problem is easier to decide with a small change that also makes it more practical. Suppose two competing laboratories design a machine intelligence and bid for a government contract to produce it. The government will evaluate the prototypes and choose one of them for mass-production (the "winner", getting multiplied); due to the R&D effort involved, the company that loses the bid will go into receivership, and the machine intelligence not chosen will be auctioned off, but never reproduced (the "loser").

The question is: should the developers anticipate mass-production? Should they instruct the machine intelligence to expect mass-production?

Assuming that after the evaluation process, both machine intelligences are turned off, to be turned on again after either mass-production or the auction has occurred, should the machine intelligence expect to be the original, or a copy?

The obvious answer: the developers will rationally both expect mass-production and teach their machines to expect it, because, of the machine intelligences that exist after this process, most will operate under the correct assumption, and only one will need to be told that its assumption was wrong. The machine ought to expect to be a "winner".
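The counting argument above can be sketched in a few lines; the production-run size `n` is a hypothetical number I picked for illustration:

```python
# If the winner is copied n times and the loser exists once, then of all
# machine intelligences running afterwards, n out of n + 1 are winners.
n = 1000                 # hypothetical number of mass-produced copies
minds_after = n + 1      # n copies of the winner + 1 auctioned-off loser
p_winner = n / minds_after

# Almost every mind that was taught to expect to be a winner is right.
assert p_winner > 0.999
```

The larger the production run, the closer this probability gets to 1, which is why "expect to be a winner" is the rational default to teach both prototypes.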