One of our most controversial posts ever was "Torture vs. Dust Specks". Though I can't seem to find the reference, one of the more interesting uses of this dilemma was by a professor whose student said "I'm a utilitarian consequentialist", and the professor said "No you're not" and told them about SPECKS vs. TORTURE, and then the student - to the professor's surprise - chose TORTURE. (Yay student!)

In the spirit of always making these things worse, let me offer a dilemma that might have been more likely to unconvince the student - at least, as a consequentialist, I find the inevitable conclusion much harder to swallow.

I'll start by briefly introducing Parfit's Repugnant Conclusion, sort of a little brother to the main dilemma. Parfit starts with a world full of a million happy people - people with plenty of resources apiece. Next, Parfit says, let's introduce one more person who leads a life barely worth living - but since their life *is *worth living, adding this person must be a good thing. Now we redistribute the world's resources, making it fairer, which is also a good thing. Then we introduce another person, and another, until finally we've gone to a billion people whose lives are barely at subsistence level. And since (Parfit says) it's obviously better to have a million happy people than a billion people at subsistence level, we've gone in a circle and revealed inconsistent preferences.

My own analysis of the Repugnant Conclusion is that its apparent force comes from equivocating between senses of *barely worth living*. In order to *voluntarily create* a new person, what we need is a life that is *worth celebrating* or *worth birthing*, one that contains more good than ill and more happiness than sorrow - otherwise we should reject the step where we choose to birth that person. Once someone is alive, on the other hand, we're obliged to take care of them in a way that we weren't obliged to create them in the first place - and they may choose not to commit suicide, even if their life contains more sorrow than happiness. If we would be saddened to *hear the news* that such a person existed, we shouldn't *kill* them, but we should *not* voluntarily create such a person in an otherwise happy world. So each time we *voluntarily* add another person to Parfit's world, we have a little celebration and say with honest joy "Whoopee!", not, "Damn, now it's too late to uncreate them."

And then the rest of the Repugnant Conclusion - that it's better to have a billion lives slightly worth celebrating, than a million lives very worth celebrating - is just "repugnant" because of standard scope insensitivity. The brain fails to multiply a billion small birth celebrations to end up with a larger total celebration of life than a million big celebrations. Alternatively, average utilitarians - I suspect I am one - may just reject the very first step, in which the average quality of life goes down.

But now we introduce the Repugnant Conclusion's big sister, the Lifespan Dilemma, which - at least in my own opinion - seems much worse.

To start with, suppose you have a 20% chance of dying in an hour, and an 80% chance of living for 10^{10,000,000,000} years -

Now I know what you're thinking, of course. You're thinking, "Well, 10^(10^10) years may *sound* like a long time, unimaginably vaster than the 10^10 years the universe has lasted so far, but it isn't much, really. I mean, most finite numbers are very much larger than that. The realms of math are infinite, the realms of novelty and knowledge are infinite, and Fun Theory argues that we'll never run out of fun. If I live for 10^{10,000,000,000} years and then die, then when I draw my last metaphorical breath - not that I'd still have anything like a human body after that amount of time, of course - I'll go out raging against the night, for a life so short compared to all the experiences I wish I could have had. You can't compare that to real immortality. As Greg Egan put it, immortality isn't living for a very long time and then dying. Immortality is just not dying, ever."

Well, I can't offer you *real* immortality - not in *this* dilemma, anyway. However, on behalf of my patron, Omega, who I believe is sometimes also known as Nyarlathotep, I'd like to make you a little offer.

If you pay me just one penny, I'll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years. That's 99.9999% of 80%, so I'm just shaving a tiny fraction 10^{-6} off your probability of survival, and in exchange, if you do survive, you'll survive - not ten times as long, my friend, but *ten to the power of* as long. And it goes without saying that you won't run out of memory (RAM) or other physical resources during that time. If you feel that the notion of "years" is ambiguous, let's just measure your lifespan in computing operations instead of years. Really there's not much of a difference when you're dealing with numbers like 10^(10^{10,000,000,000}).

My friend - can I call you friend? - let me take a few moments to dwell on what a wonderful bargain I'm offering you. Exponentiation is a rare thing in gambles. Usually, you put $1,000 at risk for a chance at making $1,500, or some multiplicative factor like that. But when you exponentiate, you pay linearly and buy whole factors of 10 - buy them in wholesale quantities, my friend! We're talking here about 10^{10,000,000,000} factors of 10! If you could use $1,000 to buy a 99.9999% chance of making $10,000 - gaining a single factor of ten - why, that would be the greatest investment bargain in history, too good to be true, but the deal that Omega is offering you is far beyond that! If you started with $1, it takes a mere *eight* factors of ten to increase your wealth to $100,000,000. Three more factors of ten and you'd be the wealthiest person on Earth. Five more factors of ten beyond that and you'd own the Earth outright. How old is the universe? Ten factors-of-ten years. Just ten! How many quarks in the whole visible universe? Around eighty factors of ten, as far as anyone knows. And we're offering you here - why, not even ten billion factors of ten. Ten billion factors of ten is just what you started with! No, this is *ten to the ten billionth power* factors of ten.
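The "factors of ten" bookkeeping above is just base-10 logarithms, which makes the round figures easy to check - a quick sketch in Python, using the approximate numbers the text itself cites:

```python
import math

# A "factor of ten" is one unit of log10.
print(math.log10(100_000_000))  # 8.0 -- $1 to $100,000,000 is eight factors of ten
print(math.log10(1.4e10))       # ~10.1 -- the universe's age in years: about ten factors of ten
print(math.log10(1e80))         # ~80 -- rough particle count of the visible universe
# Meanwhile one deal shaves only a 10**-6 slice off your survival probability:
print(0.80 * (1 - 1e-6))        # the advertised 79.99992% (up to float rounding)
```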

Now, you may say that your utility isn't linear in lifespan, just like it isn't linear in money. But even if your utility is *logarithmic* in lifespan - a pessimistic assumption, surely; doesn't money decrease in value faster than life? - why, just the *logarithm* goes from 10,000,000,000 to 10^{10,000,000,000}.

From a fun-theoretic standpoint, exponentiating seems like something that really should let you have Significantly More Fun. If you can afford to simulate a mind a quadrillion bits large, then you merely need 2^(1,000,000,000,000,000) times as much computing power - a quadrillion factors of 2 - to simulate *all possible* minds with a quadrillion binary degrees of freedom so defined. Exponentiation lets you *completely* explore the *whole space *of which you were previously a single point - and that's just if you use it for brute force. So going from a lifespan of 10^(10^10) to 10^(10^(10^10)) seems like it ought to be a significant improvement, from a fun-theoretic standpoint.

And Omega is offering you this special deal, not for a dollar, not for a dime, but one penny! That's right! Act now! Pay a penny and go from a 20% probability of dying in an hour and an 80% probability of living 10^{10,000,000,000} years, to a 20.00008% probability of dying in an hour and a 79.99992% probability of living 10^(10^{10,000,000,000}) years! That's far more *factors of ten* in your lifespan than the number of quarks in the visible universe raised to the millionth power!

Is that a penny, friend? - thank you, thank you. But wait! There's another special offer, and you won't even have to pay a penny for this one - this one is *free!* That's right, I'm offering to exponentiate your lifespan *again,* to 10^(10^(10^{10,000,000,000})) years! Now, I'll have to multiply your probability of survival by 99.9999% again, but really, what's that compared to the nigh-incomprehensible increase in your expected lifespan?

Is that an avaricious light I see in your eyes? Then go for it! Take the deal! It's free!

*(Some time later.)*

My friend, I really don't understand your grumbles. At every step of the way, you seemed eager to take the deal. It's hardly my fault that you've ended up with... let's see... a probability of 1/10^{1000} of living 10^^(2,302,360,800) years, and otherwise dying in an hour. Oh, the ^^? That's just a compact way of expressing tetration, or repeated exponentiation - it's really supposed to be Knuth up-arrows, ↑↑, but I prefer to just write ^^. So 10^^(2,302,360,800) means 10^(10^(10^...^10)) where the exponential tower of tens is 2,302,360,800 layers high.
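Omega's arithmetic here checks out, for what it's worth - a quick sketch, assuming each deal in the garden path multiplies survival probability by exactly 99.9999% and adds one exponentiation (one layer to the tower):

```python
import math

# How many 99.9999% deals does it take to drive an 80% survival
# probability down to 1/10^1000?  Solve 0.8 * 0.999999**n = 10**-1000.
n = (1000 * math.log(10) + math.log(0.8)) / -math.log(1 - 1e-6)
print(round(n))  # about 2.3 billion deals -- one tower layer each,
                 # matching the story's 10^^(2,302,360,800)
```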

But, tell you what - these deals *are* intended to be permanent, you know, but if you pay me another penny, I'll trade you your current gamble for an 80% probability of living 10^{10,000,000,000} years.

Why, thanks! I'm glad you've given me your two cents on the subject.

Hey, don't make that face! You've learned something about your own preferences, and that's the most valuable sort of information there is!

Anyway, I've just received telepathic word from Omega that I'm to offer you another bargain - hey! Don't run away until you've at least heard me out!

Okay, I know you're feeling sore. How's this to make up for it? Right now you've got an 80% probability of living 10^{10,000,000,000} years. But right now - for free - I'll replace that with an 80% probability (that's right, 80%) of living 10^^10 years, that's 10^10^10^10^10^10^10^10^{10,000,000,000} years.

See? I thought that'd wipe the frown from your face.

So right now you've got an 80% probability of living 10^^10 years. But if you give me a penny, I'll *tetrate* that sucker! That's right - your lifespan will go to 10^^(10^^10) years! That's an exponential tower (10^^10) tens high! You could write that as 10^^^3, by the way, if you're interested. Oh, and I'm afraid I'll have to multiply your survival probability by 99.99999999%.
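For readers unfamiliar with the notation, tetration is easy to define in code, though only toy arguments are computable - a number like 10^^10 is far beyond any physical computer. A minimal sketch:

```python
def tetrate(base, height):
    """base^^height: an exponential tower of `height` copies of base."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

print(tetrate(2, 3))   # 2^(2^2) = 16
print(tetrate(2, 4))   # 2^(2^(2^2)) = 65536
print(tetrate(10, 2))  # 10^10
```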

*What?* What do you mean, *no?* The benefit here is vastly larger than the mere 10^^(2,302,360,800) years you bought previously, and you merely have to send your probability to 79.999999992% instead of 10^{-1000} to purchase it! Well, that and the penny, of course. If you turn down *this* offer, what does it say about that whole road you went down before? Think of how silly you'd look in retrospect! Come now, pettiness aside, this is the real world, wouldn't you rather have a 79.999999992% probability of living 10^^(10^^10) years than an 80% probability of living 10^^10 years? Those arrows suppress a lot of detail, as the saying goes! If you can't have Significantly More Fun with tetration, how can you possibly hope to have fun at all?

Hm? Why yes, that's right, I *am* going to offer to tetrate the lifespan and fraction the probability yet again... I was thinking of taking you down to a survival probability of 1/(10^^^20), or something like that... oh, don't make that face at me, if you want to refuse the whole garden path you've got to refuse some particular step along the way.

Wait! Come back! I have even faster-growing functions to show you! And I'll take even smaller slices off the probability each time! Come back!

...ahem.

While I feel that the Repugnant Conclusion has an obvious answer, and that SPECKS vs. TORTURE has an obvious answer, the Lifespan Dilemma actually confuses me - the more I demand answers of my mind, the stranger my intuitive responses get. How are yours?

Based on an argument by Wei Dai. Dai proposed a *reductio* of unbounded utility functions by (correctly) pointing out that an unbounded utility on lifespan implies willingness to trade an 80% probability of living some large number of years for a 1/(3^^^3) probability of living some *sufficiently longer* lifespan. I looked at this and realized that there existed an obvious garden path, which meant that denying the conclusion would create a preference reversal. Note also the relation to the St. Petersburg Paradox, although the Lifespan Dilemma requires only a finite number of steps to get us in trouble.

I think that the answer to this conundrum is to be found in Joshua Greene's dissertation. On page 202 he says:

"The mistake philosophers tend to make is in accepting rationalism proper, the view that our moral intuitions (assumed to be roughly correct) must be ultimately justified by some sort of rational theory that we've yet to discover ... a piece of moral theory with justificatory force and not a piece of psychological description concerning patterns in people's emotional responses."

When Eliezer presents himself with this dilemma, the neural/hormonal processes in his mind that govern reward and decisionmaking fire "Yes!" on each of a series of decisions that end up, in aggregate, losing him $0.02 for no gain.

Perhaps this is surprising because he implicitly models his "moral intuition" as sampling true statements from some formal theory of Eliezer morality, which he must then reconstruct axiomatically.

But the neural/hormonal decisionmaking/reward processes in the mind are just little bits of biology that squirt hormones around and give us happy or sad feelings according to their own perfectly lawful operation. It is just that if you interpret those...

If you are not Roko, you should change your username to avoid confusion.

I wonder if the reason you think your answers are obvious is that you learned about scope insensitivity, saw the obvious stupidity of that, and then jumped to the opposite conclusion: that life must be valued without any discounting whatsoever.

But perhaps there is a happy middle ground between the crazy kind of moral discounting that humans naively do, and no discounting. And even if that's not the case, if the right answer really does lie on the extreme of the space of possibilities instead of the much larger interior, I don't see how that conclusion could be truly obvious.

In general, your sense of obviousness might be turned up a bit too high. As evidence of that, there were times when I apparently convinced you of the "obvious" correctness or importance of some idea before I'd convinced myself.

--

Sylvie and Bruno, Lewis Carroll

OMEGA: If you pay me just one penny, I'll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years.

HUMAN: That sounds like an awful lot of time. Would you mind writing it out as a decimal number?

OMEGA: Here it is... Of course, don't expect to finish reading this number in less than 10^9999999990 years.

HUMAN: Never mind... It's such a mind-boggling amount of time. What if I were to get bored or otherwise distressed, and lose my lust for life? Am I allowed to kill myself?

OMEGA: Not really. If I allowed that, then even assuming the probability of killing yourself were only 0.000000001 per 10^10 years, it would be almost certain that you kill yourself before the end of 10^(10^(10^10)) years.

HUMAN: This sounds depressing. So my decision has the potential to confine me to grillions of years of suffering, if I were to lose my lust for life.

OMEGA: OK, I see your point. I'll also offer you some additional drugs to make you happy whenever you're in distress, and I promise to modify your brain so that you will never even wish to kill yourself during these few eons.

HUMAN: Sounds great, but I also enjoy your company very much. Can I hope for you to entertain me from...

"Drug" was just a catchy phrase for Omega's guarantee to cure you of any psychological issues that could cause you prolonged distress.

You could insist that it is entirely impossible that you'd need it.

Wouldn't it be a bit overconfident to make any statements about what is possible for some insanely complex and alien future self of yours, over a period of time measured by a number (in years) that takes billions to the power of billions of your current lifetimes just to read?

Omega seems to run into a very fundamental credibility problem:

Let us assume the premise of the OP, that lifetime can be equated with discrete computational operations. Furthermore, also assume that the universe (space-time/multiverse/whatever) can be modeled as the result of a computation of n operations (let us say for simplicity n=10^100; we could also assume 10^^100 or any finite number, we would just need a few more iterations of offers).

... after some accepted offers ... :

OMEGA: ... I'll replace your p% chance of living for 10^n years, with a 0.9999999p% chance of living 10^(10^n) years...

AGENT: Sounds nice, but I already know what I would do first with 10^n years.

OMEGA: ???

AGENT: I will simulate my previous universe up to the current point - including this conversation.

OMEGA: What for?

AGENT: Maybe I am the nostalgic type. But even if I weren't: given so much computational resource, the probability that I would not end up doing it even by accident would be quite negligible.

OMEGA: Yes, but you could do even more simulations if you took my next offer.

AGENT: Good point, but how can I tell that this conversation is not already taking place in that simulation? Whatever you would t...

Doesn't many-worlds solve this neatly? Thinking of it as 99.9999999% of the mes sacrificing ourselves so that the other 0.0000001% can live a ridiculously long time makes sense to me. The problem comes when you favor this-you over all the other instances of yourself.

Or maybe there's a reason I stay away from this kind of thing.

There's an easier solution to the posed problem if you assume MWI. (Has anyone else suggested this solution? It seems too obvious to me.)

Suppose you are offered & accept a deal where 99 out of 100 yous die, and the survivor gets 1000x his lifetime's worth of computational resources. All the survivor has to do is agree to simulate the 99 losers (and obviously run himself) for a cost of 100 units, yielding a net profit of 900 units.

(Substitute units as necessary for each ever more extreme deal Omega offers.)

No version of yourself loses - each lives - and one gains enormously. So isn't accepting Omega's offers, as long as each one is a net profit as described, a Pareto-improving situation? Knowing this is true at each step, why would one then act like Eliezer and pay a penny to welsh on the entire thing?
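The accounting in this argument generalizes beyond the 100-copy example; a minimal sketch (the function name is my own, and the units are one lifetime's worth of computation):

```python
def survivor_net_gain(copies, resource_multiplier, cost_per_simulation=1):
    """Resources left after the lone survivor simulates every copy,
    itself included, out of its winnings."""
    return resource_multiplier - copies * cost_per_simulation

# 99 of 100 yous "die", the survivor gets 1000 lifetimes of compute,
# spends 100 of them re-running everyone, and keeps the rest:
print(survivor_net_gain(100, 1000))  # 900
```

Accepting is Pareto-improving in these units exactly when the multiplier exceeds the number of copies to be re-run.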

Based on my understanding of physics, I have no way to discriminate between a 1/10 chance of 10 simulations and a certainty of one simulation (what do I care whether the simulations are in the same Everett branch or not?). I don't think I would want to anyway; they seem identical to me morally.

Moreover, living 10x as long seems strictly better than having 10x as many simulations. Minimally, I can just forget everything periodically and I am left with 10 simulations running in different times rather than different places.

The conclusion of the garden path seems perfectly reasonable to me.

I would refuse the next step in the garden somewhere between reaching a 75% to 80% chance of not dying in an hour. Going from a 1/5 chance to a 1/4 chance of soon dying is huge in my mind. I'd likely stop at around 79%.

Can someone point me to a discussion as to why bounded utility functions are bad?

Your axiology is arbitrary. Everyone has arbitrary preferences, and arbitrary principles that generate preferences. You are arbitrary - you can either live with that or self-modify into something much less arbitrary like a fitness maximizer, and lose your humanity.

I wonder if this might be repairable by patching the utility function? Suppose you say "my utility function in years of lifespan is logarithmic in this region, then log(log(n)) in this region, then log(log(log(n)))..." and so on. Perhaps this isn't very bright, in some sense; but it might reflect the way human minds actually deal with big numbers and let you avoid the paradox. (Edit) More generally, you might say "My utility function is the inverse of whatever function you use to make big numbers." If Omega starts chatting about the Busy Bea...

At least part of the problem might be that you believe now, with fairly high confidence - as an infinite set atheist or for other reasons - that there's a finite amount of fun available, but you don't have any idea what the distribution is. If that's the case, then a behavior pattern that always tries to get more life as a path to more fun eventually ends up always giving away life while not getting any more potential fun.

Another possibility is that you care somewhat about the fraction of all the fun you experience, not just about the total amount. If utilities are relative this might be inevitable, though this has serious problems too.

I think I've got a fix for your lifespan-gamble dilemma. Omega (who is absolutely trustworthy) is offering indefinite, unbounded life extension, which means the universe will continue being capable of supporting sufficiently-human life indefinitely. So, the value of additional lifespan is not the lifespan itself, but the chance that during that time I will have the opportunity to create at least one sufficiently-similar copy of myself, which then exceeds the gambled-for lifespan. It's more of a calculus problem than a statistics problem, and involves a lot...

It's worth noting that this approach has problems of its own. The Stanford Encyclopedia of Philosophy on the Repugnant Conclusion:

...

These thought experiments all seem to require vastly more resources than the physical universe contains. Does that mean they don't matter?

As with Torture vs. Specks, the point of this is to expose your decision procedure in a context where you don't have to compare remotely commensurable utilities. Learning about the behavior of your preferences at such an extreme can help illuminate the right thing to do in more plausible contexts. (Thinking through Torture vs. Dust Specks helped mold my thinking on public policy, where it's very tempting to weigh the *salience* of a large benefit to a few people against a small cost to everyone.)

EDIT: It's the same heuristic that mathematicians often use when we're pondering a conjecture - we try it in extreme or limiting cases to see if it breaks.

bwahaha. Though my initial thought is "take the deal. This seems actually easier than choosing TORTURE. If you can actually offer up those possibilities at those probabilities, well... yeah."

Unless there's some fun-theoretic stuff that suggests that when one starts getting to the really big numbers, fun space seriously shrinks - to the point that, even if it's not bounded, it grows way way way way way slower than logarithmically... And even then, just offering a better deal would be enough to overcome that.

Again, I'm not certain, but my initial thought is...

I've read too many articles here, I saw where you were going before I finished this sentence...

I still don't buy the 3^^^3 dust specks dilemma; I think it's because a dust speck in the eye doesn't actually register on the "bad" scale for me. Why not switch it out for 3^^^3 people getting hangnails?

I think I've come up with a way to test the theory that the Repugnant Conclusion is repugnant because of scope insensitivity.

First we take the moral principle the RC is derived from, the Impersonal Total Principle (ITP), which states that all that matters is the total amount of utility* in the world; factors like how it is...

I have a different but related dilemma.

Omega presents you with the following two choices:

1) You will live for at least 100 years from now, in your 20-year-old body, in perfect physical condition etc., and you may live on afterwards as long as you can manage.

2) You will definitely die in this universe within 10 years, but you get a box with 10^^^10 bytes of memory/instruction capacity. The computer can be programmed in any programming language you'd like (also with libraries to deal with huge numbers, etc.). Although the computer has a limit on the number of operatio...

The flaws in both of these dilemmas seems rather obvious to me, but maybe I'm overlooking something.

The Repugnant Conclusion

First of all, I balk at the idea that adding something barely tolerable to a collection of much more wonderful examples is a net gain. If you had a bowl of cherries (and life has been said to be a bowl of cherries, so this seems appropriate) that were absolutely the most wonderful, fresh cherries you had ever tasted, and someone offered to add a recently-thawed frozen non-organic cherry which had been sitting in the back of the fridge...

I didn't vote your comment down, but I can guess why someone else did. Contradicting the premises is a common failure mode for humans attacking difficult problems. In some cases it is necessary (for example, if the premises are somehow self-contradictory), but even so people fail into that conclusion more often than they should.

Consider someone answering the Fox-Goose-Grain puzzle with "I would swim across" or "I would look for a second boat".

http://en.wikipedia.org/wiki/Fox,_goose_and_bag_of_beans_puzzle

It's a convention about Omega that Omega's reliability is altogether beyond reproach. This is, of course, completely implausible, but it serves as a useful device to make sure that the only issues at hand are the offers Omega makes, not whether they can be expected to pan out.

Does that actually mean anything? Is there any number you can say this about where it's both true and worth saying?

It's true of any number, which is why it's funny.

Imagine *A* is the set of all positive integers and *B* is the set of all positive even integers. You would say *B* is smaller than *A*. Now multiply every number in *A* by two. Did you just make *A* become smaller without removing any elements from it?

I read the fun theory sequence, although I don't remember it well. Perhaps someone can re-explain it to me or point me to the right paragraph.

How did you prove that we'll never run out of fun? You did prove that we'll never run out of challenges of the mathematical variety. But challenges are not the same as fun. Challenges are fun when they lead to higher quality of life. I must be wrong, but as far as I understood, you suggested that it would be fun to invent algorithms that sort number arrays faster or prove theorems that have no applications other than...

The problem goes away if you allow a finite present value for immortality. In other words, there should be a probability level P(T) s.t. I am indifferent between living T periods with probability 1, and living infinitely with probability P(T). If immortality is infinitely valued, then you run into all sorts of ugly reductio ad absurdum arguments along the lines of the one outlined in your post.

In economics, we often represent expected utility as a discounted stream of future flow utilities. i.e.

V = Sum (B^t)(U_t)

In order for V to converge, we need B to b...
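The convergence condition the comment is heading toward - a discount factor B strictly between 0 and 1, with bounded flow utilities - is just the geometric series; a numerical sketch (B = 0.9 is an arbitrary choice):

```python
B = 0.9  # discount factor; convergence requires 0 < B < 1
# With flow utility U_t = 1 every period, V = sum of B^t approaches 1/(1-B):
V = sum(B ** t for t in range(200))
print(V)  # very close to 1 / (1 - 0.9) = 10
```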

OMEGA: Wait! Come back! I have even faster-growing functions to show you! And I'll take even smaller slices off the probability each time! Come back!

HUMAN: Ahem... If you answer some questions first...

OMEGA: Let's try

HUMAN: Is it really true that my experience of this universe can be described by a von Neumann machine with some fixed program of n bytes + m bytes of RAM, for some finite values of n and m?

OMEGA: Uh-huh. (If Omega answers no, then he becomes inconsistent with his previous statement that lifetime and computational resources are equivale...

I have moral uncertainty, and am not sure how to act under moral uncertainty. But I put a high credence that I would take the EV of year-equivalent (the concept of actual-years probably breaks down when you're a Jupiter Brain). I also put some credence that finite lives are valueless.

Eliezer said:

that seems like quite a big sacrifice to make in order to resolve Parfit's repugnant conclusion; you have abandoned consequentialism in a really big way.

You can get off parfit's conclusion by just rejecting aggregative consequentialism.

This situation gnaws at my intuitions somewhat less than the dust specks.

You're offering me the ability to emulate every possible universe of the complexity of the one we observe, and then some. You're offering to make me a God. I'm listening. What are these faster-growing functions you've got and what are your terms?

I couldn't resist posting a rebuttal to Torture vs. Dust Specks. Short version: the two types of suffering are not scalars that have the same units, so comparing them is not merely a math problem.

Uhm - obvious answer: "Thank you very much for the hint that living forever is indeed a possibility permitted by the fundamental laws of the universe. I think I can figure it out before the lifespan my current odds give me are up, and if I can't, I bloody well deserve to die. Now kindly leave, I really do not intend to spend what could be the last hour of my life haggling."

Mostly, this paradox sets off mental alarm bells that someone is trying to sell us a bill of goods. A lot of the paradoxes that provoke paradoxical responses have this quality.

I would definitely take the first of these deals, and would probably swallow the bullet and continue down the whole garden path. I would be interested to know if Eliezer's thinking has changed on this matter since September 2009.

However, if I were building an AI which may be offered this bet for the whole human species, I would want it to use the Kelly criterion and decline, under the premise that if humans survive the next hour, there may well be bets later that could increase lifespan further. However, if the human species goes extinct at any point, the...

Summary of this retracted post:

Omega isn't offering an extended lifespan; it's offering an 80% chance of guaranteed death plus a 20% chance of guaranteed death. Before this offer was made, actual immortality was on the table, with maybe a one-in-a-million chance.

I have a hunch that 'altruism' is the mysterious key to this puzzle.

I don't have an elegant fix for this, but I came up with a kludgy decision procedure that would not have the issue.

Problem: you don't want to give up a decent chance of something good, for something even better that's really unlikely to happen, no matter how much better that thing is.

Solution: when evaluating the utility of a probabilistic combination of outcomes, instead of taking the average of all of them, remove the top 5% (this is a somewhat arbitrary choice) and find the average utility of the remaining outcomes.

For example, assume utility is proport...
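A minimal sketch of that procedure, under my own reading of it - outcomes as (probability, utility) pairs, with the top 5% of probability mass (ranked by utility) discarded before averaging:

```python
def trimmed_expected_utility(outcomes, trim=0.05):
    """Expected utility after discarding the top `trim` probability mass."""
    remaining = []
    to_remove = trim
    # Walk outcomes from best to worst, shaving mass off the top.
    for p, u in sorted(outcomes, key=lambda pu: pu[1], reverse=True):
        if to_remove >= p:
            to_remove -= p                  # trimmed away entirely
        else:
            remaining.append((p - to_remove, u))
            to_remove = 0.0
    total = sum(p for p, _ in remaining)
    return sum(p * u for p, u in remaining) / total

# A huge prize at small probability now counts for far less than its raw EV:
lottery = [(0.9, 1.0), (0.1, 1000.0)]
print(trimmed_expected_utility(lottery))  # ~53.6, versus a plain EV of 100.9
```

This makes the procedure insensitive to arbitrarily large prizes carried by less than 5% of the probability mass, which is exactly what blocks Omega's garden path.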

I could choose the arbitrary cut-off like 75%, which still buys me 10^^645385211 years of life (in practice, I would be more likely to go with 60%, but that's really a personal preference). Of course, I lose out on the tetration and faster-growth options from refusing that deal, but Omega never mentioned those and I have no particular reason to expect them.

Of course, this starts to get into Pascal's mugging territory, because it really comes down to how much you trust Omega, or the person representing Omega. I can't think of any real-life observations I co...

Each offer is one I want to accept, but I eventually have to turn one down in order to gain from it. Let's say I don't trust my mind to truly be so arbitrary though, so I use actual arbitrariness. Each offer multiplies my likelihood of survival by 99.9999%, so let's say each time I give myself a 99.999% chance of accepting the next offer (the use of 1 less 9 is deliberate). I'm likely to accept a lot of offers without significant decrease in my likelihood of survival. But wait, I just did better by letting probability choose than if I myself had chosen. Pe...
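That scheme's expected behavior can be worked out directly, without simulation (assuming, as in the comment, each offer is independently accepted with probability 99.999% and each acceptance multiplies survival by 99.9999%):

```python
# Offers accepted before the first refusal follow a geometric distribution,
# so the expected run length is 1 / (1 - 0.99999) = 100,000 offers.
expected_accepts = 1 / (1 - 0.99999)
# Survival probability after that many accepted deals:
survival = 0.8 * (1 - 1e-6) ** round(expected_accepts)
print(round(expected_accepts))  # 100000
print(survival)  # about 0.72 -- most of the odds kept, with a vastly taller tower
```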

What am I doing wrong? I think one obviously should be happy with 1<(insert ridiculous amount of zeros)> years for a 1 - 1:10^1000 chance of dying within an hour, in a simplistic way of thinking. I could take into account things like "What's going to happen to the rest of all sentient beings", "what's up with humanity after that", and even more importantly: if this offer were to be available for every sentient being, I should assign huge negative utility to the chance of all life being terminated due to a ridiculously low chance of a...

... is, I am gratified to see, the same as mine.

When TORTURE v DUST SPECKS was discussed before, some people made suggestions along the following lines: perhaps when you do something to N people, the resulting utility change only increases as fast as (something like) the length of the smallest program it takes to output a number as big as N. (No one put it quite like that, which is perhaps just as well since I'm not sure it can be made to make sense. But, e.g., Tom McCabe proposed that if you inflict a dust speck on 3^^...

Since we're talking about expected utility, I'd rather you answered this old question of mine...

The problem is that using a bounded utility function to prevent this sort of thing will lead to setting arbitrary bounds to how small you want the probability of continuing to live to go, or arbitrary bounds on how long you want to live, just like the arbitrary bounds that people tried to set up in the Dust Specks and Torture case.

On the other hand, an unbounded utility function, as I have said many times previously, leads to accepting a 1/(3^^^3) probability of some good result, as long as it is good enough, and so it results in accepting the Mugging and the Wager and so on.

The original thread is here. A Google search for "wei_dai lesswrong lifetime" found it.

ETA: The solution I proposed is down in the thread here.

If I wanted to be depressing, I'd say that, right now, my utility is roughly constant with respect to future lifespan...

Does the paradox go away if we set U(death) = -∞ utilons (making any increase in the chance of dying in the next hour impossible to overcome)? Does that introduce worse problems?

Perhaps the problem here is that you're assuming that utility(probability, outcome) is the same as probability*utility(outcome). If you don't assume this, and calculate as if the utility of extra life decreased with the chance of getting it, the problem goes away, since no amount of life will drive the probability down below a certain point. This matches intuition better, for me at least.

EDIT: What's with the downvotes?

In circumstances where the law of large numbers doesn't apply, the utility of a probability of an outcome cannot be calculated from jus...