Exterminating life is rational

Followup to: This Failing Earth; Our society lacks good self-preservation mechanisms; Is short term planning in humans due to a short life or due to bias?

I don't mean that deciding to exterminate life is rational.  But if, as a society of rational agents, we each maximize our expected utility, this may inevitably lead to our exterminating life, or at least intelligent life.

Ed Regis reports on p. 216 of “Great Mambo Chicken and the Transhuman Condition” (Penguin Books, London, 1992):

Edward Teller had thought about it, the chance that the atomic explosion would light up the surrounding air and that this conflagration would then propagate itself around the world. Some of the bomb makers had even calculated the numerical odds of this actually happening, coming up with the figure of three chances in a million they’d incinerate the Earth. Nevertheless, they went ahead and exploded the bomb.

Was this a bad decision?  Well, consider the expected value to the people involved.  Without the bomb, there was a much, much greater than 3/1,000,000 chance that either a) they would be killed in the war, or b) they would be ruled by Nazis or the Japanese.  The loss to them if they ignited the atmosphere would be another 30 or so years of life.  The loss to them if they lost the war and/or were killed by their enemies would also be another 30 or so years of life.  The loss in being conquered would also be large.  Easy decision, really.

Suppose that, once a century, some party in a conflict chooses to use some technique to help win the conflict that has a p = 3/1,000,000 chance of eliminating life as we know it.  Then our expected survival time is 100 years times the sum from n=1 to infinity of n p (1-p)^(n-1).  If I've done my math right, that's roughly 33 million years.
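For readers who want the closed form, the sum is just the mean of a geometric distribution with per-century success probability p:

\[
\sum_{n=1}^{\infty} n\,p\,(1-p)^{n-1} \;=\; \frac{1}{p} \;=\; \frac{1{,}000{,}000}{3} \;\approx\; 333{,}333 \text{ centuries},
\]

so the expected survival time is about 100 × 333,333 ≈ 33,300,000 years.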

This supposition seems reasonable to me.  There is a balance between offensive and defensive capability that shifts as technology develops.  If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed.  In the near future, biological weapons will be more able to wipe out life than we are able to defend against them.  We may then develop the ability to defend against biological attacks; we may then be safe until the next dangerous technology.

If you believe in accelerating change, then the number of important events in a given time interval increases exponentially, or, equivalently, the time intervals that should be considered equivalent opportunities for important events shorten exponentially.  The 33 million years remaining to life is then in subjective time, and must be mapped into realtime.  If we suppose the subjective/real time ratio doubles every 100 years, this gives life an expected survival time of about 2000 more realtime years.  If we instead use Ray Kurzweil's doubling time of about 2 years, this gives life about 40 remaining realtime years.  (I don't recommend Ray's figure.  I'm just giving it for those who do.)
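Here is a minimal sketch of the mapping I have in mind, assuming the subjective/real rate starts at 1 and grows as 2^(t/D) for a doubling period D; the function name and constants are mine, not anything standard:

import math

def realtime_years(subjective_years, doubling_period):
    # Solve for T in: integral from 0 to T of 2**(t/D) dt = subjective_years,
    # where D is the doubling period of the subjective/real time ratio.
    d = doubling_period
    return d * math.log2(1 + subjective_years * math.log(2) / d)

remaining = 1e8 / 3  # ~33 million subjective years, from the calculation above

print(realtime_years(remaining, 100))  # ~1,800 realtime years (roughly the 2000 above)
print(realtime_years(remaining, 2))    # ~47 realtime years (Kurzweil-style doubling)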

Please understand that I am not yet another "prophet" bemoaning the foolishness of humanity.  Just the opposite:  I'm saying this is not something we will outgrow.  If anything, becoming more rational only makes our doom more certain.  For the agents who must actually make these decisions, it would be irrational not to take these risks.  The fact that this level of risk-tolerance will inevitably lead to the snuffing out of all life does not make the expected utility of these risks negative for the agents involved.

I can think of only a few ways that rationality might not inevitably exterminate all life in the cosmologically (even geologically) near future:

  • We can outrun the danger:  We can spread life to other planets, and to other solar systems, and to other galaxies, faster than we can spread destruction.

  • Technology will not continue to develop, but will stabilize in a state in which all defensive technologies provide absolute, 100%, fail-safe protection against all offensive technologies.

  • People will stop having conflicts.

  • Rational agents incorporate the benefits to others into their utility functions.

  • Rational agents with long lifespans will protect the future for themselves.

  • Utility functions will change so that it is no longer rational for decision-makers to take tiny chances of destroying life for any amount of utility gains.

  • Independent agents will cease to exist, or to be free (the Singleton scenario).

Let's look at these one by one:

We can outrun the danger.

We will colonize other planets; but we may also figure out how to make the Sun go nova on demand.  We will colonize other star systems; but we may also figure out how to liberate much of the energy in the black hole at the center of our galaxy in a giant explosion that will move outward at near the speed of light.

One problem with this idea is that apocalypses are correlated; one may trigger another.  A disease may spread to another planet.  The choice to use a planet-busting bomb on one planet may lead to its retaliatory use on another planet.  It's not clear whether spreading out and increasing in population actually makes life more safe.  If you think in the other direction, a smaller human population (say ten million) stuck here on Earth would be safer from human-instigated disasters.

But neither of those is my final objection.  More important is that our compression of subjective time can be exponential, while our ability to escape from ever-broader swaths of destruction is limited by lightspeed.

Technology will stabilize in a safe state.

Maybe technology will stabilize, and we'll run out of things to discover.  If that were to happen, I would expect that conflicts would increase, because people would get bored.  As I mentioned in another thread, one good explanation for the incessant and counterproductive wars in the middle ages - a reason some of the actors themselves gave in their writings - is that the nobility were bored.  They did not have the concept of progress; they were just looking for something to give them purpose while waiting for Jesus to return.

But that's not my final rejection.  The big problem is that by "safe", I mean really, really safe.  We're talking about bringing existential threats to chances less than 1 in a million per century.  I don't know of any defensive technology that can guarantee a less than 1 in a million failure rate.

People will stop having conflicts.

That's a nice thought.  A lot of people - maybe the majority of people - believe that we are inevitably progressing along a path to less violence and greater peace.

They thought that just before World War I.  But that's not my final rejection.  Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts.  Those that avoid conflict will be out-competed by those that do not.

But that's not my final rejection either.  The bigger problem is that this isn't something that arises only in conflicts.  All we need are desires.  We're willing to tolerate risk to increase our utility.  For instance, we're willing to take some unknown, but clearly greater than one in a million chance, of the collapse of much of civilization due to climate warming.  In return for this risk, we can enjoy a better lifestyle now.

Also, we haven't burned all physics textbooks along with all physicists.  Yet I'm confident there is at least a one in a million chance that, in the next 100 years, some physicist will figure out a way to reduce the Earth to powder, if not to crack spacetime itself and undo the entire universe.  (In fact, I'd guess the chance is nearer to 1 in 10.)[1]  We take this existential risk in return for a continued flow of benefits such as better graphics in Halo 3 and smaller iPods.  And it's reasonable for us to do this, because an improvement in utility of 1% over an agent's lifespan is, to that agent, almost exactly balanced by a 1% chance of destroying the Universe.

The Wikipedia entry on Large Hadron Collider risk says, "In the book Our Final Century: Will the Human Race Survive the Twenty-first Century?, English cosmologist and astrophysicist Martin Rees calculated an upper limit of 1 in 50 million for the probability that the Large Hadron Collider will produce a global catastrophe or black hole."  The more authoritative "Review of the Safety of LHC Collisions" by the LHC Safety Assessment Group concluded that there was at most a 1 in 10^31 chance of destroying the Earth.

The LHC risk estimates are criminally low.  Their evidence was this: "Nature has already conducted the LHC experimental programme about one billion times via the collisions of cosmic rays with the Sun - and the Sun still exists."  There followed a couple of sentences of handwaving to the effect that if any other stars had turned to black holes due to collisions with cosmic rays, we would know it - apparently due to our flawless ability to detect black holes and ascertain what caused them - and therefore we can multiply this figure by the number of stars in the universe.

I believe there is much more than a one-in-a-billion chance that our understanding of one of the steps used in arriving at these figures is incorrect.  Based on my experience with peer-reviewed papers, there's at least a one-in-ten chance that there's a basic arithmetic error in their paper that no one has noticed yet.  My own estimate of the risk is more like one in a million, once you correct for the anthropic principle and for the chance that there is a mistake in the argument.  (That's based on a belief that priors for anything likely enough that smart people even thought of the possibility should be larger than one in a billion, unless they were specifically trying to think of examples of low-probability possibilities, such as all of the air molecules in the room moving to one side.)
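To make that correction explicit (the conditional-risk figure here is my illustrative assumption, not a number from the report): the probability that matters is a weighted average over the case where the safety argument is sound and the case where it is flawed,

\[
P(\text{doom}) = P(\text{sound})\,P(\text{doom}\mid\text{sound}) + P(\text{flawed})\,P(\text{doom}\mid\text{flawed}) \approx 1\cdot 10^{-31} + 0.1\cdot 10^{-5} = 10^{-6}.
\]

The 0.1 is the one-in-ten chance of an unnoticed error mentioned above; the 10^-5 is an assumed risk conditional on such an error.  Once the chance of a flaw in the argument is non-negligible, the 10^-31 term contributes essentially nothing.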

The Trinity test was done for the sake of winning World War II.  But the LHC was turned on for... well, no practical advantage that I've heard of yet.  It seems that we are willing to tolerate one-in-a-million chances of destroying the Earth for very little benefit.  And this is rational, since the LHC will probably improve our lives by more than one part in a million.

Rational agents incorporate the benefits to others into their utility functions.

"But," you say, "I wouldn't risk a 1% chance of destroying the universe for a 1% increase in my utility!"

Well... yes, you would, if you're a rational expectation maximizer.  It's possible that you would take a much higher risk, if your utility is at risk of going negative; it's not possible that you would not accept a 0.99% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility.  (This seems difficult, but is worth exploring.)  If you still think that you wouldn't, it's probably because you're thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience.  It doesn't.  It's a 1% increase in your utility.  If you factor the rest of your universe into your utility function, then it's already in there.
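For concreteness, here is the arithmetic behind that claim, assuming your current expected utility is U > 0 and the post-destruction state has utility 0: you accept the gamble whenever

\[
(1-p)\cdot 1.01\,U \;\ge\; U \iff p \;\le\; 1 - \frac{1}{1.01} \approx 0.0099,
\]

so an expected-utility maximizer takes any chance of destruction up to about 0.99% in exchange for a guaranteed 1% gain in utility.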

The US national debt should be enough to convince you that people act in their self-interest.  Even the most moral people - in fact, especially the "most moral" people - do not incorporate the benefits to others, especially future others, into their utility functions.  If we did that, we would engage in massive eugenics programs.  But eugenics is considered the greatest immorality.

But maybe they're just not as rational as you.  Maybe you really are a rational saint who considers your own pleasure no more important than the pleasure of everyone else on Earth.  Maybe you have never, ever bought anything for yourself that did not bring you as much benefit as the same amount of money would if spent to repair cleft palates or distribute vaccines or mosquito nets or water pumps in Africa.  Maybe it's really true that, if you met the girl of your dreams and she loved you, and you won the lottery, put out an album that went platinum, and got published in Science, all in the same week, it would make an imperceptible change in your utility versus if everyone you knew died, Bernie Madoff spent all your money, and you were unfairly convicted of murder and diagnosed with cancer.

It doesn't matter.  Because you would be adding up everyone else's utility, and everyone else is getting that 1% extra utility from the better graphics cards and the smaller iPods.

But that will stop you from risking atmospheric ignition to defeat the Nazis, right?  Because you'll incorporate them into your utility function?  Well, that is a subset of the claim "People will stop having conflicts."  See above.

And even if you somehow worked around all these arguments, evolution, again, thwarts you.[2]  Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents.  The claim that rational agents are not selfish implies that rational agents are unfit.

Rational agents with long lifespans will protect the future for themselves.

The most familiar idea here is that, if people expect to live for millions of years, they will be "wiser" and take fewer risks with that time.  But the flip side is that they also have more time to lose.  If they're deciding whether to risk igniting the atmosphere in order to lower the risk of being killed by Nazis, lifespan cancels out of the equation.

Also, if they live a million times longer than us, they're going to get a million times the benefit of those nicer iPods.  They may be less willing to take an existential risk for something that will benefit them only temporarily.  But benefits have a way of increasing, not decreasing, over time.  The discoveries of the law of gravity and of the invisible hand benefit us in the 21st century more than they did the people of the 17th and 18th centuries.

But that's not my final rejection.  More important is time-discounting.  Agents will time-discount, probably exponentially, due to uncertainty.  If you considered benefits to the future without exponential time-discounting, the benefits to others and to future generations would outweigh any benefits to yourself so much that in many cases you wouldn't even waste time trying to figure out what you wanted.  And, since future generations will be able to get more utility out of the same resources, we'd all be obliged to kill ourselves, unless we reasonably think that we are contributing to the development of that capability.
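A toy version of the comparison, assuming a constant per-period utility u and an exponential discount factor δ < 1:

\[
\sum_{t=0}^{\infty} \delta^{t} u = \frac{u}{1-\delta} \quad (\text{finite}), \qquad \sum_{t=0}^{\infty} u \to \infty \quad (\text{undiscounted}),
\]

so without discounting, the future term is unbounded and any finite benefit to yourself today is negligible beside it.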

Time discounting is always (so far) exponential, because non-asymptotic functions don't make sense.  I suppose you could use a trigonometric function instead for time discounting, but I don't think it would help.

Could a continued exponential population explosion outweigh exponential time-discounting?  Well, you can't have a continued exponential population explosion, because of the speed of light and the Planck constant.  (I leave the details as an exercise to the reader.)

Also, even if you had no time-discounting, I think that a rational agent must do identity-discounting.  You can't stay you forever.  If you change, the future you will be less like you, and weigh less strongly in your utility function.  Objections to this generally assume that it makes sense to trace your identity by following your physical body.  Physical bodies will not have a 1-1 correspondence with personalities for more than another century or two, so just forget that idea.  And if you don't change, well, what's the point of living?

Evolutionary arguments may help us with identity-discounting.  Evolutionary forces encourage agents to emphasize continuity or ancestry over resemblance in an agent's selfness function.  The major variable is reproduction rate over lifespan.  This applies to genes or memes.  But they can't help us with time-discounting.

I think there may be a way to make this one work.  I just haven't thought of it yet.

A benevolent singleton will save us all.

This case takes more analysis than I am willing to do right now.  My short answer is that I place a very low expected utility on singleton scenarios.  I would almost rather have the universe eat, drink, and be merry for 33 million years, and then die.

I'm not ready to place my faith in a singleton.  I want to work out what is wrong with the rest of this argument, and how we can survive without a singleton.

(Please don't conclude from my arguments that you should go out and create a singleton.  Creating a singleton is hard to undo.  It should be deferred nearly as long as possible.  Maybe we don't have 33 million years, but this essay doesn't give you any reason not to wait a few thousand years at least.)

In conclusion

I think that the figures I've given here are conservative.  I expect existential risk to be much greater than 3/1,000,000 per century.  I expect there will continue to be externalities that cause suboptimal behavior, so that the actual risk will be greater even than the already-sufficient risk that rational agents would choose.  I expect population and technology to continue to increase, and existential risk to be proportional to population times technology.  Existential risk will very possibly increase exponentially, on top of the subjective-time exponential.

Our greatest chance for survival is that there's some other possibility I haven't thought of yet.  Perhaps some of you will.

 

[1] If you argue that the laws of physics may turn out to make this impossible, you don't understand what "probability" means.

[2] Evolutionary dynamics, the speed of light, and the Planck constant are the three great enablers and preventers of possible futures; they let us make predictions farther into the future, and with greater confidence, than seems intuitively reasonable.

Comments

Here's a possible problem with my analysis:

Suppose Omega or one of its ilk says to you, "Here's a game we can play. I have an infinitely large deck of cards here. Half of them have a star on them, and one-tenth of them have a skull on them. Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."

How many cards do you draw?

I'm pretty sure that someone who believes in many worlds will keep drawing cards until they die. But even if you don't believe in many worlds, I think you do the same thing, unless you are not maximizing expected utility. (Unless chance is quantized so that there is a minimum possible probability. I don't think that would help much anyway.)
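The expected-utility arithmetic for a single draw, writing U for your current expected utility, counting death as 0, and assuming the remaining 40% of cards leave your utility unchanged:

\[
\mathbb{E}[U \mid \text{draw}] = 0.5\cdot 2U + 0.4\cdot U + 0.1\cdot 0 = 1.4\,U > U,
\]

and the same comparison holds after every win, so the maximizer keeps drawing even though the chance of eventually hitting a skull goes to 1.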

So this whole post may boil down to "maximizing expected utility" not actually being the right thing to do. Also see my earlier, equally unpopular post on why expectation maximization implies average utilitarianism. If you agree that average utilitarianism seems wrong, that's another piece of evidence that maximizing expected utility is wrong.

Reformulation to weed out uninteresting objections: Omega knows the expected utility according to your preference if you go on without its intervention, U1, and the utility if it kills you, U0 < U1.

My answer: even in a deterministic world, I take the lottery as many times as Omega has to offer, knowing that the probability of death tends to certainty as I go on. This example is only invalid for money because of diminishing returns. If you really do possess the ability to double utility, low probability of positive outcome gets squashed by high utility of that outcome.

Does my entire post boil down to this seeming paradox?

(Yes, I assume Omega can actually double utility.)

The use of U1 and U0 is needlessly confusing. And it changes the game, because now, U0 is a utility associated with a single draw, and the analysis of doing repeated draws will give different answers. There's also too much change in going from "you die" to "you get utility U0". There's some semantic trickiness there.

Pretty much. And I should mention at this point that experiments show that, contrary to instructions, subjects nearly always interpret utility as having diminishing marginal utility.

Well, that leaves me even less optimistic than before. As long as it's just me saying, "We have options A, B, and C, but I don't think any of them work," there are a thousand possible ways I could turn out to be wrong. But if it reduces to a math problem, and we can't figure out a way around that math problem, hope is harder.

There's an excellent paper by Peter de Blanc indicating that under reasonable assumptions, if your utility function is unbounded, then you can't compute finite expected utilities. So if Omega can double your utility an unlimited number of times, you have other problems that cripple you in the absence of involvement from Omega. Doubling your utility should be a mathematical impossibility at some point.

That demolishes "Shut up and Multiply", IMO.

SIAI apparently paid Peter to produce that. It should get more attention here.

So if Omega can double your utility an unlimited number of times

This was not assumed, I even explicitly said things like "I take the lottery as many times as Omega has to offer" and "If you really do possess the ability to double utility". To the extent doubling of utility is actually provided (and no more), we should take the lottery.

Also, if your utility function's scope is not limited to perception-sequences, Peter's result doesn't directly apply. If your utility function is linear in actual, rather than perceived, paperclips, Omega might be able to offer you the deal infinitely many times.

Also, if your utility function's scope is not limited to perception-sequences, Peter's result doesn't directly apply.

How can you act upon a utility function if you cannot evaluate it? The utility function needs inputs describing your situation. The only available inputs are your perceptions.

The utility function needs inputs describing your situation. The only available inputs are your perceptions.

Not so. There's also logical knowledge and logical decision-making where nothing ever changes and no new observations ever arrive, but the game still can be infinitely long, and contain all the essential parts, such as learning of new facts and determination of new decisions.

(This is of course not relevant to Peter's model, but if you want to look at the underlying questions, then these strange constructions apply.)

"Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."

Sorry if this question has already been answered (I've read the comments but probably didn't catch all of it), but...

I have a problem with "double your utility for the rest of your life". Are we talking about utilons per second? Or do you mean "double the utility of your life", or just "double your utility"? How does dying a couple of minutes later affect your utility? Do you get the entire (now doubled) utility for those few minutes? Do you get pro rata utility for those few minutes divided by your expected lifespan?

Related to this is the question of the utility penalty of dying.  If your utility function includes benefits for other people, then your best bet is to draw cards until you die, because the benefits to the rest of the universe will massively outweigh the cost of your own inevitable death.

If, on the other hand, death sets your utility to zero (presumably because your utility function is strictly only a function of your own experiences), then... yeah. If Omega really can double your utility every time you win, then I guess you keep drawing until you die. It's an absurd (but mathematically plausible) situation, so the absurd (but mathematically plausible) answer is correct. I guess.

Can utility go arbitrarily high? There are diminishing returns on almost every kind of good thing. I have difficulty imagining life with utility orders of magnitude higher than what we have now. Infinitely long youth might be worth a lot, but even that is only so many doublings due to discounting.

I'm curious why it's getting downvoted without reply. Related thread here. How high do you think "utility" can go?

I would guess you're being downvoted by someone who is frustrated not by you so much as by all the other people before you who keep bringing up diminishing returns even though the concept of "utility" was invented to get around that objection.

"Utility" is what you have after you've factored in diminishing returns.

We do have difficulty imagining orders of magnitude higher utility. That doesn't mean it's nonsensical. I think I have orders of magnitude higher utility than a microbe, and that the microbe can't understand that. One reason we develop mathematical models is that they let us work with things that we don't intuitively understand.

If you say "Utility can't go that high", you're also rejecting utility maximization. Just in a different way.

Nothing about utility maximization model says utility function is unbounded - the only mathematical assumptions for a well behaved utility function are U'(x) >= 0, U''(x) <= 0.

If the function is, let's say, U(x) = 1 - 1/(1+x), with U'(x) = (x+1)^-2, then it's a properly behaved utility function, yet it never even reaches 1.
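To spell out what that bound does to the doubling game:

\[
U(x) = 1 - \frac{1}{1+x} < 1 \;\text{ for all } x, \qquad 2\,U(x) > 1 \iff x > 1,
\]

so once x > 1 there is no outcome this agent values at twice its current utility, and Omega's doubling offer cannot even be stated for it.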

And utility maximization is just a model that breaks easily - it can be useful for humans to some limited extent, but we know humans break it all the time. Trying to imagine utilities orders of magnitude higher than current gets it way past its breaking point.

Nothing about utility maximization model says utility function is unbounded

Yep.

the only mathematical assumptions for a well behaved utility function are U'(x) >= 0, U''(x) <= 0

Utility functions aren't necessarily over domains that allow their derivatives to be scalar, or even meaningful (my notional u.f., over 4D world-histories or something similar, sure isn't). Even if one is, or if you're holding fixed all but one (real-valued) of the parameters, this is far too strong a constraint for non-pathological behavior. E.g., most people's (notional) utility is presumably strictly decreasing in the number of times they're hit with a baseball bat, and non-monotonic in the amount of salt on their food.

Sorry for coming late to this party. ;)

Much of this discussion seems to me to rest on a similar confusion to that evidenced in "Expectation maximization implies average utilitarianism".

As I just pointed out again, the vNM axioms merely imply that "rational" decisions can be represented as maximising the expectation of some function mapping world histories into the reals. This function is conventionally called a utility function. In this sense of "utility function", your preferences over gambles determine your utility (up to an affine transform), so when Omega says "I'll double your utility" this is just a very roundabout (and rather odd) way of saying something like "I will do something sufficiently good that it will induce you to accept my offer".* Given standard assumptions about Omega, this pretty obviously means that you accept the offer.

The confusion seems to arise because there are other mappings from world histories into the reals that are also conventionally called utility functions, but which have nothing in particular to do with the vNM utility function. When we read "I'll double your utility" I think we intuitively parse the phrase as referring to one of these other utility functions, which is when problems start to ensue.

Maximising expected vNM utility is the right thing to do. But "maximise expected vNM utility" is not especially useful advice, because we have no access to our vNM utility function unless we already know our preferences (or can reasonably extrapolate them from preferences we do have access to). Maximising expected utilons is not necessarily the right thing to do. You can maximize any (potentially bounded!) positive monotonic transform of utilons and you'll still be "rational".

* There are sets of "rational" preferences for which such a statement could never be true (your preferences could be represented by a bounded utility function where doubling would go above the bound). If you had such preferences and Omega possessed the usual Omega-properties, then she would never claim to be able to double your utility: ergo the hypothetical implicitly rules out such preferences.

NB: I'm aware that I'm fudging a couple of things here, but they don't affect the point, and unfudging them seemed likely to be more confusing than helpful.

so when Omega says "I'll double your utility" this is just a very roundabout (and rather odd) way of saying something like "I will do something sufficiently good that it will induce you to accept my offer"

It's not that easy. As humans are not formally rational, the problem is about whether to bite this particular bullet, showing a form that following the decision procedure could take and asking if it's a good idea to adopt a decision procedure that forces such decisions. If you already accept the decision procedure, of course the problem becomes trivial.

Which decision procedure are you talking about? Maximising expected vNM utility and maximizing (e.g.) expected utilons are quite different procedures - which was basically my point.

The former doesn't force such decisions at all. That's precisely why I said that it's not useful advice: all it says is that you should take the gamble if you prefer to take the gamble.* (Moreover, if you did not prefer to take the gamble, the hypothetical doubling of vNM utility could never happen, so the set up already assumes you prefer the gamble. This seems to make the hypothetical not especially useful either.)

On the other hand "maximize expected utilons" does provide concrete advice. It's just that (AFAIK) there's no reason to listen to that advice unless you're risk-neutral over utilons. If you were sufficiently risk averse over utilons then a 50% chance of doubling them might not induce you to take the gamble, and nothing in the vNM axioms would say that you're behaving irrationally. The really interesting question then becomes whether there are other good reasons to have particular risk preferences with respect to utilons, but it's a question I've never heard a particularly good answer to.

* At least provided doing so would not result in an inconsistency in your preferences. [ETA: Actually, if your preferences are inconsistent, then they won't have a vNM utility representation, and Omega's claim that she will double your vNM utility can't actually mean anything. The set-up therefore seems to imply that you preferences are necessarily consistent. There sure seem to be a lot of surreptitious assumptions built in here!]

Which decision procedure are you talking about? Maximising expected vNM utility and maximizing (e.g.) expected utilons are quite different procedures - which was basically my point.

[...] you should take the gamble if you prefer to take the gamble

The "prefer" here isn't immediate. People have (internal) arguments about what should be done in what situations precisely because they don't know what they really prefer. There is an easy answer to go with the whim, but that's not preference people care about, and so we deliberate.

When all confusion is defeated, and the preference is laid out explicitly, as a decision procedure that just crunches numbers and produces a decision, that is by construction exactly the most preferable action, there is nothing to argue about. Argument is not a part of this form of decision procedure.

In real life, argument is an important part of any decision procedure, and it is the means by which we could select a decision procedure that doesn't involve argument. You look at the possible solutions produced by many tools, and judge which of them to implement. This makes the decision procedure different from the first kind.

One of the tools you consider may be a "utility maximization" thingy. You can't say that it's by definition the right decision procedure, as first you have to accept it as such through argument. And this applies not only to the particular choice of prior and utility, but also to the algorithm itself, to the possibility of representing your true preference in this form.

The "utilons" of the post linked above look different from the vN-M expected utility because their discussion involved argument, informal steps. This doesn't preclude the topic the argument is about, the "utilons", from being exactly the same (expected) utility values, approximated to suit more informal discussion. The difference is that the informal part of decision-making is considered as part of decision procedure in that post, unlike what happens with the formal tool itself (that is discussed there informally).

By considering the double-my-utility thought experiment, the following question can be considered: assuming that the best possible utility+prior are chosen within the expected utility maximization framework, do the decisions generated by the resulting procedure look satisfactory? That is, is this form of decision procedure adequate, as an ultimate solution, for all situations? The answer can be "no", which would mean that expected utility maximization isn't a way to go, or that you'd need to apply it differently to the problem.

I'm struggling to figure out whether we're actually disagreeing about anything here, and if so, what it is. I agree with most of what you've said, but can't quite see how it connects to the point I'm trying to make. It seems like we're somehow managing to talk past each other, but unfortunately I can't tell whether I'm missing your point, you're missing mine, or something else entirely. Let's try again... let me know if/when you think I'm going off the rails here.

If I understand you correctly, you want to evaluate a particular decision procedure "maximize expected utility" (MEU) by seeing whether the results it gives in this situation seem correct. (Is that right?)

My point was that the result given by MEU, and the evidence that this can provide, both depend crucially on what you mean by utility.

One possibility is that by utility, you mean vNM utility. In this case, MEU clearly says you should accept the offer. As a result, it's tempting to say that if you think accepting the offer would be a bad idea, then this provides evidence against MEU (or equivalently, since the vNM axioms imply MEU, that you think it's ok to violate the vNM axioms). The problem is that if you violate the vNM axioms, your choices will have no vNM utility representation, and Omega couldn't possibly promise to double your vNM utility, because there's no such thing. So for the hypothetical to make sense at all, we have to assume that your preferences conform to the vNM axioms. Moreover, because the vNM axioms necessarily imply MEU, the hypothetical also assumes MEU, and it therefore can't provide evidence either for or against it.*

If the hypothetical is going to be useful, then utility needs to mean something other than vNM utility. It could mean hedons, it could mean valutilons,** it could mean something else. I do think that responses to the hypothetical in these cases can provide useful evidence about the value of decision procedures such as "maximize expected hedons" (MEH) or "maximize expected valutilons" (MEV). My point on this score was simply that there is no particular reason to think that either MEH or MEV were likely to be an optimal decision procedure to begin with. They're certainly not implied by the vNM axioms, which require only that you should maximise the expectation of some (positive) monotonic transform of hedons or valutilons or whatever.*** [ETA: As a specific example, if you decide to maximize the expectation of a bounded concave function of hedons/valutilons, then even if hedons/valutilons are unbounded, you'll at some point stop taking bets to double your hedons/valutilons, but still be an expected vNM utility maximizer.]
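A minimal sketch of the ETA's example, using the star/skull game from the top of the thread; the 40% "nothing happens" outcome, the particular bounded concave transform, and the constant k are my own illustrative assumptions:

import math

def f(hedons, k=100.0):
    # A bounded, concave vNM utility as a function of hedons (illustrative).
    return 1.0 - math.exp(-hedons / k)

def should_draw(hedons, k=100.0):
    # One more draw: 50% star (hedons double), 40% blank (unchanged), 10% skull (utility 0).
    return 0.5 * f(2 * hedons, k) + 0.4 * f(hedons, k) > f(hedons, k)

hedons, draws = 1.0, 0
while should_draw(hedons):
    hedons *= 2   # suppose every accepted draw happens to come up a star
    draws += 1
print(draws, hedons)  # the agent stops accepting after finitely many doublings (8 here)

With an unbounded linear f, the same loop would never terminate, which is the contrast the ETA is drawing.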

Does that make sense?

* This also means that if you think MEU gives the "wrong" answer in this case, you've gotten confused somewhere - most likely about what it means to double vNM utility.

** I define these here as the output of a