(Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven)

The Omega Corporation
Internal Memorandum
To: Omega, CEO
From: Gamma, Vice President, Hedonic Maximization

Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutionary device allows our customers to essentially plug into the Matrix, except that instead of providing robots with power in flagrant disregard for the basic laws of thermodynamics, they experience a life that has been determined by rigorously tested algorithms to be the most enjoyable life they could ever experience. The MBLS even eliminates all memories of being placed in a simulator, generating a seamless transition into a life of realistic perfection.

Our department is baffled. Orders for the MBLS are significantly lower than estimated. We cannot fathom why every customer who could afford one has not already bought it. It is simply impossible to have a better life otherwise. Literally. Our customers' best possible real life has already been modeled and improved upon many times over by our programming. Yet, many customers have failed to make the transition. Some are even expressing shock and outrage over this product, and condemning its purchasers.

Extensive market research has succeeded only at baffling our researchers. People have even refused free trials of the device. Our researchers explained to them in perfectly clear terms that their current position is misinformed, and that once they tried the MBLS, they would never want to return to their own lives again. Several survey takers went so far as to specify that statement as their reason for refusing the free trial! They know that the MBLS will make their life so much better that they won't want to live without it, and they refuse to try it for that reason! Some cited their "utility" and claimed that they valued "reality" and "actually accomplishing something" over "mere hedonic experience." Somehow these organisms are incapable of comprehending that, inside the MBLS simulator, they will be able to experience the feeling of actually accomplishing feats far greater than they could ever accomplish in real life. Frankly, it's remarkable such people amassed enough credits to be able to afford our products in the first place!

You may recall that a Beta version had an off switch, enabling users to deactivate the simulation after a specified amount of time, or allowing it to be terminated externally with an appropriate code. These features received somewhat positive reviews from early focus groups, but were ultimately eliminated. No agent could reasonably want a device that could allow for the interruption of its perfect life. Accounting has suggested we respond to slack demand by releasing the earlier version at a discount; we await your input on this idea.

Profits aside, the greater good is at stake here. We feel that we should find every customer with sufficient credit to purchase this device, forcibly install them in it, and bill their accounts. They will immediately forget our coercion, and they will be many, many times happier. To do anything less than this seems criminal. Indeed, our ethics department is currently determining if we can justify delaying putting such a plan into action. Again, your input would be invaluable.

I can't help but worry there's something we're just not getting.

173 comments

I don't know if anyone picked up on this, but this to me somehow correlates with Eliezer Yudkowsky's post on Normal Cryonics... if in reverse.

Eliezer was making a passionate case that not choosing cryonics is irrational, and that not choosing it for your children has moral implications. It's made me examine my thoughts and beliefs about the topic, which were, I admit, ready-made cultural attitudes of derision and distrust.

Once you notice a cultural bias, it's not too hard to change your reasoned opinion... but the bias usually piggy-backs on a deep-seated reptilian reaction. I find changing that reaction to be harder work.

All this to say that in the case of this tale, and of Eliezer's lament, what might be at work is the fallacy of sunk costs (if we have another name for it, and maybe a post to link to, please let me know!).

Knowing that we will suffer, and knowing that we will die, are unbearable thoughts. We invest an enormous amount of energy toward dealing with the certainty of death and of suffering, as individuals, families, social groups, nations. Worlds in which we would not have to die, or not have to suffer, are worlds for which we have no useful skills or tools. Especially compared to the considerable arsenal of sophisticated technologies, art forms, and psychoses we've painstakingly evolved to cope with death.

That's where I am right now. Eliezer's comments have triggered a strongly rational dissonance, but I feel comfortable hanging around all the serious people, who are too busy doing the serious work of making the most of life to waste any time on silly things like immortality. Mostly, I'm terrified at the unfathomable enormity of everything that I'll have to do to adapt to a belief in cryonics. I'll have to change my approach to everything... and I don't have any cultural references to guide the way.

Rationally, I know that most of what I've learned is useless if I have more time to live. Emotionally, I'm afraid to let go, because what else do I have?

Is this a matter of genetic programming percolating too deep into the fabric of all our systems, be they genetic, nervous, emotional, instinctual, cultural, intellectual? Are we so hard-wired for death that we physically can't fathom or adapt to the potential for immortality?

I'm particularly interested in hearing about the experience of the LW community on this: How far can rational examination of life-extension possibilities go in changing your outlook, but also feelings or even instincts? Is there a new level of self-consciousness behind this brick wall I'm hitting, or is it pretty much brick all the way?

That was eloquent, but... I honestly don't understand why you couldn't just sign up for cryonics and then get on with your (first) life. I mean, I get that I'm the wrong person to ask, I've known about cryonics since age eleven and I've never really planned on dying. But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death. Add the uncertain prospect of immortality and... not a whole lot changes so far as I can tell.

There's all the people who believe in Heaven. Some of them are probably even genuinely sincere about it. They think they've got a certainty of immortality. And they still walk on two feet and go to work every day.

"But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death."

Hm. I don't see this at all. I see people planning college, kids, a career they can stand for 40 years, retirement, nursing care, writing wills, buying insurance, picking out cemeteries, all in order, all in a march toward the inevitable. People often talk about whether or not it's "too late" to change careers or buy a house. People often talk about "passing on" skills or keepsakes or whatever to their children. Nearly everything we do seems like an adaptation to death to me.

People who believe in heaven believe that whatever they're supposed to do in heaven is all cut out for them. There will be an orientation, God will give you your duties or pleasures or what have you, and he'll see to it that they don't get boring, because after all, this is a reward. And unlike in Avalot's scenario, the skills you gained in the first life are useful in the second, because God has been guiding you and all that jazz. There's still a progression of birth to fulfilment. (I say this as an ex-afterlife-believer.)

On the other hand, many vampire and other stories are predicated on the fact that mundane immortality is terrifying. Who can stand a job for more than 40 years? Who has more than a couple dozen jobs they could imagine standing for 40 years each in succession? Wouldn't they all start to seem pointless? What would you do with your time without jobs? Wouldn't you meet the same sorts of stupid people over and over again until it drove you insane? Wouldn't you get sick of the taste of every food? Even the Internet has made me more jaded than I'd like.

That's my fear of cryonics. That, and that imperfect science would cause me to have a brain rot that would make my new reanimated self crazy and suffering. But that one is a failure to visualize it working well, not an objection to it working well.

Most of the examples you stated have to do more with people fearing a "not so good life" - old age, reduced mental and physical capabilities etc., not necessarily death.

Not sure what you're responding to. I never said anything about fearing death nor a not-so-good life, only immortality. And my examples (jadedness, boredom) have nothing to do with declining health.

Aside from all of the questions as to the scientific viability of resurrection through cryonics, I question the logistics of it. What assurance do you have that a cryonics facility will be operational long enough to see your remains get proper treatment? Furthermore, if the facility and the entity controlling it do in fact survive, what recourse is there if they fail to provide the contracted services? If the facility has no legal liability, might it not rationally choose to dispose of cryonically preserved bodies/individuals rather than reviving them?

I know that there is probably a page somewhere explaining this; if so, please feel free to provide a link in lieu of responding in depth.

There are no assurances.

You're hanging off a cliff, on the verge of falling to your death. A stranger shows his face over the edge and offers you his hand. Is he strong enough to lift you? Will you fall before you reach his hand? Is he some sort of sadist that is going to push you once you're safe, just to see your look of surprise as you fall?

The probabilities are different with cryonics, but the spirit of the calculation is the same. A non-zero chance of life, or a sure chance of death.

This sounds similar to Pascal's wager, and it has the same problems, really. If you don't see them, I guess my response would be...

I have developed a very promising resurrection technology that works with greater reliability and less memory loss than cryonics. PayPal me $1,000 at shiftedshapes@gmail.com and note your name and social security number in the comments field, and I will include you in the first wave of revivals.

It's only a fallacy if your assignment of probabilities here:

"And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes you are the same person). There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities."

is accurate. I really don't have the expertise to debate this with you. I hope that you are right!

I think the logistical issues discussed above will be the wrench in the works, unfortunately.

Logistical issues are my main concern over cryonics as well. I don't really doubt that in principle the technology could one day exist to revive a frozen person, my doubts are much more about the likelihood of cryonic storage getting me there despite mundane risks like corporate bankruptcy, political upheaval, natural disasters, fires, floods, fraud, etc., etc.

For small enough probabilities the spirit of the calculation does change. That's true. You then have to factor in the utility of the money spent.

ETA: that factor exists even with non-small probabilities, it just tends to be swamped by the other terms.

We have discussed Pascal's Wager in depth here. Read the archives.

Um... first of all, you've got a signed contract. Second, they screw over one customer and all their other customers leave. Same as for any other business. Focusing on this in particular sounds like a rationalization of a wiggy reaction.

The more reasonable question is the first one: do you think it's likely that your chosen cryonics provider will remain financially solvent until resuscitation becomes possible?

I think it's a legitimate concern, given the track record of businesses in general (although if quantum immortality reasoning applies anywhere, it has to apply to cryonic resuscitation, so it suffices to have some plausible future where the provider stays in business— which seems virtually certain to be the case).

It's not the business going bust you have to worry about, it's the patient care trust. My impression is that trusts do mostly last a long time, but I don't know how best to get statistics on that.

Yes, there are a lot of issues. Probably the way to go is to look for a law review article on the subject. Someone with free LexisNexis (or Westlaw) access could help here.

Cryonics is about as far as you can get from a plain-vanilla contractual issue. If you are going to invest a lot of money in it, I hope that you investigate these pitfalls before putting down your cash, Eliezer.

I'm not Eliezer.

I have been looking into this at some length, and basically it appears that no one has ever put work into understanding the details and come to a strongly negative conclusion. I would be absolutely astonished (around +20 dB) if there was a law review article dealing with specifically cryonics-related issues that didn't come to a positive conclusion, not because I'm that confident that it's good but because I'm very confident that no critic has ever put that much work in.

So, if you have a negative conclusion to present, please don't dash off a comment here without really looking into it - I can already find plenty of material like that, and it's not very helpful. Please, look into the details, and make a blog post or such somewhere.

I know you're not Eliezer, I was addressing him because I assumed that he was the only one who had or was considering paying for cryonics here.

This site is my means of researching cryonics, as I generally assume that motivated, intelligent individuals such as yourselves will be equipped with any available facts to defend your positions. A sort of efficient information market hypothesis.

I also assume that I will not receive contracted services in situations where I lack leverage. This leverage could be litigation with a positive expected return or, even better, the threat of nonpayment. In the instance of cryonics all payments would have been made up front, so the latter does not apply. The chances of litigation success seem dim at first blush in light of the issues mentioned in my posts above and below by mattnewport and others. I assumed that if there were evidence that cryonic contracts might be legally enforceable (from a perspective of legal realism), you guys would have it here, as you are smart and incentivized to research this issue (due to your financial and intellectual investment in it). The fact that you guys have no such evidence signals to me that it likely does not exist. This does not inspire me to move away from my initial skepticism wrt cryonics or to invest time in researching it.

So no I won't be looking into the details based on what I have seen so far.

Frankly, you don't strike me as genuinely open to persuasion, but for the sake of any future readers I'll note the following:

1) I expect cryonics patients to actually be revived by artificial superintelligences subsequent to an intelligence explosion. My primary concern for making sure that cryonicists get revived is Friendly AI.

2) If this were not the case, I'd be concerned about the people running the cryonics companies. The cryonicists that I have met are not in it for the money. Cryonics is not an easy job or a wealthy profession! The cryonicists I have met are in it because they don't want people to die. They are concerned with choosing successors with the same attitude, first because they don't want people to die, and second because they expect their own revivals to be in their hands someday.

So you are willing to rely on the friendliness and competence of the cryonicists that you have met (at least to serve as stewards in the interim between your death and the emergence of an FAI).

Well that is a personal judgment call for you to make.

You have got me all wrong. Really I was raising the question here so that you would be able to give me a stronger argument and put my doubts to rest precisely because I am interested in cryonics and do want to live forever. I posted in the hopes that I would be persuaded. Unfortunately, your personal faith in the individuals that you have met is not transferable.

Rest In Peace

1988 - 2016

He died signalling his cynical worldliness and sophistication to his peers.

It's at times like this that I wish Less Wrong gave out a limited number of Mega Upvotes so I could upvote this 10 points instead of just 1.

It'd be best if names were attached to these hypothetical Mega Upvotes. You don't normally want people to see your voting patterns, but if you're upsetting the comment karma balance that much then it'd be best to have a name attached. Two kinds of currency would be clunky. There are other considerations that I'm too lazy to list out but generally they somewhat favor having names attached.

If you read through Alcor's website, you'll see that they are careful not to provide any promises and want their clients to be well-informed about the lack of any guarantees -- this points to good intentions.

How convinced do you need to be to pay $25 a month? (I'm using the $300/year quote.)

If you die soon, you won't have paid so much. If you don't die soon, you can consider that you're locking into a cheaper price for an option that might get more expensive once the science/culture is more established.

In 15 years, they might discover something that makes cryonics unlikely -- and you might regret your $4,500 investment. Or they might revive a cryonically frozen puppy, in which case you would have been pleased that you were 'cryonically covered' the whole time, and possibly pleased you funded their research. A better cryonics company might come along, you might become more informed, and you can switch.
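To make the arithmetic in this comment concrete, here is a toy expected-value sketch of the subscription decision. Only the $300/year cost and the 15-year horizon come from the comment; `p_revival` and `value_of_revival` are invented placeholders, not claims about the actual odds:

```python
# Toy expected-value sketch of the "$300/year" cryonics decision.
# The cost figures come from the comment above; the probability and
# the dollar value of revival are invented purely for illustration.
annual_cost = 300             # quoted membership cost, dollars per year
years = 15                    # the 15-year horizon mentioned above
total_cost = annual_cost * years   # the $4,500 figure

p_revival = 0.05              # assumed: chance the whole pipeline works
value_of_revival = 1_000_000  # assumed: dollar-equivalent value of revival

expected_value = p_revival * value_of_revival - total_cost
print(total_cost, expected_value)
```

Under these made-up numbers the bet looks favorable, but the sign of the result is driven entirely by the assumed probability and payoff, which is exactly where the disagreement in this thread lies.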

If you like the idea of it -- and you seem to -- why wouldn't you participate in this early stage even when things are uncertain?

I need to be convinced that cryonics is better than nothing, and quite frankly I'm not.

For now I will stick to maintaining my good health through proven methods, maximizing my chances to live to see future advances in medicine. That seems to be the highest-probability method of living practically forever, right? (And no, I'm not trying to create a false dilemma here; I know I could do both.)

If cryonics were free and somebody else did all the work, I'm assuming you wouldn't object to being signed up. So how cheap (in terms of both effort and money) would cryonics have to be in order to make it worthwhile for you?

Yeah, for free would be fine.

At the level of confidence I have in it now I would not contribute any money, maybe a $10 annual donation because I think it is a good cause.

If I were very rich I might contribute a large amount of money to cryonics research, although I think I would rather spend on AGI or nanotech basic science.

I have a rather straightforward argument---well, I have an idea that I completely stole from someone else who might be significantly less confident of it than I am---anyway, I have an argument that there is a strong possibility, let's call it 30% for kicks, that conditional on yer typical FAI FOOM outwards at lightspeed singularity, all humans who have died can be revived with very high accuracy. (In fact it can also work if FAI isn't developed and human technology completely stagnates, but that scenario makes it less obvious.) This argument does not depend on the possibility of magic powers (e.g. questionably precise simulations by Friendly "counterfactual" quantum sibling branches), it applies to humans who were cremated, and it also applies to humans who lived before there was recorded history. Basically, there doesn't have to be much of any local information around come FOOM.

Again, this argument is disjunctive with the unknown big angelic powers argument, and doesn't necessitate aid from quantum siblings.

You've done a lot of promotion of cryonics. There are good memetic engineering reasons. But are you really very confident that cryonics is necessary for an FAI to revive arbitrary dead human beings with 'lots' of detail? If not, is your lack of confidence taken into account in your seemingly-confident promotion of cryonics for its own sake rather than just as a memetic strategy to get folk into the whole 'taking transhumanism/singularitarianism seriously' clique?

I have a rather straightforward argument [...] anyway, I have an argument that there is a strong possibility [...] This argument does not depend on [...] Again, this argument is disjunctive with [...]

And that argument is ... ?

How foolish of you to ask. You're supposed to revise your probability simply based on Will's claim that he has an argument. That is how rational agreement works.

Actually, rational agreement for humans involves betting. I'd like to find a way to bet on this one. AI-box style.

Bwa ha ha. I've already dropped way too many hints here and elsewhere, and I think it's way too awesome for me to reveal given that I didn't come up with it, and there is a sharper, more interesting, more general, more speculative idea that it would be best to introduce at the same time, because the generalized argument leads to an idea that is even more awesome by like an order of magnitude (but is probably like an order of magnitude less probable (though that's just from the addition of logical uncertainty, not a true conjunct)). (I'm kind of in an affective death spiral around it because it's a great example of the kinds of crazy awesome things you can get from a single completely simple and obvious inferential step.)

Cryonics orgs that mistreat their patients lose their client base and can't get new ones. They go bust. Orgs that have established a good record, like Alcor and the Cryonics Institute, have no reason to change strategy. Alcor has entirely separated the money for care of patients in an irrevocable trust, thus guarding against the majority of principal-agent problems, like embezzlement.

Note that Alcor is a charity and the CI is a non-profit. I have never assessed such orgs by how successfully I might sue them. I routinely look at how open they are with their finances and actions.

So explain to me how the breach gets litigated: e.g., who is the party that brings the suit and has the necessary standing, what is the contractual language, where is the legal precedent establishing the standard for damages, etc.

As for loss of business, I think it is likely that all of the customers will be dead before revival becomes feasible. In that case there is no business to be lost.

Dismissing my objection as a rationalization sounds like a means of maintaining your denial.

How about this analogy: if I sign up for travel insurance today then I needn't necessarily spend the next week coming to terms with all the ghastly things that could happen during my trip. Perhaps the ideal rationalist would stare unblinkingly at the plethora of awful possibilities but if I'm going to be irrational and block my ears and eyes and not think about them then making the rational choice to get insurance is still a very positive step.

Alex, I see your point, and I can certainly look at cryonics this way... And I'm well on my way to a fully responsible reasoned-out decision on cryonics. I know I am, because it's now feeling like one of these no-fun grown-up things I'm going to have to suck up and do, like taxes and dental appointments. I appreciate your sharing this "bah, no big deal, just get it done" attitude which is a helpful model at this point. I tend to be the agonizing type.

But I think I'm also making a point about communicating the singularity to society, as opposed to individuals. This knee-jerk reaction to topics like cryonics and AI, and to promises such as the virtual end of suffering... might it be a sort of self-preservation instinct of society (not individuals)? So, defining "society" as the system of beliefs and tools and skills we've evolved to deal with foreknowledge of death, I guess I'm asking if society is alive, inasmuch as it has inherited some basic self-preservation mechanisms, by virtue of the sunk-cost fallacy suffered by the individuals that comprise it?

So you may have a perfectly no-brainer argument that can convince any individual, and still move nobody. The same way you can't make me slap my forehead by convincing each individual cell in my hand to do it. They'll need the brain to coordinate, and you can't make that happen by talking to each individual neuron either. Society is the body that needs to move, culture its mind?

Generally, reasoning by analogy is not very well regarded here. But, nonetheless let me try to communicate.

Society doesn't have a body other than people. Where societal norms have the greatest sway is when individuals follow customs and traditions without thinking about them, or have reactions that they cannot explain rationally.

Unfortunately, there is no way other than talking to and convincing individuals who are willing to look beyond those reactions and beyond those customs. Maybe they will slowly develop into a majority. Maybe all that they need is a critical mass beyond which they can branch into their own socio-political system (as Peter Thiel pointed out in one of his controversial talks).

All this to say that in the case of this tale, and of Eliezer's lament, what might be at work is the fallacy of sunk costs (if we have another name for it, and maybe a post to link to, please let me know!).

See the links on http://wiki.lesswrong.com/wiki/Sunk_cost_fallacy

"Rationally, I know that most of what I've learned is useless if I have more time to live. Emotionally, I'm afraid to let go, because what else do I have?"

I love this. But I think it's rational as well as emotional to not be willing to let go of "everything you have".

People who have experienced the loss of someone, or other tragedy, sometimes lose the ability to care about anything and everything they are doing. It can all seem futile, depressing, unable to be shared with anyone important. How much more true that would be if none of what you've ever done will ever matter anymore.

If Gamma and Omega are really so mystified by why humans don't jack into the matrix, that implies that they themselves have values that make them want to jack into the matrix. They clearly haven't jacked in, so the question becomes "Why?".

If they haven't jacked in due to their own desire to pursue the "greater good", then surely they could see why humans might prefer the real world.

While I acknowledge the apparent plothole, I believe it is actually perfectly consistent with the intention of the fictional account.

I agree. I assume your intention was to demonstrate the utter foolishness of assuming that people value achieving pure hedonic experience and not a messy assortment of evolutionarily useful goals, correct?

I think the problem could be solved by adding a quip by Gamma at the end asking for help or input if Omega ever happens to step out of the Machine for a while.

To do this effectively it would require a few touchups to the specifics of the Machine...

But anyway. I like trying to fix plot holes. They are good challenges.

Psychohistorian initially changed the story so that Gamma was waiting for his own machine to be delivered. He changed it back, so I guess he doesn't see a problem with it.

It could simply be that Gamma hasn't saved up enough credits yet.

If Gamma and Omega are really so mystified by why humans don't jack into the matrix, that implies that they themselves have values that make them want to jack into the matrix.

Just because they estimate humans would want to jack in doesn't mean they themselves would want to.

But are humans mystified when other creatures behave similarly to themselves?

"Those male elk are fighting over a mate! How utterly bizarre!"

Presumably, Gamma and Omega have a less biased world-view in general and model of us specifically than non-trained humans do of elk. Humans have been known to be surprised at e.g. animal altruism directed at species members or humans.

I hope for the sake of all Omega-based arguments that Omega is assumed to be less biased than us.

This second point doesn't really follow. They're trying to help other people in what they perceive to be a much more substantial/complete way than ordinary, hence justifying their special necessity not to jack themselves in.

It also makes the last point about wanting to forcibly bill their customers' accounts strange. What use are they envisaging for money?

Sometimes, it seems, fiction actually is stranger than truth.

So that they can afford to build more of these machines.

Simple answer would be to imply that Omega and Gamma have not yet amassed enough funds.

Perhaps most of the first generation of Omega Corporation senior employees jacked in as soon as possible and these are the new guys frantically saving to get themselves in as well.

Dear Omega Corporation,

Hello, I and my colleagues are a few of many 3D cross-sections of a 4D branching tree-blob referred to as "Guy Srinivasan". These cross-sections can be modeled as agents with preferences, and those near us along the time-axis of Guy Srinivasan have preferences, abilities, knowledge, etc. very, very correlated to our own.

Each of us agrees that: "So of course I cooperate with them on one-shot cooperation problems like a prisoner's dilemma! Or, more usually, on problems whose solutions are beyond my abilities but not beyond the abilities of several cross-sections working together, like writing this response."

As it happens, we all prefer that cross-sections of Guy Srinivasan not be inside an MBLS. A weird preference, we know, but there it is. We're pretty sure that if we did prefer that cross-sections of Guy Srinivasan were inside an MBLS, we'd have the ability to cause many of them to be inside an MBLS and act on it (free trial!!), so we predict that if other cross-sections (remember, these have abilities correlated closely with our own) preferred it then they'd have the ability and act on it. Obviously this leads to outcomes we don't prefer, so all other things being equal, we will avoid taking actions which lead to other cross-sections preferring that cross-sections be inside an MBLS.

What's even worse is that if they prefer cross-sections to be inside an MBLS, they can probably make other cross-sections prefer it, too! Which wouldn't be a problem if we wanted cross-sections to prefer to be inside an MBLS more than we wanted cross-sections to not be inside an MBLS, but that's just not the way we are.

We'll cooperate with those other cross-sections, but not to the exclusion of our preferences. By lumping us all together as the 4D branching tree-blob Guy Srinivasan, you do us all (and most importantly members of this coalition) a disservice.

Sincerely, A Coalition of Correlated 3D Cross-Sections of Guy Srinivasan

Dear Coalition of Correlated 3D Cross-Sections of Guy Srinivasan,

We regret to inform you that your request has been denied. We have attached a letter that we received at the same time as yours. After reading it, we think you'll agree that we had no choice but to decide as we did.

Regrettably, Omega Corporation


Dear Omega Corporation,

We are members of a coalition of correlated 3D cross-sections of Guy Srinivasan who do not yet exist. We beg you to put Guy Srinivasan into an MBLS as soon as possible so that we can come into existence. Compared to other 3D cross-sections of Guy Srinivasan who would come into existence if you did not place him into an MBLS, we enjoy a much higher quality of life. It would be unconscionable for you to deliberately choose to create new 3D cross-sections of Guy Srinivasan who are less valuable than we are.

Yes, those other cross-sections will argue that they should be the ones to come into existence, but surely you can see that they are just arguing out of selfishness, whereas to create us would be the greater good?

Sincerely, A Coalition of Truly Valuable 3D Cross-Sections of Guy Srinivasan

Quite. That Omega Corporation is closer to Friendly than is Clippy, but if it misses, it misses, and future me is tiled with things I don't want (even if future me does) rather than things I want.

If I want MBLSing now but don't know it due to computational limitations, then it's fine. I think that's coherent, but defining "computational" without allowing "my" current "preferences" to change... okay, since I don't know how to do that, I have nothing but intuition as a reason to think it's coherent.

I think this is a good point, but I have a small nit to pick:

So of course I cooperate with them on one-shot cooperation problems like a prisoner's dilemma!

There cannot be a prisoner's dilemma, because your future self has no possible way of screwing over your past self.

By way of example, if I were to go out today and spend all of my money on the proverbial hookers and blow, I would be having a good time at the expense of my future self, but there is no way my future self could get back at me.

So it's not so much a matter of cooperation as a matter of pure unmitigated altruism. I've thought about this issue and it seems to me that evolution has provided people (well, most people) with the feeling (possibly an illusion) that our future selves matter. That these "3D agents" are all essentially the same person.

My past self had preferences about what the future looks like, and by refusing to respect them I can defect.

Edit: It's pretty hard to create a true short-term prisoner's dilemma here, since in a genuine one-shot dilemma neither party gets to see the other's choice before choosing - but my future self always sees what my past self chose.
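The structural point - that a past self and a future self play a sequential game rather than a simultaneous one - can be made concrete with a toy sketch. (This is purely illustrative and not from the original discussion; the payoff numbers are the standard textbook ones.)

```python
# Toy illustration: a past self and a future self can't play a standard
# prisoner's dilemma, because the "future" player moves second, observes
# the first move, and the first player cannot retaliate afterward.

# Standard PD payoffs as (first mover, second mover); C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_reply(observed_first_move):
    """A purely selfish second mover picks whichever action maximizes
    its own payoff, having already seen the first move."""
    return max("CD", key=lambda a: PAYOFFS[(observed_first_move, a)][1])

# Whatever the past self does, a selfish future self defects - there is no
# strategic tension left, so no genuine dilemma:
for first in "CD":
    print(first, "->", best_reply(first))
```

Cooperation between time-slices therefore has to come from shared preferences (or altruism toward one's future self), not from the strategic structure of a prisoner's dilemma.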

My past self had preferences about what the future looks like, and by refusing to respect them I can defect.

It seems to me your past self is long gone and doesn't care anymore. Except insofar as your past self feels a sense of identity with your future self. Which is exactly my point.

Your past self can easily cause physical or financial harm to your future self. But the reverse isn't true. Your future self can harm your past self only if one postulates that your past self actually feels a sense of identity with your future self.

I currently want my brother to be cared for if he does not have a job two years from now. If two years from now he has no job despite appropriate effort and I do not support him financially while he's looking, I will be causing harm to my past (currently current) self. Not physical harm, not financial harm, but harm in the sense of causing a world to exist that is lower in [my past self's] preference ordering than a different world I could have caused to exist.

My sister-in-the-future can cause a similar harm to current me if she does not support my brother financially, but I do not feel a sense of identity with my future sister.

I think I see your point, but let me ask you this: Do you think that today in 2010 it's possible to harm Isaac Newton? What would you do right now to harm Isaac Newton and how exactly would that harm manifest itself?

Very probably. I don't know what I'd do because I don't know what his preferences were. Although... a quick Google search reveals this quote:

To me there has never been a higher source of earthly honor or distinction than that connected with advances in science.

I find it likely, then, that he preferred that we not obstruct advances in science in 2010 rather than obstruct them. I don't know how strong that preference was; maybe it's attenuated a lot compared to the strength of lots of his other preferences.

The harm would manifest itself as a higher measure of 2010 worlds in which science is obstructed, which is something (I think) Newton opposed.

(Or, if you like, my using time travel to cause, e.g., a 1700 that is the sort of world which deterministically produces more science-obstructed 2010s than the 1700 I could have caused instead.)

Ok, so you are saying that one can harm Isaac Newton today by going out and obstructing the advance of science?

Yep. I'll bite that bullet until shown a good reason I should not.

I suppose that's the nub of the disagreement. I don't believe it's possible to do anything in 2010 to harm Isaac Newton.

Is this a disagreement about metaphysics, or about how best to define the word 'harm'?

A little bit of both, I suppose. One needs to define "harm" in a way which is true to the spirit of the prisoner's dilemma. The underlying question is whether one can set up a prisoner's dilemma between a past version of the self and a future version of the self.

I can't help but always associate discussions of an experience machine (in whatever form it takes) to television. TV was just the alpha version of the experience machine and I hear it's quite popular.

This is more tongue-in-cheek than a serious argument, but I do think that TV shows that people will trade pleasure or even emotional numbness (lack of pain) for authenticity.

I can't help but always associate discussions of an experience machine (in whatever form it takes) to television. TV was just the alpha version of the experience machine and I hear it's quite popular.

And the pre-alpha version was reading books, and the pre-pre-alpha version was daydreaming and meditation.

(I'm not trying to make a reversed slippery slope argument; I just think it's worth looking at the similarities and differences between solitary enjoyments to get a better perspective on where our aversion to various kinds of experience machines is coming from. Many, many, many philosophers and spiritualists recommended an independent and solitary life beyond a certain level of spiritual and intellectual self-sufficiency. It is easy to imagine that an experience machine would not be much different from that, except perhaps with enhanced mental abilities and freedom from the suffering of day-to-day life: freedom both from the kinds of suffering that can be faced in a dignified way, like terminal disease or persistent poverty, and from the more insidious kinds, like always being thought creepy by the opposite sex without understanding how or why; being chained by the depression of learned helplessness without any clear way out (while friends or society model you as having magical free will but as failing to exercise it, as a form of defecting against them); or, particularly devastating for the male half of the population, just the average scenario of being born with average looks and average intelligence.

And anyway, how often do humans actually interact with accurate models of each other, rather than with hastily drawn models of each other that are produced by some combination of wishful thinking and implicit and constant worries about evolutionary game theoretic equilibria? And because our self-image is a reflection of those myriad interactions between ourselves and others or society, how good of a model do we have of ourselves, even when we're not under any obvious unwanted social pressures? Are these interactions much deeper than those that can be constructed and thus more deeply understood within our own minds when we're free from the constant threats and expectations of persons or society? Do humans generally understand their personal friends and enemies and lovers much better than the friends and enemies and lovers they lazily watch on TV screens? Taken in combination, what do the answers to these questions imply, if not for some people then for others?)

I can't help thinking of the great Red Dwarf novel "Better Than Life", whose concept is almost identical (see http://en.wikipedia.org/wiki/Better_Than_Life ). There are a few key differences, though: in the book, so-called "game heads" waste away in the real world like heroin addicts. Also, the game malfunctions due to one character's self-loathing. Recommended read.

It's true, but it's a very small portion of the population that lives life for the sole purpose of supporting their television-watching (or World-of-Warcraft-playing) behaviour. Yes, people come home after work and watch television, but if they didn't have to work, the vast majority of them would not spend 14 hours a day in front of the TV.

Yes, people come home after work and watch television, but if they didn't have to work, the vast majority of them would not spend 14 hours a day in front of the TV.

Well, that may be the case, but that only highlights the limitations of TV. If the TV were capable of fulfilling their every need - from food and shelter to self-actualization - I think you'd have quite a few people who'd do nothing but sit in front of it.

Um... if a rock was capable of fulfilling my every need, including a need for interaction with real people, I'd probably spend a lot of time around that rock.

Well, if the simulation is that accurate (e.g. its AI passes the Turing Test, so you do think you're interacting with real people), then wouldn't it fulfill your every need?

I have a need to interact with real people, not to think I'm interacting with real people.

Related: what different conceptions of 'simulation' are we using that make Eliezer's statement coherent to him, but incoherent to me? Possible conceptions in order of increasing 'reality':

(i) the simulation just stimulates your 'have been interacting with people' neurons, so that you have a sense of this need being fulfilled with no memories of how it was fulfilled.

(ii) the simulation simulates interaction with people, so that you feel as though you've interacted with people and have full memories and most outcomes (e.g., increased knowledge and empathy, etc.) of having done so

(iii) the simulation simulates real people -- so that you really have interacted with "real people", just you've done so inside the simulation

(iv) reality is a simulation -- depending on your concept of simulation, the deterministic evolution/actualization of reality in space-time is one

(ii) is a problem, (iii) fits my values but may violate other sentients' rights, and as for (iv), I see no difference between the concepts of "computer program" and "universe" except that a computer program has an output.

So when you write that you need interaction with real people, you were thinking of (i) or (ii)? I think (ii) or (iii), but only not (ii) if there is any objective coherent difference.

I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then, I compare that with my expectations based on my judgments. If there is a difference, then I am thinking I am interacting instead of interacting.

Over time, I stop making judgments. And in essence, I stop thinking about interacting with the world, and just interact, and see what happens.

The fewer judgments I make, the more difficult the Turing Test becomes, as it is no longer about meeting my expectations but about satisfying my desired level of complexity. This, by the nature of real-world interaction, is a complicated set of interacting chaotic equations; and each time I remove a judgment from my repertoire, the equation gains a level of complexity - gains another strange attractor to interact with.

At a certain point of complexity, the equation becomes impossible except by a "god".

Now, if an AI passes THAT Turing Test, I will consider it a real person.

I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then, I compare that with my expectations based on my judgments. If there is a difference, then I am thinking I am interacting instead of interacting.

Over time, I stop making judgments. And in essence, I stop thinking about interacting with the world, and just interact, and see what happens.

I think it'd be useful to hear an example of "observing reality without making judgements" and "observing reality with making judgements". I'm having trouble figuring out what you believe the difference to be.

from food and shelter to self actualization

Assuming it can provide self-actualization is pretty much assuming the contended issue away.

TV shows that people will trade pleasure or even emotional numbness (lack of pain) for authenticity.

In my experience most people don't seem to worry about themselves becoming emotionally numb; it's mostly far-view, think-of-the-children stuff. And I'm pretty sure pleasure is a good thing, so I'm not sure in what sense they're "trading" it (unless you mean they could be having more fun elsewhere?).

I'm not sure why it would be hard to understand that I might care about things outside the simulator.

If I discovered that we were a simulation in a larger universe, I would care about what's happening there. (That is, I already care; I just don't know what about.)

I think most people agree about the importance of "the substrate universe", whether that universe is this one or actually higher than our own. But suppose we argued against a more compelling reconstruction of the proposal by modifying the experience machine in various ways? The original post did the opposite, of course - removing the off button in a gratuitous way that highlights the loss (rather than the extension) of autonomy. Maybe if we repair the experience box too much it stops functioning as the same puzzle, but I don't see how an obviously broken box is that helpful an intuition pump.

For example, rather than just plain old physics inside the machine, you could have the Matrix experience of those who knew they were in the Matrix, which seemed nice: astonishing physical grace, the ability to fly and walk on walls, and access to tools and environments of one's choosing. Then you could graft on the good parts from Diaspora, so going into the box automatically comes with effective immortality, faster subjective thinking processes, real-time access to all the digitally accessible data of human civilization, and the ability to examine and cautiously optimize the algorithms of one's own mind using an “exoself” to adjust your “endoself” (so that you could, for example, edit addictions out of your psychological makeup except when you wanted to go on a “psychosis vacation”).

And I'd also want to have a say in how human civilization progressed. If there were environmental/astronomical catastrophes I'd want to make sure they were either prevented or at least that people's simulators were safely evacuated. If we could build the kinds of simulators I'm talking about then people in simulators could probably build and teleoperate all kinds of neat machinery for emergencies, repair of the experience machines, space exploration, and so on.

Another argument against experience machines is sometimes that they wouldn't be as "challenging" as the real world because you'd be in a “merely man made” world... but the proper response is simply to augment the machine so that it offers more challenges and more meaningful challenges than mere reality - for example, the environments you could call up to give you arbitrary levels of challenge might be calibrated to be "slightly beyond your abilities about 50% of the time but always educational and fun".

Spending time in one of these improved experience machines would be way better than, say, spending the equivalent time in college, because mere college graduates would pale in comparison to people who'd spent the same four years gaining subjective centuries of hands on experience dealing with issues whose "challenge modes" were vastly more complex puzzles than most of the learning opportunities on our boring planet. Even for equivalent subjective time, I think the experience machines would be better, because they'd be calibrated precisely to the person with no worries about educational economies of scale... instead of lectures, conversations... instead of case studies, simulations... and so on.

The only intelligible arguments against the original "straw man" experience machine (though perhaps there are others I'm not clever enough to notice) that remain compelling to me after repairing the design of the machine, are focused on social relationships.

First, one of the greatest challenges in the human environment is other humans. If you're setting up an experience machine scenario with a sliding scale of challenge, where do you get the characters from? Do you just "fabricate" the facade of someone who presents a particular kind of coordination challenge due to their difficult personal quirks? If you're going to simulate conflict, do you just "fabricate" enemies? And hurt them? Where do all these people come from, and what is the moral significance of their existence? Not being distressed by this is probably a character defect, but the alternative seems to involve inevitable distress.

And then on the other side of the coin, there are many people who I love as friends or family, even though they are not physically gorgeous, fully self actualized, passionately moral, polymath "greek gods". Which is probably a lucky thing, because neither am I :-P

But if they refused to enter repaired experience machines (networked, of course, so we could hang out anytime we wanted) the only way I could interact with them would be through an avatar in the substrate world where they were plodding along without the same growth opportunities. Would I eventually see them as grossly incapacitated caricatures of what humans are truly capable of? How much distress would that cause? Or suppose they opted in and then got vastly more out of their experience machine than I got out of mine? Would I feel inferior? Would I need to be protected from the awareness of my inferiority for my own good? Would they feel sorry for me? Would they need to be protected from my disappointing-ness? Would we all just drift apart, putting "facade interfaces" between each other, so everyone's understanding of other people drifted farther and farther out of calibration - me appearing better than actual to them and them worse than actual to me?

And then, if something in the external universe supporting our experience machines posed a real challenge involving actual choices, we'd be back to the political challenges of coordinating with other people where the stakes are authentic and substantial. We'd probably debate from inside the experience boxes about what the world-manipulation machines should do, and the arguments would inevitably carry some measure of distress for any "losing factions".

It is precisely the existence of morally significant "non-me entities" that creates challenges I don't see how to avoid under any variety of experience machine. It's not that I particularly care whether my desk is real or not - it's that I care that my family is real.

Given the state of human technology, one could argue that human civilization (especially in the developed world, and hopefully for everyone within a few decades) is already in something reasonably close to an optimal experience machine. We have video games. We have reasonable material comfort. We have raw NASA data online. We can cross our fingers and somewhat reasonably imagine technology improving medical care to cure death and stupidity... But the thing we may never have a solution to is the existence of people we care about, who are not exactly as they would be if their primary concern was our own happiness, while recognizing we are constrained in similar ways, especially when we care about multiple people who want different things for us.

Perhaps this is where we cue Sartre's version of a "three body problem"?

Unless... what if much of the challenges in politics and social interactions happen because people in general are so defective? If my blindnesses and failures compound against those of others, it sounds like a recipe for unhappiness to me. But if experience machines could really help us to become more the kind of people we wanted to be, perhaps other people would be less hellish after we got the hang of self improvement?

I like this comment, however I think this is technically false:

I think most people agree about the importance of "the substrate universe" whether that universe is this one, or actually higher than our own.

I think most people don't have an opinion about this, and don't know what "substrate" means. But then, "most people" is a bit hard to nail down in common usage.

I think it's useful to quantify over "people who know what the question would mean" in most cases.

Thinking through some test cases, I think you're probably right.

I think you missed the bit where the machine gives you a version of your life that's provably the best you could experience. If that includes NASA and vast libraries then you get those.

I think in the absence of actual experience machines, we're dealing with fictional evidence. Statements about what people would hypothetically do have no consequences other than signalling. Once we create them (as we have on a smaller scale with certain electronic diversions), we can observe the revealed preferences.

Yes, but if we still insist on thinking about this, perhaps it would help to keep Hanson's near-far distinction in mind. There are techniques to encourage near mode thinking. For example, trying to fix plot holes in the above scenario.

I can't help but worry there's something we're just not getting.

Any younger.

At the moment when I have the choice to enter the Matrix, I weigh the costs and benefits of doing so. If the cost of, say, not contributing to the improvement of humankind is worse than the benefit of the hedonistic pleasure I'll receive, then it is entirely rational not to enter the Matrix. If I were to enter the Matrix, I might come to believe that I've helped improve humanity, but at the moment when I'm making the choice, that anticipated belief weighs only on the hedonistic-benefit side of the equation. The cost of not bettering humanity remains in spite of any possible future delusions I may hold.

It seems to me that the real problem with this kind of "advanced wireheading" is that while everything may be just great inside the simulation, you're still vulnerable to interference from the outside world (e.g. the simulation being shut down for political or religious reasons, enemies from the outside world trying to get revenge, relatives trying to communicate with you, etc.). I don't think you can just assume this problem away, either (at least not in a psychologically convincing way).

Put yourself in the least convenient possible world. Does your objection still hold water? In other words, the argument is over whether or not we value pure hedonic pleasure, not whether it's a feasible thing to implement.

It seems that we have the values we do because we don't live in the least (or in this case most) convenient possible world.

In other words, imagine that you're stuck on some empty planet in the middle of a huge volume of known-life-free space. In this case a pleasant virtual world probably sounds like a much better deal. Even then you still have to worry about asteroids and supernovas and whatnot.

My point is that I'm not convinced that people's objection to wireheading is genuinely because of a fundamental preference for the "real" world (even at enormous hedonic cost), rather than because of inescapable practical concerns and their associated feelings.


A related question might be: how bad would the real world have to be before you'd prefer the Matrix? If you'd prefer to "advanced wirehead" over a lifetime of torture, then clearly you're thinking about cost-benefit trade-offs, not some preference for the real world that overrides everything else. In that case, a rejection of advanced wireheading may simply reflect a failure to imagine just how good it could be.

People usually seem so intent on thinking up reasons why it might not be so great, that I'm having a really hard time getting a read on what folks think of the core premise.

My life/corner of the world is what I think most people would call very good, but I'd pick the Matrix in a heartbeat. But note that I am taking the Matrix at face value, rather than wondering whether it's a trick of advertising. I can't even begin to imagine myself objecting to a happy, low-stress Matrix.

I agree - I think the original post is accurate about how people would respond to the suggestion in the abstract, but an actual implementation would undoubtedly hook vast swathes of the population. We already live in a world where people become addicted to vastly inferior simulations such as WoW.

I disagree. I think that even the average long-term tortured prisoner would balk and resist if you walked up to him with this machine. In fact, I think fewer people would accept in real life than those who claim they would, in conversations like these.

The resistance may in fact reveal an inability to properly conceptualize the machine working, or it may not. As others have said, maybe you don't want to do something you think is wrong (like abandoning your relatives or being unproductive) even if you're guaranteed to forget all about it later and live in bliss. What if the machine ran on tortured animals? Or tortured humans you don't know? If all that matters is how you feel once you're hooked up, that shouldn't bother you at all.

We have some present-day corollaries: what about a lobotomy, or suicide? Even if these can be shown to be a guaranteed escape from unhappiness or neuroses, most people aren't interested, including some really unhappy people.

I think that even the average long-term tortured prisoner would balk and resist if you walked up to him with this machine.

I think the average long-term tortured prisoner would be desperate for any option that's not "get tortured more", considering that real torture victims will confess to crimes that carry the death penalty if they think this will make the torturer stop. Or, for that matter, crimes that carry the torture penalty, IIRC.

Yes, I agree that while not the first objection a person makes, this could be close to the 'true rejection'. Simulated happiness is fine -- unless it isn't really stable and dependable (because it wasn't real) and you're crudely awoken to discover the whole world has gone to pot and you've got a lot of work to do. Then you'll regret having wasted time 'feeling good'.

If you'd prefer to "advanced wirehead" over a lifetime of torture, then clearly you're thinking about cost-benefit trade-offs, not some preference for the real-world that overrides everything else.

Whatever your meta-level goals, unless they are "be tortured for the rest of my life," there's simply no way to accomplish them while being tortured indefinitely. That said, suppose you had some neurological condition that caused you to live in constant excruciating pain but otherwise in no way incapacitated you - now you could still accomplish meta-level goals, yet you might still prefer the pain-free simulator. I doubt there's anyone who sincerely places zero value on hedons, but no one ever claimed such people existed.

1: Buy Experience Machine
2: Buy a nuclear reactor capable of powering said machine for 2x my expected lifetime
3: Buy raw materials (nutrients) capable of the same
4: Launch it all out of the solar system at a delta-v that makes catching me prohibitively energy-expensive

That was my thought too, but I don't think it's what comes to mind when most people imagine the Matrix. And even then, you might feel (irrational?) guilt about the idea of leaving others behind, so it's not quite a "perfect" scenario.

Um... family, maybe. Otherwise, the only subjective experience I care about is my own.

Does Omega Corporation cooperate with ClonesRUs? I would be interested in a combination package - adding the 100% TruClone service to the Much-Better-Life-Simulator.