Much-Better-Life Simulator™ - Sales Conversation

by XiXiDu
19th Jun 2011
2 min read
49 comments, sorted by top scoring

MixedNuts · 14y · 14 points

Taboo "simulation". Whatever is, is real. Also we're probably already in a simu... I mean, in a subset of the world systematically implementing surface laws different from deeper laws.

Is the problem that the Much-Better Life Simulator only simulates feelings and not their referents? Then Customer should say so. Is it that it involves chatbots rather than other complete people inside? Then Customer should say so. Is it that it loses complexity to cut simulating costs? Then Customer should say so. Is it that trees are allowed to be made of wood, which is allowed to be made of organic molecules, which are allowed to be made out of carbon, which is allowed to be made out of nuclei, which are allowed to be made out of quarks, which are not allowed to be made out of transistors? Then Customer hasn't thought it through.

Giles · 14y · 2 points

If you're allowed to assign utility to events which you cannot perceive but can understand and anticipate, then you can assign a big negative utility to the "going up a simulation level" event.

EDIT: What Pavitra said. I guess I was thinking of turtles all the way down

Pavitra · 14y · 5 points

I think that by "going up a simulation-level" you mean "increase your simulation-depth by one", but intuitively to me "up" sounds like a pop rather than a push.

MixedNuts · 14y · 2 points

Sure you can, but I can't see why you would. Reality is allowed to be made out of atoms, but not out of transistors? Why? (Well, control over the world outside the simulation matters too, but that's mostly solved. Though we do get Lamed Vav deciding to stay on Earth to help people afford the Simulator instead of ascending to Buddhahood themselves! ...hang on, I think I got my religions mixed up.)

Giles · 14y · 0 points

> Well, control over the world outside the simulation matters too, but that's mostly solved

Really? If everyone hooked up to the sim at the same time then you might be right, but our current world seems pretty chaotic.

As to why I'd care about increasing my simulation-depth - essentially my mind runs on hardware which was designed to care about things in the outside world (specifically my reproductive fitness) more than it cares about its own state. I'm free to re-purpose that hardware how I like, but this kind of preference (for me at least) seems to be a kind of hangover from that.

Richard_Kennaway · 14y · 2 points

Taboo "simulation". Whatever is, is real. Also we're probably already in a simu... I mean, in a subset of the world systematically implementing surface laws different from deeper laws.

We're already in a subset of the world systematically implementing surface laws different from deeper laws. All the surface laws we see around us are implemented by atoms. (And subatomic particles, and fields, but "it's all made of atoms" is Feynman's way of summing up the idea.)

Where this differs from "simulations" is that there are no sentient beings at the atomic level telling the atoms how to move to simulate us. This, I think, is the issue with "simulations" -- at least, it's my issue. There is another world outside. If there is another world, I'd rather be out (however many levels of "out" it takes) than in. "In" is an ersatz, a fake experience under the absolute control of other people.

MixedNuts · 14y · 5 points

Nah, our surface rules ain't systematic. I made a laser.

Agree direct puppet-style control is icky. Disagree that is what makes simulations simulatey, or that our own universe is a puppet-theatre-style simulation. If the Matrix masters were constantly deciding "let's add obsidian to this character's inventory", we would be "under the absolute control of other people", but instead they described physical laws and initial conditions and let the simulation unfold without intervention. I'm not particularly icked by that - the universe has to come from somewhere.

jhuffman · 14y · 0 points

Many people now are happy to see the Hand of Whatever involved in our daily lives. I am not sure that is a problem with hypothetical simulations.

DanArmak · 14y · 0 points

Would you still prefer to be "out" if you expected your life outside to be much, much worse than your life inside? Would you accept a lifetime of suffering to gain an iota of control over the "real" outside world?

(This is the reversal of the MBLS. Also, apologies for coming late to the discussion.)

Richard_Kennaway · 14y · 1 point

> Would you still prefer to be "out" if you expected your life outside to be much, much worse than your life inside?

I would prefer to act to make my life outside better.

Scaling the imaginary situation back to everyday matters, your question is like responding to the statement "I'm taking a holiday in Venice" with "but suppose you hate it when you get there?" Or responding to "I'm starting a new business" with "but suppose it fails?" Or responding to "I'm going out with someone new" with "but suppose she's a serial killer?"

All you have done is imagine the scenario ending in failure. Why?

DanArmak · 14y · 0 points

> All you have done is imagine the scenario ending in failure. Why?

Because I'm building it to parallel the original question of whether you'd want to go into an MBLS. In both cases, your potential future life in the simulated or "inside" world is assumed to be much better than the one you might have in the simulating "outside" world. If you give different answers (inside vs. outside) in the two cases, why?

You said:

> There is another world outside. If there is another world, I'd rather be out

As a reason for not entering the MBLS. Would that reason also make you want to escape from our current world to a much more dismal life in the simulating one? To me that would be a repugnant conclusion and is why I'd prefer a much better life in a simulated world, in both cases.

> I would prefer to act to make my life outside better.

An individual's control over their life, in our current world, is far below what I consider acceptable. People are stuck with sick bodies and suffering minds and bad relationships and die in unexpected or painful ways or, ultimately, of unavoidable old age. I would happily trade this for the MBLS experience which would surely offer much greater control.

Do you attach intrinsic value to affecting (even if weakly) the true ultimate level of reality, or do you disagree with my preference for a different reason? If the former, how would you deal with not knowing if we're simulated, or infinite recursions of simulation, or scenarios where infinite numbers of worlds are simulated and simulate others? Would it mean you give high priority to discovering if we're in a simulation and, if so, breaking out - at the expense of efforts to optimize our life in this world?

Richard_Kennaway · 14y · 4 points

> You said:
>
> > There is another world outside. If there is another world, I'd rather be out
>
> As a reason for not entering the MBLS. Would that reason also make you want to escape from our current world to a much more dismal life in the simulating one? To me that would be a repugnant conclusion and is why I'd prefer a much better life in a simulated world, in both cases.

Both scenarios involve the scenario-setter putting their hand on one side of the scales and pushing hard enough to sway my preferences. You might as well ask if I would torture babies for a sufficiently high incentive. These questions are without significance. Ask me again when we actually have uploads and simulations. Meanwhile, strongly rigged scenarios can always beat strong hypothetical preferences, and vice versa. It just becomes a contest over who can name the biggest number.

> how would you deal with not knowing if we're simulated, or infinite recursions of simulation, or scenarios where infinite numbers of worlds are simulated and simulate others?

I don't take such speculations seriously. I've read the arguments for why we're probably living in a simulation and am unimpressed; I am certainly not going to be mugged à la Pascal into spending any substantial effort considering the matter.

XiXiDu · 14y · 0 points

I want to rephrase my last comment:

Utility maximization destroys complex values by choosing the value that yields the most utility, i.e. the best cost-value ratio. One unit of utility is not discriminable from another unit of utility; all a utility maximizer can do is maximize expected utility. If one of its complex values can be effectively realized and optimized, it may come to outweigh all other values. This can only be countered by changing one's utility function to reassign utility in such a way as to outweigh that effect, which will lead to inconsistency, or by discounting the value that threatens to outweigh all others, which will again lead to inconsistency.

MixedNuts · 14y · 10 points

Can't your utility function look like "number of paperclips times number of funny jokes" rather than a linear combination? Then situations where you accept very little humor in exchange for loads of paperclips are much rarer.

Relevant intuition: this trade-off makes me feel sad, so it can't be what I really want. And I hear it's proven that wanting can only work if it involves maximizing a function over the state of the universe.
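
A minimal sketch of the trade-off MixedNuts describes, with toy numbers and invented function names (nothing below comes from the thread itself): under a linear combination, enough paperclips always buy out humor, while under a product, zeroing either value zeroes the total.

```python
# Toy comparison: linear vs. multiplicative aggregation of two values.
def linear_utility(paperclips, jokes, w_p=1.0, w_j=1.0):
    return w_p * paperclips + w_j * jokes

def multiplicative_utility(paperclips, jokes):
    return paperclips * jokes

balanced = (10, 10)       # modest amounts of both goods
extreme = (1_000_000, 0)  # loads of paperclips, zero humor

# Linear: the extreme trade dominates (1000000.0 > 20.0).
print(linear_utility(*balanced), linear_utility(*extreme))
# Product: the extreme trade is worthless (100 vs 0).
print(multiplicative_utility(*balanced), multiplicative_utility(*extreme))
```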

wedrifid · 14y · 2 points

> Utility maximization destroys complex values

No, it doesn't. A utility function can be as complex as you want it to be. In fact, it can be more complex than is possible to represent in the universe.

CuSithBell · 14y · -1 points

For this reason, I almost wish LW would stop talking about utility functions entirely.

wedrifid · 14y · -1 points

> For this reason, I almost wish LW would stop talking about utility functions entirely.

That it is theoretically possible for functions to be arbitrarily complex does not seem to be a good reason to reject using a specific kind of function. Most information-representation formats can be arbitrarily complex. That's what they do.

(This is to say that while I respect your preference for not talking about utility functions, your actual reasons are probably better than "because utility functions can be arbitrarily complex".)

CuSithBell · 14y · 0 points

Right, sorry. The reason I meant was something like "utility functions can be arbitrarily complex and in practice are extremely complex, but this is frequently ignored", what with talk about "what utility do you assign to a firm handshake" or the like.

Edit: And while they have useful mathematical features in the abstract, they seem to become prohibitively complex when modeling the preferences of things like humans.

XiXiDu · 14y · 2 points

> ...what with talk about "what utility do you assign to a firm handshake" or the like.

World states are not uniform entities but compounds of different items, different features, each adding a certain amount of utility, or weight, to the overall value of the world state. If you only consider utility preferences between world states that are not made up of all the items of your utility function, then isn't this a dramatic oversimplification? I don't see what is wrong in asking how you weigh firm handshakes. A world state that features firm handshakes must be different from one that doesn't, even if the difference is tiny. So if I ask how much utility you assign to firm handshakes, I ask how you weigh them - how the absence of firm handshakes would affect the value of a world state. I ask about your utility preferences between possible world states that feature firm handshakes and those that don't.
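
A minimal sketch of this feature-weighted picture (the features and weights are invented for illustration): the utility "assigned to" a firm handshake then falls out as the difference between two otherwise-identical world states.

```python
# A world state as a set of features; utility as a weighted sum.
# The utility "assigned to" one feature is the difference between a
# world state that has it and the same state without it.
WEIGHTS = {"firm_handshake": 1.0, "good_music": 5.0, "friendship": 100.0}

def utility(world_state):
    return sum(WEIGHTS.get(feature, 0.0) for feature in world_state)

world = {"good_music", "friendship"}
print(utility(world | {"firm_handshake"}) - utility(world))  # 1.0
```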

CuSithBell · 14y · 0 points

> World states are not uniform entities but compounds of different items, different features, each adding a certain amount of utility, or weight, to the overall value of the world state. If you only consider utility preferences between world states that are not made up of all the items of your utility function, then isn't this a dramatic oversimplification?

So far as I can tell, you have it backwards - those sorts of functions form a subset of the set of utility functions.

The problem is that utility functions that are easy to think about are ridiculously simple, and produce behavior like the above "maximize one value" or "tile the universe with 'like' buttons". They're characterized by "Handshake = (5*firmness_quotient) UTILS" or "Slice of Cheesecake = 32 UTILS" or what have you.

I'm sure it's possible to discuss utility functions without falling into these traps, but I don't think we do that, except in the vaguest cases.

wedrifid · 14y · -1 points

Ick. Yes. That question makes (almost) no sense.

There are very few instances in which I would ask "what utility do you assign?" regarding a concrete, non-contrived good. I tend to consider utility preferences between possible world states that could arise depending on a specific decision or event and then only consider actual numbers if actually necessary for the purpose of multiplying.

I would certainly prefer to limit use of the term to those who actually understand what it means!

CuSithBell · 14y · 0 points

> There are very few instances in which I would ask "what utility do you assign?" regarding a concrete, non-contrived good.

Exactly. Perhaps if we used a different model (or an explicitly spelled-out simplified subset of the utility functions) we could talk about such things.

> I would certainly prefer to limit use of the term to those who actually understand what it means!

Inconceivable!

XiXiDu · 14y · 0 points

But if you do not "assign utility" and only consider world states, how do you deal with novel discoveries? How does a hunter-gatherer integrate category theory into their utility function? I mean, you have to somehow weigh new items?

Rain · 14y · 1 point

I just go ahead and assign value directly to "novelty" and "variety".

XiXiDu · 14y · 2 points

> I just go ahead and assign value directly to "novelty" and "variety".

Isn't that too unspecific? The digit sequence of any of the endless variety of transcendental numbers can be transcribed into a musical score, or you could use cellular automata to create endless amounts of novel music. But that is not what you mean. If I asked you for a concrete example, you could only tell me something that you already expect but are not sure of, which isn't really novel, or say that you will be able to point out novelty in retrospect. But even the latter answer has a fundamental problem: if you are able to recognize novelty in retrospect, then it is predictable what will excite you and make you label something n-o-v-e-l. In this respect, what you call "novelty" is just like the creation of music from the digits of transcendental numbers: uncertain, but ultimately computable. My point: assigning value to "novelty" and "variety" cannot replace assigning utility to the discrete sequences that make interesting music. You have to weigh discrete items, because those that are sufficiently described by "novelty" and "variety" are just random noise.
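
XiXiDu's digits-to-music example, made literal in a minimal sketch (the digit-to-note mapping and the hard-coded digits of pi are arbitrary choices for illustration):

```python
# Digits of a transcendental number transcribed into notes: an endless,
# perfectly computable stream of "novel" music.
PI_DIGITS = "14159265358979323846"   # hard-coded for illustration
SCALE = ["C", "D", "E", "F", "G", "A", "B"]

melody = [SCALE[int(d) % len(SCALE)] for d in PI_DIGITS]
print(" ".join(melody))  # D G D A E E B A F A ...
```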

Rain · 14y · 1 point

> You have to weigh discrete items, because those that are sufficiently described by "novelty" and "variety" are just random noise.

Continuous random noise is quite monotonous to experience - the opposite of varied. I didn't say that variety and novelty were my only values, just that I assign value to them. I value good music, too, as well as food and other pleasant stimuli. The theory of diminishing returns comes into play, often caused by the facility of the human mind to attain boredom. I view this as a value continuum rather than a set value.

In my mind, I'm picturing one of those bar graphs that show up when music is playing, except instead of music, it's my mind and body moving throughout the day, and each bar represents my value of particular things in the world, with new bars added and old ones dying off, and... well, it's way more complex than, "assign value K to music notes XYZ and call it done." And several times I've been rebuked for using the phrase "assign value to something", as opposed to "discover value as already-implemented by my brain".
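
A toy formalization of the diminishing-returns/boredom point (the logarithmic form and the numbers are assumptions for illustration, not anything Rain specified):

```python
import math

# Diminishing returns: each repetition of the same stimulus adds less
# value, so spreading attention over varied stimuli beats piling up
# more of one thing.
def value(exposures):
    return math.log1p(exposures)  # concave: marginal value shrinks

same = value(100)                          # 100 doses of one stimulus
varied = sum(value(25) for _ in range(4))  # spread over 4 stimuli
print(same < varied)  # True (~4.6 vs ~13.0)
```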

Thomas · 14y · 0 points

> Utility maximization destroys complex values by choosing the value that yields the most utility, i.e. the best cost-value ratio.

That does not necessarily follow. A large plethora of values can itself be the greatest utility.

I don't say that it must always be so. But it can be constructed that way.

XiXiDu · 14y · 0 points

(Note that I myself do not subscribe to wireheading; I am merely trying to fathom the possible thought processes of those who do.)

You are right. But the basic point is that if you are human and subscribe to rational, consistent, unbounded utility maximization, then you assign at least non-negligible utility to unconditional bodily sensations. If you further accept uploading, and that emulations can experience more in a shorter period of time than fleshly humans, then it is a serious possibility that this sheer quantity of experience can outweigh the extra utility you assign to the referents of your rewards, and to other differences such as chatbots standing in for real agents (differences that you can choose to forget).

I believe the gist of the matter is that wireheading appears to its proponents to be the rational choice for a utility-maximizing agent that is the product of biological evolution within our universe. For what it's worth, this could be an explanation for the Fermi paradox.

Giles · 14y · 8 points

Customer: I'm pretty sure the marginal utility of fiction diminishes once a significant portion of my life is taken up by fiction.

XiXiDu · 14y · 0 points

> Customer: I'm pretty sure the marginal utility of fiction diminishes once a significant portion of my life is taken up by fiction.

Then is that also the solution to infinite ethics - that we should be scope-insensitive to ever larger amounts of the same if we already devote a significant portion of our lives to it? And what do you mean by 'diminishes'? Are you saying that we should apply discounting?

Giles · 14y · 0 points

I don't know. The utility function measures outputs rather than inputs; the fiction case is confusing because the two are closely correlated (i.e. how much awesome fiction I consume is correlated with how much time I spend consuming awesome fiction).

For your solution to make sense, we'd need some definition of "time devoted to a particular cause" that we can then manage in our utility function. For example, if parts of your brain are contemplating some ethical problem while you're busy doing something else, does that count as time devoted?

It seems doable though. I don't think it's the solution to infinite ethics but it seems like you could conceive of an agent behaving that way while still being considered rational and altruistic.

Pavitra · 14y · 0 points

If you can increase the intensity of the awesomeness of the fiction, without increasing the duration I spend there, I certainly have no objections. Similarly, if you can give an awesomizing overlay to my productive activity, without interfering with that productivity, then again I have no objections.

My objection to the simulator is that it takes away from my productive work. It's not that I stop caring about fiction, it's that I keep caring about reality.

Even if I accept that living in the simulator is genuinely good and worthwhile... what am I doing sitting around in the sim when I could be out there getting everyone else to sign up? Actually using the simulator creates only one person-hour of sim-time per hour; surely I can get better leverage than that through a little well-placed evangelism.

jsteinhardt · 14y · 6 points

You place utility on entire universe histories, not just isolated events. So I can place 0 utility on all universe histories where I end up living in a simulation, and will always reject the salesgirl's offer.
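
A minimal sketch of this move, with an invented event encoding and predicate (only the idea of utility over whole histories comes from the comment):

```python
# Utility over entire universe histories rather than isolated moments:
# any history containing an "enter_simulation" event gets utility 0,
# so no amount of simulated pleasure makes the offer attractive.
def utility(history):
    if "enter_simulation" in history:
        return 0.0
    return sum(1.0 for event in history if event == "pleasant_hour")

refuse = ["pleasant_hour"] * 10
accept = ["enter_simulation"] + ["pleasant_hour"] * 1_000_000
print(utility(refuse), utility(accept))  # 10.0 0.0
```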

Jonathan_Graehl · 14y · 0 points

> You place utility on entire universe histories

That does seem like the most I can imagine my preferences depending on :)

I generally agree.

[anonymous] · 14y · 6 points

Man, the marketing department of Wireheading Inc. really has a tough job on their hands. Maybe they should just change their vocabulary, make a Facebook app instead and just wait for people to rationalize their choices and join anyway.

benelliott · 14y · 4 points

Hmm, I think I just increased my credence in the master-slave model. It explains the customer's reaction perfectly.

Manfred · 14y · 3 points

On fictional evidence?

benelliott · 14y · 6 points

I was wondering if somebody would catch that.

To be more precise, I updated on the fact that my own reactions were perfectly aligned with the customer's.

Lightwave · 14y · 3 points

To be honest, if a simulation is as rich and complex (in terms of experiences/environment/universe) as the "real world" and (maybe?) also has some added benefit (e.g. more subjective years inside, is "cheaper"), then I can imagine myself jumping in and staying there forever (or for as long as possible).

What's the difference between "real" reality and a simulated one anyway, if all my experiences are going to be identical? I think our intuitions regarding not wanting to be in a simulated world are based on some evolutionary optimization, which no longer applies in a world of uploads and should be done away with.

DanielLC · 14y · 0 points

If all you value is your own experiences, then this would be just as good. You may value other things. For example, I value other people's experiences, and I wouldn't care about happy-looking NPCs. I'd be happier in that simulator, but I'd choose against it, because other things are important.

Lightwave · 14y · 1 point

Other people could join in the simulation as well. Also, new people could be created - what's the difference between being born in the "real world" and the simulated one? So they would be real people; it's not fair to call them just "NPCs".

DanielLC · 14y · 2 points

> Also, new people could be created - what's the difference between being born in the "real world" and the simulated one?

If the simulation is sufficiently accurate to generate qualia, they're real people. If it's only sufficiently accurate to convince me that they're real, they're not. I agree that you can make a simulation that actually has people in it, but the point is that you can also make a simulation that makes me think my desires are fulfilled without actually fulfilling them. I have no desire to be so fooled.

byrnema · 14y · 2 points

A suggestion: I feel like the story focuses too much on 'feelings' (e.g., "all desirable bodily sensations a human body and brain is capable of experiencing"), which people discount a lot and have trained themselves not to optimize for, in favor of things that are more satisfying. (Taking a bath and eating cake would yield more immediate physical, pleasurable sensations than writing this comment, but I know I'll find this more satisfying... I'll slice some cake in a minute.) Ah -- this was better said in LukeProg's recent post Not For the Sake Of Pleasure Alone.

It would be more convincing to appeal to the stronger, concrete desires of people...

Sales girl: Don't you want to know how the world works? Your simulated brain can read and process 100 books a day and invent the equivalent of a PhD thesis on any subject just by directing your attention. When you leave the simulation, you'll need to leave your knowledge behind, but you can return to it at any time.

I wonder about the last sentence I felt compelled to add. Why can't we come and go from the simulator? Then wouldn't it be a no-brainer to choose to spend something like 10 minutes of every hour there? (It would make pleasant experiences more efficient, yielding more time for work.)

Someone else's turn: what else can be done in the simulator that would be most irresistible?

Gedusa · 14y · 1 point

The obvious extra question is:

"If you think it's so great, how come you're not using it?" Unless the sales girl's enjoyable life includes selling the machine she's in to disinterested customers.

jimrandomh · 14y · 16 points

> The obvious extra question is:
>
> "If you think it's so great, how come you're not using it?" Unless the sales girl's enjoyable life includes selling the machine she's in to uninterested customers.

In the least convenient world, the answer is: "I can't afford it until I make enough money by working in sales." Or alternatively, "I have a rare genetic defect which makes the machine not work for me."

Jonathan_Graehl · 14y · 0 points

Well done. But parent comment is still clever and amusing, if useless.

gwern · 13y · 3 points

Obviously in this grim future dystopia, sales has been taken over by tireless machines!

kpreid · 14y · 0 points

She wishes to make sure everyone has the opportunity to enjoy...oh, right.

Dorikka · 14y · 0 points

> Sales girl: We accounted for that as well! Let me ask you how much utility you assign to one hour of ultimate well-being™, where 'ultimate' means the best possible satisfaction of all desirable bodily sensations a human body and brain is capable of experiencing?

My entire life in your simulator might be of less utility than my life outside of it because, if I were (say) roughly utilitarian, the more moderate positive effect that my efforts have on the preference functions of a whole lot of people would be worth more than a huge increase that the simulator would have on my own preference function.

In all honesty, however, I would be really tempted. I'm also pretty sure that I wouldn't have akrasia problems after turning the offer down. Curious how a counterfactual can have such an effect on your outlook, no? Perhaps there's a way to take advantage of that.

Much-Better-Life Simulator™ - Sales Conversation

Related to: A Much Better Life?

Reply to: Why No Wireheading?

The Sales Conversation

Sales girl: Our Much-Better-Life Simulator™ is going to provide the most enjoyable life you could ever experience.

Customer: But it is a simulation - it is fake. I want the real thing; I want to live my real life.

Sales girl: We accounted for all possibilities and determined that the expected utility of your life outside of our Much-Better-Life Simulator™ is dramatically lower.

Customer: You don't know what I value and you can't make me value what I don't want. I told you that I value reality over fiction.

Sales girl: We accounted for that as well! Let me ask you how much utility you assign to one hour of ultimate well-being™, where 'ultimate' means the best possible satisfaction of all desirable bodily sensations a human body and brain is capable of experiencing?

Customer: Hmm, that's a tough question. I am not sure how to assign a certain amount of utility to it.

Sales girl: You say that you value reality more than what you call 'fiction'. But you nonetheless value fiction, right?

Customer: Yes of course, I love fiction. I read science fiction books and watch movies like most humans do.

Sales girl: Then how much more would you value one hour of ultimate well-being™ by other means compared to one hour of ultimate well-being™ that is the result of our Much-Better-Life Simulator™?

Customer: If you ask me like that, I would exchange ten hours in your simulator for one hour of real satisfaction - something that is the result of an actual achievement rather than your fakery.

Sales girl: Thank you. Would you agree if I said that, for you, one hour outside that is ten times less satisfying roughly equals one hour in our simulator?

Customer: Yes, for sure.

Sales girl: Then you should buy our product. Not only is it very unlikely that you will experience even a tenth of the ultimate well-being™ we offer more than a few times per year, but our simulator also allows your brain to experience 20 times more perceptual data than you could outside of it - all at a constant rate, while experiencing ultimate well-being™. And we offer free upgrades that are expected to deliver exponential speed-ups and qualitative improvements for the next few decades.

Customer: Thanks, but no thanks. I'd rather enjoy the real thing.

Sales girl: But I showed you that our product easily outweighs the additional amount of utility you expected to experience outside of our simulator.

Customer: You just tricked me with this utility thing; I don't want to buy your product. Please leave me alone now.
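
For concreteness, the sales girl's arithmetic under the customer's own stated exchange rate (ten simulator-hours for one hour of real 'ultimate' satisfaction) and the claimed 20× subjective speed-up: one wall-clock hour in the simulator is worth

$$20 \times \tfrac{1}{10}\,U_{\text{ultimate}} = 2\,U_{\text{ultimate}},$$

while, by her own claim, a typical hour outside delivers well under $\tfrac{1}{10}\,U_{\text{ultimate}}$, so on these numbers the simulator wins by a factor of twenty or more per hour.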