Related: Not for the Sake of Happiness (Alone), Value is Fragile, Fake Fake Utility Functions, You cannot be mistaken about (not) wanting to wirehead, Utilons vs. Hedons, Are wireheads happy?

When someone tells me that all human action is motivated by the desire for pleasure, or that we can solve the Friendly AI problem by programming a machine superintelligence to maximize pleasure, I use a two-step argument to persuade them that things are more complicated than that.

First, I present them with a variation on Nozick's experience machine,1 something like this:

Suppose an advanced team of neuroscientists and computer scientists could hook your brain up to a machine that gave you maximal, beyond-orgasmic pleasure for the rest of an abnormally long life, and then blast you and the pleasure machine into deep space at near light-speed so that you could never be interfered with. Would you let them do this for you?

Most people say they wouldn't choose the pleasure machine. They begin to realize that even though they usually experience pleasure when they get what they desire, they want more than just pleasure. They also want to visit Costa Rica and have good sex and help their loved ones succeed.

But we can be mistaken when inferring our desires from such intuitions, so I follow this up with some neuroscience.


Wanting and liking

It turns out that the neural pathways for 'wanting' and 'liking' are separate, but overlap quite a bit. This explains why we usually experience pleasure when we get what we want, and thus are tempted to think that all we desire is pleasure. It also explains why we sometimes don't experience pleasure when we get what we want, and why we wouldn't plug in to the pleasure machine.

How do we know this? We now have objective measures of wanting and liking (desire and pleasure), and these processes do not always occur together.

One objective measure of liking is 'liking expressions.' Human infants, primates, and rats exhibit homologous facial reactions to pleasant and unpleasant tastes.2 For example, both rats and human infants display rhythmic lip-licking movements when presented with sugary water, and both rats and human infants display a gaping reaction and mouth-wipes when presented with bitter water.3

Moreover, these animal liking expressions change in ways analogous to changes in human subjective pleasure. Food is more pleasurable to us when we are hungry, and sweet tastes elicit more liking expressions in rats when they are hungry than when they are full.4 Similarly, both rats and humans respond to intense doses of salt (more concentrated than in seawater) with mouth gapes and other aversive reactions, and humans report subjective displeasure. But if humans or rats are depleted of salt, both humans and rats react instead with liking expressions (lip-licking), and humans report subjective pleasure.5

Luckily, these liking and disliking expressions share a common evolutionary history and use the same brain structures in rats, primates, and humans. This has allowed fMRI studies to uncover, to some degree, the neural correlates of pleasure, giving us another objective measure of pleasure.6

As for wanting, research has revealed that dopamine is necessary for wanting but not for liking, and that dopamine largely causes wanting.7

Now we are ready to explain how we know that we do not desire pleasure alone.

First, one can experience pleasure even if dopamine-generating structures have been destroyed or depleted.8 Chocolate milk still tastes just as pleasurable despite the severe reduction of dopamine neurons in patients suffering from Parkinson's disease,9 and the pleasure of amphetamine and cocaine persists throughout the use of dopamine-blocking drugs or dietary-induced dopamine depletion — even while these same treatments do suppress the wanting of amphetamine and cocaine.10

Second, elevation of dopamine causes an increase in wanting, but does not cause an increase in liking (when the goal is obtained). For example, mice with raised dopamine levels work harder and resist distractions more (compared to mice with normal dopamine levels) to obtain sweet food rewards, but they don't exhibit stronger liking reactions when they obtain the rewards.11 In humans, drug-induced dopamine increases correlate well with subjective ratings of 'wanting' to take more of the drug, but not with ratings of 'liking' that drug.12 In these cases, it becomes clear that we want some things besides the pleasure that usually results when we get what we want.
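To make the dissociation concrete, here is a minimal toy sketch of the idea. It is purely illustrative (the linear "dopamine scales wanting" rule, the function names, and all numbers are invented assumptions, not any published model): raising a dopamine parameter increases how hard the simulated agent works for a reward without changing the hedonic reaction when the reward arrives.

```python
# Toy illustration only (not a published model): 'wanting' is modeled as
# dopamine-scaled effort to obtain a cued reward, while 'liking' is a
# separate hedonic response to consuming the reward. All constants and
# the linear scaling rule are assumptions chosen for illustration.

def wanting(dopamine_level: float, cue_salience: float) -> float:
    """Effort the agent will expend to obtain the cued reward."""
    return dopamine_level * cue_salience

def liking(reward_sweetness: float, hunger: float) -> float:
    """Hedonic reaction when the reward is actually consumed."""
    return reward_sweetness * (1.0 + hunger)

for dopamine in (1.0, 3.0):  # normal vs. elevated dopamine
    effort = wanting(dopamine, cue_salience=2.0)
    pleasure = liking(reward_sweetness=5.0, hunger=0.5)
    print(f"dopamine={dopamine}: effort={effort:.1f}, liking={pleasure:.1f}")

# dopamine=1.0: effort=2.0, liking=7.5
# dopamine=3.0: effort=6.0, liking=7.5
# Tripling dopamine triples the effort expended ('wanting') but leaves
# the hedonic reaction ('liking') unchanged, mirroring the mouse and
# human findings cited above.
```

The sketch makes only a structural point: if the two quantities are computed by separate mechanisms, boosting the input to one need not move the other.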

Indeed, it appears that mammals can come to want something that they have never before experienced pleasure when getting. In one study,13 researchers observed the neural correlates of wanting while feeding rats intense doses of salt during their very first time in a state of salt-depletion. That is, the rats had never before experienced intense doses of salt as pleasurable (because they had never been salt-depleted before), and yet they wanted salt the very first time they encountered it in a salt-depleted state. 

 

Commingled signals

But why are liking and wanting so commingled that we might confuse the two, or think that the only thing we desire is pleasure? It may be because the two different signals are literally commingled on the same neurons. Researchers explain:

Multiplexed signals commingle in a manner akin to how wire and optical communication systems carry telephone or computer data signals from multiple telephone conversations, email communications, and internet web traffic over a single wire. Just as the different signals can be resolved at their destination by receivers that decode appropriately, we believe that multiple reward signals [liking, wanting, and learning] can be packed into the activity of single ventral pallidal neurons in much the same way, for potential unpacking downstream.

...we have observed a single neuron to encode all three signals... at various moments or in different ways (Smith et al., 2007; Tindell et al., 2005).14
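As a loose software analogy of the researchers' telecommunications metaphor (this is my own sketch and makes no claim about how ventral pallidal neurons actually encode anything; the tag-based scheme is an invented stand-in), several labeled signals can share one channel and still be recovered by a downstream decoder:

```python
# Purely illustrative analogy (no claim about real neural coding):
# pack 'liking', 'wanting', and 'learning' samples onto one channel
# as tagged values, then unpack them downstream.

def multiplex(liking, wanting, learning):
    """Interleave three signal streams into one tagged stream."""
    channel = []
    for l, w, e in zip(liking, wanting, learning):
        channel += [("liking", l), ("wanting", w), ("learning", e)]
    return channel

def demultiplex(channel):
    """Recover the separate streams by reading the tags."""
    streams = {"liking": [], "wanting": [], "learning": []}
    for tag, value in channel:
        streams[tag].append(value)
    return streams

mixed = multiplex([0.2, 0.9], [0.7, 0.1], [0.0, 0.5])
print(demultiplex(mixed))
# {'liking': [0.2, 0.9], 'wanting': [0.7, 0.1], 'learning': [0.0, 0.5]}
```

The analogy also suggests why introspection might conflate the signals: upstream of any decoder, they really are carried by the same activity.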

 

Conclusion

In the last decade, neuroscience has confirmed what intuition could only suggest: that we desire more than pleasure. We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.

 

 

Notes

1 Nozick (1974), pp. 44-45.

2 Steiner (1973); Steiner et al. (2001).

3 Grill & Berridge (1985); Grill & Norgren (1978).

4 Berridge (2000).

5 Berridge et al. (1984); Schulkin (1991); Tindell et al. (2006).

6 Berridge (2009).

7 Berridge (2007); Robinson & Berridge (2003).

8 Berridge & Robinson (1998); Berridge et al. (1989); Pecina et al. (1997).

9 Sienkiewicz-Jarosz et al. (2005).

10 Brauer et al. (2001); Brauer & de Wit (1997); Leyton (2009); Leyton et al. (2005).

11 Cagniard et al. (2006); Pecina et al. (2003); Tindell et al. (2005); Wyvell & Berridge (2000).

12 Evans et al. (2006); Leyton et al. (2002).

13 Tindell et al. (2009).

14 Aldridge & Berridge (2009). See Smith et al. (2011) for more recent details on commingling.


References

Aldridge & Berridge (2009). Neural coding of pleasure: 'rose-tinted glasses' of the ventral pallidum. In Kringelbach & Berridge (eds.), Pleasures of the brain (pp. 62-73). Oxford University Press.

Berridge (2000). Measuring hedonic impact in animals and infants: Microstructure of affective taste reactivity patterns. Neuroscience and Biobehavioral Reviews, 24: 173-198.

Berridge (2007). The debate over dopamine's role in reward: the case for incentive salience. Psychopharmacology, 191: 391-431.

Berridge (2009). ‘Liking’ and ‘wanting’ food rewards: Brain substrates and roles in eating disorders. Physiology & Behavior, 97: 537-550.

Berridge, Flynn, Schulkin, & Grill (1984). Sodium depletion enhances salt palatability in rats. Behavioral Neuroscience, 98: 652-660.

Berridge, Venier, & Robinson (1989). Taste reactivity analysis of 6-hydroxydopamine-induced aphagia: Implications for arousal and anhedonia hypotheses of dopamine function. Behavioral Neuroscience, 103: 36-45.

Berridge & Robinson (1998). What is the role of dopamine in reward: Hedonic impact, reward learning, or incentive salience? Brain Research Reviews, 28: 309-369.

Brauer, Cramblett, Paxton, & Rose (2001). Haloperidol reduces smoking of both nicotine-containing and denicotinized cigarettes. Psychopharmacology, 159: 31-37.

Brauer & de Wit (1997). High dose pimozide does not block amphetamine-induced euphoria in normal volunteers. Pharmacology Biochemistry & Behavior, 56: 265-272.

Cagniard, Beeler, Britt, McGehee, Marinelli, & Zhuang (2006). Dopamine scales performance in the absence of new learning. Neuron, 51: 541-547.

Evans, Pavese, Lawrence, Tai, Appel, Doder, Brooks, Lees, & Piccini (2006). Compulsive drug use linked to sensitized ventral striatal dopamine transmission. Annals of Neurology, 59: 852-858.

Grill & Berridge (1985). Taste reactivity as a measure of the neural control of palatability. In Epstein & Sprague (eds.), Progress in Psychobiology and Physiological Psychology, Vol 2 (pp. 1-6). Academic Press.

Grill & Norgren (1978). The taste reactivity test II: Mimetic responses to gustatory stimuli in chronic thalamic and chronic decerebrate rats. Brain Research, 143: 263-279.

Leyton, Boileau, Benkelfat, Diksic, Baker, & Dagher (2002). Amphetamine-induced increases in extracellular dopamine, drug wanting, and novelty seeking: a PET/[11C]raclopride study in healthy men. Neuropsychopharmacology, 27: 1027-1035.

Leyton, Casey, Delaney, Kolivakis, & Benkelfat (2005). Cocaine craving, euphoria, and self-administration: a preliminary study of the effect of catecholamine precursor depletion. Behavioral Neuroscience, 119: 1619-1627.

Leyton (2009). The neurobiology of desire: Dopamine and the regulation of mood and motivational states in humans. In Kringelbach & Berridge (eds.), Pleasures of the brain (pp. 222-243). Oxford University Press.

Nozick (1974). Anarchy, State, and Utopia. Basic Books.

Pecina, Berridge, & Parker (1997). Pimozide does not shift palatability: Separation of anhedonia from sensorimotor suppression by taste reactivity. Pharmacology Biochemistry and Behavior, 58: 801-811.

Pecina, Cagniard, Berridge, Aldridge, & Zhuang (2003). Hyperdopaminergic mutant mice have higher 'wanting' but not 'liking' for sweet rewards. The Journal of Neuroscience, 23: 9395-9402.

Robinson & Berridge (2003). Addiction. Annual Review of Psychology, 54: 25-53.

Schulkin (1991). Sodium Hunger: the Search for a Salty Taste. Cambridge University Press.

Sienkiewicz-Jarosz, Scinska, Kuran, Ryglewicz, Rogowski, Wrobel, Korkosz, Kukwa, Kostowski, & Bienkowski (2005). Taste responses in patients with Parkinson's disease. Journal of Neurology, Neurosurgery, & Psychiatry, 76: 40-46.

Smith, Berridge, & Aldridge (2007). Ventral pallidal neurons distinguish 'liking' and 'wanting' elevations caused by opioids versus dopamine in nucleus accumbens. Program No. 310.5, 2007 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience.

Smith, Berridge, & Aldridge (2011). Disentangling pleasure from incentive salience and learning signals in brain reward circuitry. Proceedings of the National Academy of Sciences (PNAS Plus), 108: 1-10.

Steiner (1973). The gustofacial response: Observation on normal and anencephalic newborn infants. Symposium on Oral Sensation and Perception, 4: 254-278.

Steiner, Glaser, Hawillo, & Berridge (2001). Comparative expression of hedonic impact: affective reactions to taste by human infants and other primates. Neuroscience and Biobehavioral Reviews, 25: 53-74.

Tindell, Berridge, Zhang, Pecina, & Aldridge (2005). Ventral pallidal neurons code incentive motivation: Amplification by mesolimbic sensitization and amphetamine. European Journal of Neuroscience, 22: 2617-2634.

Tindell, Smith, Pecina, Berridge, & Aldridge (2006). Ventral pallidum firing codes hedonic reward: When a bad taste turns good. Journal of Neurophysiology, 96: 2399-2409.

Tindell, Smith, Berridge, & Aldridge (2009). Dynamic computation of incentive salience: 'wanting' what was never 'liked'. The Journal of Neuroscience, 29: 12220-12228.

Wyvell & Berridge (2000). Intra-accumbens amphetamine increases the conditioned incentive salience of sucrose reward: Enhancement of reward 'wanting' without enhanced 'liking' or response reinforcement. Journal of Neuroscience, 20: 8122-8130.

Comments

Most people say they wouldn't choose the pleasure machine

Possibly because the word "machine" is sneaking in connotations that lead to the observed conclusion: we picture something like a morphine pump, or something perhaps only slightly less primitive.

What if we interpret "machine" to mean "a very large computer running a polis under a fun-theoretically optimal set of rules" and "hook up your brain" to mean "upload"?

loup-vaillant · 13y
Then you're talking Friendly AI with the prior restriction that you have to live alone. Many¹ people will still run the "I would be subjected to a machine" cached thought, will still disbelieve that a Machine™ could ever understand our so-complex-it's-holy psyche, will still assume that even if it does, the result will automatically be horrible, and that the whole concept is absurd anyway. In that case they wouldn't reject the possibility because they don't want to live alone and happy, but because they positively believe FAI is impossible. My solution in that case is just to propose that they live a guaranteed happy life, but alone. For people who still refuse to answer on the grounds of impossibility, invoking the supernatural may help.

1: I derive that "many" from one example alone, but I suspect it extends to most enlightened people who treat philosophy as closer to literature than science (wanting to read the sources, and treating questions like "was Nietzsche/Kant/Spinoza plainly wrong on such-and-such a point" as ill-typed — there are no truths or fallacies, only schools of thought). Michel Onfray appears to say that's typically European.
tyrsius · 13y
This machine, if it were to give you maximal pleasure, should be able to make you feel as if you are not alone. The only way I can see this machine actually making good on its promise is to be a Matrix-quality reality engine, but with you in the king seat. I would take it.
loup-vaillant · 13y
Of course it would. My question is, to what extent would you mind being alone? Not feeling alone, not even believing you are alone, just being alone. Of course, once I'm plugged into my Personal Matrix, I would not mind any more, for I would neither feel nor believe that I am alone. But right now I do mind. Whatever the real reasons behind it, being cut off from the rest of the world just feels wrong. Basically, I believe I want Multiplayer Fun badly enough to sacrifice some Personal Fun. Now, I probably wouldn't want to sacrifice much personal fun, so given the choice between maximum Personal Fun and my present life (no third alternative allowed), I would probably take the blue pill. Though it would really bother me if everyone else weren't given the same choice. Now, to get back on topic, I suspect Luke did want to talk about a primitive system that would turn you into an Orgasmium. Something that would even sacrifice Boredom to maximize subjective pleasure and happiness. (By the way, I suspect that the "Eternal Bliss" promised by some belief systems is just as primitive.) Such a primitive system would exactly serve his point: do you only want happiness and pleasure? Would you sacrifice everything else to get it?
tyrsius · 13y
If this is indeed Luke's intended offer, then I believe it to be a lie. Without the ability to introduce varied pleasure, an Orgasmium would fail to deliver on its promise of "maximal pleasure." For the offer to be true, it would need to be a Personal Matrix.
jhuffman · 13y
Some people think that extended periods of euphoria give up no marginal pleasure. I haven't found that to be the case - but perhaps if we take away any sense of time passing then it would work.

Both you and Eliezer seem to be replying to this argument:

  • People only intrinsically desire pleasure.

  • An FAI should maximize whatever people intrinsically desire.

  • Therefore, an FAI should maximize pleasure.

I am convinced that this argument fails for the reasons you cite. But who is making that argument? Is this supposed to be the best argument for hedonistic utilitarianism?

I wonder how much taking these facts into account helps. The error that leads people to round up to simplistic goals such as "maximize pleasure" could just be replayed at a more sophisticated level, where they'd say "maximize neural correlates of wanting" or something like that, and move to the next simplest thing that their current understanding of neuroscience doesn't authoritatively forbid.

Sure. And then I write a separate post to deal with that one. :)

There are also more general debunkings of all such 'simple algorithm for friendly ai' proposals, but I think it helps to give very concrete examples of how particular proposed solutions fail.

poh1ceko · 13y
It helps insofar as the person's conscious mind lags behind in awareness of the object being maximized.
[anonymous] · 10y
Moore proposes an alternative theory in which an actual pleasure is already present in the desire for the object, so that the desire is then for that object and only indirectly for any pleasure that results from attaining it: "In the first place, plainly, we are not always conscious of expecting pleasure, when we desire a thing. We may only be conscious of the thing which we desire, and may be impelled to make for it at once, without any calculation as to whether it will bring us pleasure or pain. In the second place, even when we do expect pleasure, it can certainly be very rarely pleasure only which we desire."
Miller · 13y
Sounds like a decent methodology to me.
[anonymous] · 13y

[Most people] begin to realize that even though they usually experience pleasure when they get what they desired, they want more than just pleasure. They also want to visit Costa Rica and have good sex and help their loved ones succeed.

Actually, they claim to also want those other things. Your post strongly implies that this claim is justified, when it seems much more plausible to me to just assume they instead also want a dopamine hit. In other words, to properly wirehead them, Nozick's machine would not just stimulate one part (liking), but two (+wanting) or three (+learning).

So I don't really see the point of this article. If I take it at face-value (FAI won't just optimize the liking response), then it's true, but obvious. However, the subtext seems to be that FAI will have to care about a whole bunch of values and desires (like travels, sex, companionship, status and so on), but that doesn't follow from your arguments.

It seems to me that you are forcing your conclusions in a certain direction that you so far haven't justified. Maybe that's just because this is an early post and you'll still get to the real meat (I assume that is the case and will wait), but I'm uncomfortable with the way it looks right now.

Hey Lukeprog, thanks for your article.

I take it you have read the new book "Pleasures of the Brain" by Kringelbach & Berridge? I've got the book here but haven't yet had the time/urge to read it bookend to bookend. From what I've glimpsed while superficially thumbing through it however, it's basically your article in book-format. Although I believe I remember that they also give "learning" the same attention as they give to liking and wanting, as one of your last quotations hints at.

It's quite a fascinating thought, that the "virtue" of curiosity which some people display is simply because they get a major kick out of learning new things - as I suspect most people here do.


Anyway, I've never quite bought the experience machine argument for two reasons:

1) As we probably all know: what people say they WOULD DO is next to worthless. Look at the hypnotic pull of World of Warcraft. It's easy for me to resist, but there may very well be possible virtual realities that strike my tastes in such an irresistible manner that my willpower will be powerless and I'd prefer the virtual reality over this one. I may feel quite guilty about it, but it may be so h... (read more)

I don't understand what everything after Nozick's experience machine scenario is for. That is, the rest of the post doesn't seem to support the idea that we intrinsically value things other than pleasure. It tells us why sometimes what we think will give us pleasure is wrong (wanting more of the drug), and why some things give us pleasure even though we didn't know they would (salt when salt-deprived)... but none of this means that pleasure isn't our objective.

Once we know about salt, we'll want it for the pleasure. We would also normally stop wanting something that turns out not to give us pleasure, as I stopped wanting to eat several chocolate bars at a time. It might not work like this in the drug example, though, that's true; but does this mean that there's something about the drug experience we desire besides the pleasure, or are our brains being "fooled" by the dopamine?

I think the latter. There is no aspect wanted, just the wanting itself.... possibly because dopamine is usually associated with - supposed to signal - pleasure?

(I think once anyone got in your Nozick's machine variation, they would change their minds and not want to get out. We think we'd exper... (read more)

Friendly-HI · 13y
"the rest of the post doesn't seem to support the idea that we desire things other than pleasure." I think it does, depending of how you interpret the word "desire". Suppose I'm a smoker who is trying to quit - I don't like smoking at all anymore, I hate it - but I am still driven to do it, because I simply can't resist... it's wanting without liking. So in a sense this example clearly demonstrates, that people are driven by other urges that can be completely divorced from hedonic concerns - which is the whole point of this topic. This seems to be entirely true, so there definitely is an insight to be had here for someone who may have thought otherwise until now. I think the key to this "but does it really make a sound"-misunderstanding resides within the word "desire" . Do I "desire" to smoke when I actually dislike doing it? It depends entirely what you mean by "desire". Because "wanting" and "liking" usually occur simultaneously, some people will interpret the word desire more into the direction of "wanting", while in other people's brains it may be associated more with the concept of "liking". So what are we even talking about here? If I understood your viewpoint correctly, you'd agree with me that doing something we only "want" but don't "like" is a waste of time. We may be hardwired to do it, but if there is no gain in pleasure either directly or indirectly from such behavior, it's a waste of time and not desirable. What about the concept of learning? What about instances where learning isn't associated with gain in pleasure at all (directly or indirectly, absolutely no increased utility later on)? Is it a waste of time as well, or is learning an experience worth having or pursuing, even if it had absolutely no connection to pleasure at all? Despite being a very curious person I'd say that's waste of time as well. I'm thinking of learning something (presumably) entirely pointless like endless lists full of names and scores of sport stars. Complete waste
Hul-Gil · 13y
That's a very good point, and I'm not sure why I didn't think to rephrase that sentence. I even state, later in the post, that in the case of the drug example one would still want something that provides no pleasure. (In both that example and the smoking example, I might say our brains are being "fooled" by the chemicals involved, by interpreting them as a result of pleasurable activity; but I don't know if this is correct.) I was thinking of "desire" in terms of "liking", I think: I meant my sentence to mean "...doesn't seem to support that we would like anything except that which gives us pleasure." This is, however, a problem with my phrasing, and not one with the idea I was trying to convey. I hope the rest of my post makes my actual viewpoint clear - as it seems to, since you have indeed understood me correctly. The main thrust of the post was supposed to be that pleasure is still the "Holy Grail." I will rephrase that sentence to "the rest of the post doesn't seem to support the idea that we intrinsically value things other than pleasure." (A bit off topic: as I said, though, I still wouldn't get in the experience machine, because how I obtain my pleasure is important to me... or so it seems. I sympathize with your cigarette problem, if it's not just an example; I used to have an opioid problem. I loved opioids for the pure pleasure they provided, and I still think about them all the time. However, I would never have been content to be given an endless supply of morphine and shot off into space: even while experiencing pleasure as pure as I've ever felt it, I wanted to talk to people and write and draw. It seems like the opioid euphoria hit a lot of my "pleasure centers", but not all of them.)
Friendly-HI · 13y
Thankfully the cigarette problem isn't mine, I have no past and hopefully no future of addiction. But I know how stupendously hard it can be to jump over one's shadow and give up short-term gratifications for the benefit of long-term goals or payoffs. I'm a rather impulsive person, but thankfully I never smoked regularly and I stopped drinking alcohol when I was 15 (yeah I can only guess how this would sound in a country where you're legally prohibited from alcohol consumption until age 21). I felt that my future would go down the wrong path if I continued drinking with my "friends", so I used a temporary medical condition as alibi for the others as well as myself to never drink again. Seven years of not drinking at all followed, then I carefully started again in a civilized manner on fitting occasions. Alcohol is a social lubricant that's just way too useful to not be exploited. So (un)fortunately I can't empathize with your opium problem from the experience of a full-blown addiction, but only from the experience of having little self-control in general.

I'd get in Nozick's machine for the wireheading. I figure it's likely enough that I'm in a simulation anyway, and his simulation can be better than my current one. I figure I'm atypical though.

Ivan_Tishchenko · 13y
Really? So you're ready to give up that easily? For me, the best moments in life are not those when I experience 'intense pleasure'. Life for me is, you know, in some way like playing a chess match. Or like creating some piece of art. The physical pleasure does not count as something memorable, because it's only a small dot in the picture. The process of drawing the picture, and the process of seeing how your decisions and plans are getting "implemented" in the physical world around me -- that's what counts, that's what makes me love life and want to live it. And from this POV, wireheading is simply not an option.

I got an experience machine in the basement that supplies you with loads and loads of that marvelously distinct feeling of "the process of painting the picture" and "seeing how your decisions and plans are getting implemented in a semi-physical world around you". Your actions will have a perfectly accurate impact on your surroundings and you will have loads of that feeling of control and importance that you presumably believe is so important for your happiness.

Now what?

barrkel · 13y
It's not about giving up. And it's also not about "intense pleasure". Video games can be very pleasurable to play, but that's because they challenge us and we overcome the challenges. What if the machine was reframed as reliving your life, but better tuned, so that bad luck had significantly less effect, and the life you lived rewarded your efforts more directly? I'd probably take that, and enjoy it too. If it was done right, I'd probably be a lot healthier mentally as well. I think the disgust at "wireheading" relies on some problematic assumptions: (1) that we're not already "wireheading", and (2) that "wireheading" would be a pathetic state somewhat like being strung out on heroin, or in an eternal masturbatory orgasm. But any real "wireheading" machine must directly challenge these things, otherwise it will not actually be a pleasurable experience (i.e. it would violate its own definition). As Friendly-HI mentions elsewhere, I think "wireheading" is being confusingly conflated with the experience machine, which seems to be a distinct concept. Wireheading as a simple analogue of the push-button-heroin-dose is not desirable, I think everyone would agree. When I mention "wireheading" above, I mean the experience machine; but I was just quoting the word you yourself used.
teageegeepea · 13y
I don't play chess or make art. I suppose there's creativity in programming, but I've just been doing that for work rather than recreationally. Also, I agree with Friendly-HI that an experience machine could replicate those things.

This sounds to me like a word game. It depends on what the initial intention for 'pleasure' is. If you say the device gives 'maximal pleasure', meaning to point to a cloud of good-stuffs, and then later use a more precise meaning of 'pleasure' that is an incomplete model of the good-stuffs, you are talking about different things.

The meaningful thought experiment for me is whether I would use a box that maximized pleasure\wanting\desire\happiness\whatever-is-going-on-at-the-best-moments-of-life while completely separating me as an actor or participant fr... (read more)

We have an experience machine at our disposal if we could figure out the API. Ever have a lucid dream?

Indeed, it appears that mammals can come to want something that they have never before experienced pleasure when getting.

Duh - otherwise sexual reproduction in mammals would be a non-starter.

While the article shows with neat scientific references that it is possible to want something that we don't end up liking, this is irrelevant to the problem of value in ethics, or in AI. You could as well say, without any scientific studies, that a child may want to put their hand in the fire and end up not liking the experience. It is entirely possible to want something by mistake. But it is not possible to like something by mistake, as far as I know. Unlike wanting, "liking" is valuable in itself.

Wanting is a bad thing according to Epicurus... (read more)

When people say "pleasure" in this context, they usually just mean to refer to whatever it is that the human brain likes internally. To then say that people don't just like pleasure - since they also like happiness/bliss - or whatever - seems to be rather missing the point.

As for the rather distinct claim that people want external success, not just internal hedonistic pleasure signals - that seems to depend on the person under consideration. Few want their pleasure to end with them being fired, and running out of drug money (though we do still ... (read more)

Disclaimer: I don't think that maximizing pleasure is an FAI solution; however, I didn't find your arguments against it convincing.

With regards to the experience machine, further study has found that people's responses are generally due to status quo bias; a more recent study found that a slight majority of people would prefer to remain in the simulation.

With regards to the distinction between desire and pleasure: well, yes, but you seem to be just assuming that our desires are what ought to be satisfied/maximized instead of pleasure; I would assume that m... (read more)

Ivan_Tishchenko · 13y
I believe lukeprog was talking about what people think before they get wireheaded. It's very probable that once one gets hooked up to that machine, one changes one's mind -- based on the new experience. It's certainly true for rats, which could not stop hitting the 'pleasure' button and died of starvation. This is also why people have that status quo bias -- no one wants to die of starvation, even with a 'pleasure' button.
teageegeepea · 13y
Isn't there a rule of Bayesianism that you shouldn't be able to anticipate changing your mind in a predictable manner, but rather you should just update right now? Perhaps rather than asking will you enter or leave the simulation it might be better to start with a person inside it, remove them from it, and then ask them if they want to go back.
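The rule being gestured at here is conservation of expected evidence, a standard identity of probability theory (stated in general form below; nothing in it is specific to the wireheading case):

```latex
% Conservation of expected evidence: your current credence in H already
% equals the probability-weighted average of your possible posteriors.
\mathbb{E}_{e}\big[\,P(H \mid e)\,\big]
  \;=\; \sum_{e} P(e)\,P(H \mid e)
  \;=\; \sum_{e} P(H, e)
  \;=\; P(H)
```

So a Bayesian cannot expect her beliefs to move in a predictable direction given anticipated evidence. As the reply below notes, expecting your preferences or experiences to change after an intervention like wireheading is a different matter.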
Vaniver · 13y
Changing your mind based on evidence and changing it based on new experiences are different things. I am confident that if I eat a meal, my hunger will decrease. Does that mean I should update my hunger downward now, without eating? I can believe "If I wireheaded I would want to continue wireheading" and "I currently don't want to wirehead" without contradiction and without much pressure to want to wirehead.
AmagicalFishy · 13y
One's hunger isn't really an idea of the mind that one can change, yeah? I'd say that "changing your mind" (at least regarding particular ideas and beliefs) is different than "changing a body's immediate reaction to a physical state" (like lacking nourishment: hunger).
Will_Sawin · 13y
If you conducted brain surgery on me I might want different things. I should not want those things now - indeed, I could not, since there are multiple possible surgeries. "Wireheading" explicitly refers to a type of brain surgery, involving sticking wires in one's head. Some versions of it may not be surgical, but the point stands.
barrkel · 13y
I think we're talking about an experience machine, not a pleasure button.
Zetetic · 13y
It was my understanding that the hypothetical scenario ruled this out (hence the abnormally long lifespan). In any event, an FAI would want to maximize its utility, so if its utility were made contingent on the amount of pleasure going on, it seems probable that it would want to make as many humans as possible and make them live as long as possible in a wirehead simulation.

Intuitively, this feels true. I rarely do things based on how much pleasure they bring me. Some of my decisions are indirectly linked to future pleasure, or other people's pleasure, e.g. choosing to work 6 am shifts instead of sleeping in because then I won't be poor, or doing things I don't really want to do but said I would do because other people are relying on me and their plans will be messed up if I don't, and I wouldn't want them to do that to me... Actually, when I think about it, an awful lot of my actions have more to do with other people's pleasure than with my own, something which the pleasure machine doesn't fulfill. In fact, I would worry that a pleasure machine would distract me from helping others.

I feel like I am missing something. You separated pleasure from wanting.

I don't see how this backs up your point though. Unless the machine offered is a desire-fulfilling machine and not a pleasure machine.

If it is a pleasure machine, giving pleasure regardless of the state of wanting, why would we turn it down? You said we usually want more than just pleasure, because getting what we want doesn't always give us pleasure. If wanting and pleasure are different, then of course this makes sense.

But saying we want more than pleasure? That doesn't make sense. Y... (read more)

ArisKatsaris · 13y
Where is the point of your confusion? Why do you assume people only want pleasure? If you give me a choice between living a perfectly pleasurable life for a hundred years, but the whole humankind dies horribly afterwards, and living an average life but the rest of humankind keeps surviving and progressing indefinitely -- I WANT THE SURVIVAL OF MANKIND. That's because I don't want just pleasure. I want more than pleasure. No, even with perfect and certain knowledge, we would want more than pleasure. What's the hard thing to understand about that? We are built to want more than a particular internal state of our own minds. Most of us aren't naturally built for solipsism. Like e.g. a machine that kills a man's children, but gives him pleasure by falsely telling him they are living happily ever after and erasing any memories to the contrary. In full knowledge of this, he doesn't want that. I wouldn't want that. Few people would want that. Most of us aren't built for solipsism.
tyrsius · 13y
You are using a quite twisted definition of pleasure to make your argument. For most of us, the end of mankind causes great displeasure. This should factor into your equation. It's also not part of Luke's original offer. If you gave me that option I would not take it, because it would be a lie that I would receive pleasure from the end of mankind. Killing a man's children has the same problem. Why, to argue against me, do you have to bring murder or death into the picture? Luke's original question has no such downsides, and introducing them changes the equation. Stop moving the goalposts. Luke's article clearly separates want from pleasure, but you seem attached to "wanting." You think you want more than pleasure, but what else is there? I believe that if you consider any answer you might give to that question, the reason will be that those things cause pleasure (including the thought "mankind will survive and progress"). I am interested in your answers nonetheless.
ArisKatsaris · 13y
Consider the package deal to include getting your brain rewired so that you would receive pleasure from the end of mankind. Now do you choose the package deal? I wouldn't. Can you explain to me why I wouldn't, if you believe the only thing I can want is pleasure? Giving additional examples, based on the same principle, isn't "moving the goalposts". Because the survival of your children and the community is the foremost example of a common value that's usually placed higher than personal pleasure. Knowledge, memory, and understanding. Personal and collective achievement. Honour. Other people's pleasure. As an automated process we receive pleasure when we get what we want, that doesn't mean that we want those things because of the pleasure. At the conscious level we self-evidently don't want them because of the pleasure, or we'd all be willing to sacrifice all of mankind if they promised to wirehead us first.

Consider the package deal to include getting your brain rewired so that you would receive pleasure from the end of mankind. Now do you choose the package deal?

I wouldn't. Can you explain to me why I wouldn't, if you believe the only thing I can want is pleasure?

Maybe you're hyperbolically discounting that future pleasure and it's outweighed by the temporary displeasure caused by agreeing to something abhorrent? ;)

Ghatanathoah · 11y
I think that if an FAI scanned ArisKatsaris' brain, extrapolated values from that, and then was instructed to extrapolate what a non-hyperbolically-discounting ArisKatsaris would choose, it would answer that ArisKatsaris would not choose to get rewired to receive pleasure from the end of mankind. Of course, there's no way to test such a hypothesis.
Amanojack · 13y
Plus we have a hard time conceiving of what it would be like to always be in a state of maximal, beyond-orgasmic pleasure. When I imagine it I cannot help but let a little bit of revulsion, fear, and emptiness creep into the feeling - which of course would not actually be there. This invalidates the whole thought experiment for me, because it's clear I'm unable to perform it correctly, and I doubt I'm uncommon in that regard.
Hul-Gil · 13y
No, but that's because I value other people's pleasure as well. It is important to me to maximize all pleasure, not just my own.
Alicorn · 13y
What if everybody got the rewiring?
Hul-Gil · 13y
How would that work? It can't be the end of mankind if everyone is alive and rewired!
Alicorn · 13y
They get five minutes to pleasedly contemplate their demise first, perhaps.
Hul-Gil · 13y
I think there would be more overall pleasure if mankind continued on its merry way. It might be possible to wirehead the entire human population for the rest of the universe's lifespan, for instance; any scenario which ends the human race would necessarily have less pleasure than that. But would I want the entire human race to be wireheaded against their will? No... I don't think so. It's not the worst fate I can think of, and I wouldn't say it's a bad result; but it seems sub-optimal. I value pleasure, but I also care about how we get it - even I would not want to be just a wirehead, but rather a wirehead who writes and explores and interacts. Does this mean I value things other than pleasure, if I think it is the Holy Grail but it matters how it is attained? I'm not certain. I suppose I'd say my values can be reduced to pleasure first and freedom second, so that a scenario in which everyone can choose how to obtain their pleasure is better than a scenario in which everyone obtains a forced pleasure, but the latter is better than a scenario in which everyone is free but most are not pleasured. I'm not certain if my freedom-valuing is necessary or just a relic, though. At least it (hopefully) protects against moral error by letting others choose their own paths.
CG_Morton · 13y
The high value you place on freedom may be because, in the past, freedom has tended to lead to pleasure. The idea that people are better suited to choosing how to obtain their pleasure makes sense to us now, because people usually know how best to achieve their own subjective pleasure, whereas forced pleasures often aren't that great. But by the time wireheading technology comes around, we'll probably know enough about neurology and psychology that such problems no longer exist, and a computer could well be trusted to tell you what you would most enjoy more accurately than your own expectations could. I agree with the intuition that most people value freedom, and so would prefer a free pleasure over a forced one if the amount of pleasure was the same. But I think that it's a situational intuition, that may not hold in the future. (And is a value really a value if it's situational?)
tyrsius · 13y
All of your other examples are pleasure causing. Don't you notice that? Again, getting my brain rewired is not in the original question. I would decline getting my brain rewired; that seems like carte blanche for a lot of things that I cannot predict. I would decline. Survival of the community and children, knowledge, and understanding all bring me pleasure. I think if those things caused me pain, I would fight them. In fact, I think I have good evidence for this. When cultures have a painful response to the survival of OTHER cultures, they go to war. When people see pain for "enemies" they do not sympathize. When it is something you self-identify with, your own culture, only then does it cause pleasure. Those things you cite are valued because they cause pleasure. I don't see any evidence that when those things cause pain, that they are still pursued. @CuSithBell: I agree. --Sorry, I don't know how to get the quote blocks, or I would respond more directly.
ArisKatsaris · 13y
No, they cause pleasure because they're valued.

  • You are arguing that we seek things in accordance with, and in proportion to, the pleasure anticipated in achieving them. (Please correct me if I'm getting you wrong.)

  • I'm arguing that we can want stuff without anticipation of pleasure being necessary, and we can fail to want stuff where there is anticipation of pleasure.

How shall we distinguish between the two scenarios? What are our anticipations for the world if your hypothesis is true vs. if mine is true?

Here's a test. I think that if your scenario held, everyone would be willing to rewire their brains to get more pleasure from things they don't currently want, because then there'd be more anticipated pleasure. This doesn't seem to hold -- though we'll only know for sure when the technology actually becomes available.

Here's another test. I think that if my scenario holds, some atheists just before their anticipated deaths would still leave property to their offspring or to charities, instead of spending it all on prostitutes and recreational drugs in an attempt to cram in as much pleasure as possible before their death.

So I think the tests validate my position. Do you have some different tests in mind?
tyrsius · 13y
Your argument isn't making any sense. Whether they are valued because they cause pleasure, or cause pleasure because they are valued, makes no difference. Either way, they cause pleasure. Your argument is that we value them even though they don't cause pleasure. You are trying to say there is something other than pleasure, yet you concede that all of your examples cause pleasure. For your argument to work, we need to seek something that does not cause pleasure. I asked you to name a few, and you named "Knowledge, memory, and understanding. Personal and collective achievement. Honour. Other people's pleasure." Then in your next post, you say "they cause pleasure because they're valued." That is exactly my point. There is nothing we seek that we don't expect to derive pleasure from. I don't think your tests validate your position. The thought of leaving their belongings to others will cause pleasure. Many expect that pleasure to be deeper or more meaningful than prostitutes, and would therefore agree with your test while still holding to my position that people will seek the greatest expected pleasure. I would set the standard at a Matrix-quality reality machine to accept lukeprog's offer. An orgasmium would not suffice, as I expect it would fail to live up to its promise. Wireheading would not work. Double Edit: to add a piece, then fix the order it got put in. Edit Again: Apologies, I confused this response with one below. Edited to remove confusion.
ArisKatsaris · 13y
If I were debating the structure of the atom, I could say that "there's more to atoms than their protons", and yet I would 'concede' that all atoms do contain protons. Or I'd say "there's more to protons than just their mass" (they also have an electric charge), but all protons do have mass. Why are you finding this hard to understand? Why would I need to discover an atom without protons or a proton without mass for me to believe that there's more to atoms than protons (there are also electrons and neutrons) or more to protons than their mass? You had made much stronger statements than that -- you said "You think you want more than pleasure, but what else is there?" You also said "But saying we want more than pleasure? That doesn't make sense." Every atom may contain protons, but atoms are more than protons. Every object of our desire may contain pleasure in its fulfillment, but the object of our desire is more than pleasure. Does this analogy help you understand how your argument is faulty?
tyrsius · 13y
No, it doesn't. I understand your analogy (parts vs the whole), but I do not understand how it relates to my point. I am sorry. Is pleasure the proton in the analogy? Is the atom what we want? I don't follow here. You are also making the argument that we want things that don't cause pleasure. Shouldn't this be, in your analogy, an atom without a proton? In that case yes, you need to find an atom without a proton before I will believe there is an atom without a proton. (This same argument works if pleasure is any of the other atomic properties. Charge, mass, etc). Or is pleasure the atom? If that is the case, then I can't see where you argument is going. If pleasure is the atom, then your analogy supports my argument. I am not trying to make a straw man, I genuinely don't see the connections.
Ghatanathoah · 11y
ArisKatsaris' analogy is:

1. The reasons we want things are atoms.
2. Pleasure is protons.
3. Atoms have more components than protons.
4. Similarly, we want things for reasons other than the pleasure they give us.
5. Even if we feel pleasure every time one of our desires is satisfied, that doesn't mean pleasure is the only reason we have those desires. Similarly, even if an atom always has protons, that doesn't mean it doesn't also have other components.

ArisKatsaris should have picked electrons instead of protons; it would make the analogy a little less confusing. Desires without pleasure are like atoms without electrons. These are called "positive ions" and are not totally uncommon.

It personally seems obvious to me that we want things other than pleasure. For instance, I occasionally read books that I hate and am miserable reading because they are part of a series, and I want to complete the series. That's what I want, and I don't care if there's less pleasure in the universe because of my actions.
ArisKatsaris · 13y
After you click "Reply", you can click on "Help" at the bottom right of the textbox and see the available formatting options. To add quotes you just need to place a "> " at the beginning of a line.

Oddly, those two arguments end up cancelling out for me.

You explained how pleasure from our natural environment "caps out" past a certain threshold - I can't eat infinity sugar and derive infinity pleasure. So, obviously, my instinctive evaluation is that if I get wire-headed, I'll eventually get sick of it and want something else!

Equally, your research shows that we're not always perfect at evaluating what we want. Therefore, I'd have an instinctive aversion to wire-heading because I might have guessed wrong, and it's obviously very difficult to ... (read more)

Friendly-HI · 13y
"You explained how pleasure from our natural environment "caps out" past a certain threshold - I can't eat infinity sugar and derive infinity pleasure. So, obviously, my instinctive evaluation is that if I get wire-headed, I'll eventually get sick of it and want something else!" I think you're lumping the concept of wireheading and the "experience machine" into one here. Wireheading basically consists of you pushing down a button because you want to, not because you like to do it. It would basically feel like you're a heroin junkie, but instead of needles it's pressing buttons for you. The experience machine on the other hand is basically a completely immersive virtual reality that satisfies all your desires in any way necessary to make you happy. It's not required that you'll be in an orgasmic state all the time... as you said yourself, you may get bored with that (I just think of that poor woman with a disorder, that makes her orgasm every few minutes and she apparently doesn't like it at all). In the experience machine scenario, you would never get bored - if you desire some form of variety in your "perfect experience" and would be unhappy without it, then the machine would make everything to make you happy nontheless. The point of the machine is that it gives you whatever you desire in just the right amounts to max you out on pleasure and happiness, whatever means necessary and regardless of how convoluted and complex the means may have to be. So if you're hooked up to the machine, you feel happy no matter what. The point is that your pleasure doesn't build on achievements in the real world and that there may perhaps be other meaningful things you may desire apart from pleasure. As we've seen from luke, there appear to be at least two other human desires next to pleasure - namely "wanting" and "learning". But if the machine is capable of conjuring up any means of making me happy, then it perhaps would have to throw a bit of wanting and learning into the mix
Friendly-HI · 13y
By the way, I'm beginning to think that the experience machine would be a really sweet deal and I may take it if it were offered to me. Sure, my happiness wouldn't be justified by my real-world achievements, but so what? What's so special about "real" achievements? Feeling momentarily happier because I gain money, social status and get laid... sure there's some pride and appeal in knowing I've earned these things due to my whatever, but in what kind of transcendent way are these things really achievements or meaningful? My answer would be that they aren't meaningful in any important way; they are simply primitive behaviors based on my animalistic nature and the fact that my genes fell out of the treetops yesterday. I struggle to see any worthwhile meaning in these real "achievements". They can make you feel good and they can make you feel miserable, but at the end of the day they are perfectly transparent apeish behaviors based on reproductive urges which I simply can't outgrow because of my hardwired nature.

The only meaningful activity that would be worth leaving my experience machine for would be to tackle existential risks... just so that I can get back to my virtual world and enjoy it "indefinitely". Personally though, I have the feeling that it would still be a lot cleverer to redesign my own brain from the ground up to make it impervious to any kind of emotional trauma or feelings of hurt, and to make it run entirely on a streamlined and perfectly rational "pleasure priority hierarchy". No pain, all fun, and still living in the real world - perhaps with occasional trips into virtual reality to spice things up.

But I find it really hard to imagine how I could still value human life if I measured everything on a scale of happiness and entirely lacked the dimension of pain. Can one still feel the equivalent of compassion without pain? It's hard to see myself having fun at the funeral of my parents. Less fun than if they were still alive of course, but it w
Hul-Gil · 13y
Well, I think you could still feel compassion, or something like it (without the sympathy, maybe; just concern) - even while happy, I wouldn't want someone else to be unhappy. But on the other hand, it does seem like there's a connection, just because of how our brains are wired. You need to be able to at least imagine unhappiness for empathy, I suppose. I read an article about a man with brain damage, and it seems relevant to this situation. Apparently, an accident left him with damage to a certain part of his brain, and it resulted in the loss of unhappy emotions. He would constantly experience mild euphoria. It seems like a good deal, but his mother told a story about visiting him in the hospital; his sister had died in the meantime, and when she told him, he paused for a second, said something along the lines of "oh" or "shame"... then went back to cracking jokes. She was quoted as saying he "didn't seem like her son any more." I've always felt the same way that you do, however. I would very much like to redesign myself to be pain-free and pleasure-maximized. One of the first objections I hear to this is "but pain is useful, because it lets you know when you're being damaged." Okay - then we'll simply have a "damage indicator", and leave the "pull back from hot object" reflex alone. Similarly, I think concerns about compassion could be solved (or at least mitigated) by equipping ourselves with an "off" switch for the happiness - at the funeral, we allow ourselves sadness... then when the grief becomes unbearable, it's back to euphoria.
Friendly-HI · 13y
Very good real world example about the guy with brain damage! Interesting case, any chance of finding this story online? A quick and dirty google search on my part didn't turn up anything. Also, nice idea with the switch. I fully acknowledge, that there are some situations when I somehow have the need to feel pain - funerals being one occasion. Your idea with the switch would be brilliantly simple. Unfortunately, my spider-senses tell me the redesigning part itself will be anything but. Case studies of brain damage are pure gold when it comes to figuring out "what would happen to me if I remove/augment my brain in such and such a way".
Hul-Gil · 13y
I was about to come back (actually on my way to the computer) and regretfully inform you that I had no idea where I had seen it... but then a key phrase came back to me, and voila! (I had the story a little wrong: it was a stroke that caused the damage, and it was a leukemia relapse the sister had.) The page has a lot of other interesting case studies involving the brain, as well. I need to give the whole site a re-browse... it's been quite a while since I've looked at it. I seem to remember it being like an atheism-oriented LessWrong.
Friendly-HI · 13y
Thank you very much for going through the trouble of finding all these case-studies! :) (For anyone else interested, I should remark these aren't the actual studies, but quick summaries within an atheistic context that is concerned with disproving the notion of a soul - but there are references to all the books within which these symptoms are described.) The Alien Hand Syndrome is always good for a serious head-scratching indeed.
3handoflixue13y
Exactly! My intuition was wrong; it's trained on an ancestral environment where that isn't true, so it irrationally rejects the experience machine as "obviously" suffering from the same flaw. Now that I'm aware of that irrationality, I can route around it and say that the experience machine actually sounds like a really sweet deal :)

We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.

IAWYC, but would like to hear more about why you think the last sentence is supported by the previous sentence. I don't see an easy argument from "X is a terminal value for many people" to "X should be promoted by the FAI." Are you supposing a sort of idealized desire fulfilment view about value? That's fine--it's a sensible enough view. I just wouldn't have thought it so obvious that it would be a good idea to go around invisibly assuming it.

Is there meaningful data on thalamic stimulators with erotic side-effects? (See entry #1 here: http://www.cracked.com/blog/5-disturbing-ways-the-human-body-will-evolve-in-the-future/ ). Cracked gives the addictive potential of an accidental orgasm switch plenty of attention while citing just two examples (it's a comedy site after all), but have other cases been studied? I'm not convinced this couldn't be done intentionally with current models of the brain.

Most people say they wouldn't choose the pleasure machine.

Well that was easy. In my (limited) experience most people making such claims do not really anticipate being pleasure-maximized, and thus can claim to want this without problems. It's only "real" ways of maximizing pleasure that they care about, so you need to find a "real" counterexample.

That said, I have less experience with such people than you, I guess, so I may be atypical in this regard.

Going for what you "want" is merely going for what you like the thought of. To like the thought of something is to like something (in this case the "something" that you like is the thought of something; a thought is also something). This means that wanting cannot happen unless there is liking that creates the wanting. So, of wanting and liking, liking is the only thing that can ever independently make us make any choice we make. Wanting which is not entirely contingent on liking never makes us make any decisions, because there is no suc... (read more)

0nshepperd13y
Isn't this just a way of saying that people like the thought of getting what they want? Indeed, it would be rather odd if expecting to get what we want made us unhappy. See also here, I guess.
0Uni13y
No, I didn't just try to say that "people like the thought of getting what they want". The title of the article says "not for the sake of pleasure alone". I tried to show that that is false. Everything we do, we do for pleasure alone, or to avoid or decrease suffering. We never make a decision based on a want that is not in turn based on a like/dislike. All "wants" are servile consequences of "likes"/"dislikes", so I think "wants" should be treated as mere transitional steps, not as initial causes of our decisions.
1nshepperd13y
You've just shown that wanting and liking go together, and asserted that one of them is more fundamental. Nothing which you have written appears to show that it's impossible or even unlikely that people try to get things they want (which sometimes include pleasure, and which sometimes include saving the world), and that successful planning just feels good.

And nevertheless, people still don't just optimize for pleasure, since they would take the drug mentioned, despite the fact that doing so is far less pleasurable than the alternative, even if the "pleasure involved in deciding to do so" is taken into account.

Sure, you can say that only the "pleasure involved in deciding" or "liking the thought of" is relevant, upon which your account of decision making reduces to (something about X) --> (I like the thought of X) --> (I take action X), where (I like the thought of X) would seem to be an unnecessary step where the same result would be obtained by eliminating it, and of course you still haven't looked inside the black box (something about X). Or you can suggest that people are just mistaken about how pleasurable the results will be of any action they take that doesn't maximise pleasure. But at that point you're trying to construct sensible preferences from a mind that appears to be wrong about almost everything including the blatantly obvious, and I have to wonder exactly what evidence in this mind points toward the "true" preferences being "maximal pleasure".
0Uni13y
I'm not trying to show that. I agree that people try to get things they want, as long as by "things they want" we mean "things that they are tempted to go for because the thought of going for those things is so pleasurable".

Why would you want to eliminate the pleasure involved in decision processes? Don't you feel pleasure has intrinsic value? If you eliminate pleasure from decision processes, why not eliminate it altogether from life, for the same reasons that made you consider pleasure "unnecessary" in decision processes?

This, I think, is one thing that makes many people so reluctant to accept the idea of human-level and super-human AI: they notice that many advocates of the AI revolution seem to want to ignore the subjective part of being human and seem interested merely in how to give machines the objective abilities of humans (i.e. abilities to manipulate the outer environment rather than "intangibles" like love and happiness). This seems as backward as spending your whole life earning millions of dollars, having no fun doing it, and never doing anything fun or good with the money. For most people, at first at least, the purpose of earning money is to increase pleasure. So should the purpose of building human-level or super-human AI.

If you start to think that step two (the pleasure) in decision processes is an unnecessary part of our decision processes and can be omitted, you are thinking like the money-hunter who has lost track of why money is important; by thinking that pleasure may as well be omitted in decision processes, you throw away the whole reason for having any decision processes at all. It's the second step (of your three steps above) - the step which is always "I like the thought of...", i.e. our striving to maximize pleasure - that determines our values and choices about whatever there is in the first step ("X" or "something about X", the thing we happen to like the thought of). So, to the extent that the first step ("something about X") is…
1nshepperd13y
You're missing the point, or perhaps I'm missing your point. A paperclip maximiser implemented by having the program experience subjective pleasure when considering an action that results in lots of paperclips, and which decides by taking the action with the highest associated subjective pleasure, is still a paperclip maximiser.

So, I think you're confusing levels. On the decision-making level, you can hypothesise that decisions are made by attaching a "pleasure" feeling to each option and taking the one with highest pleasure. Sure, fine. But this doesn't mean it's wrong for an option which predictably results in less physical pleasure later to feel less pleasurable during decision making. The decision system could have been implemented equally well by associating options with colors and picking the brightest or something, without meaning the agent is irrational to take an action that physically darkens the environment. This is just a way of implementing the algorithm, which is not about the brightness of the environment or the light levels observed by the agent.

This is what I mean by "(I like the thought of X) would seem to be an unnecessary step". The implementation is not particularly relevant to the values. Noticing that pleasure is there at a step in the decision process doesn't tell you what should feel pleasurable and what shouldn't, it just tells you a bit about the mechanisms.

Of course I believe that pleasure has intrinsic value. We value fun; pleasure can be fun. But I can't believe pleasure is the only thing with intrinsic value. We don't use Nozick's pleasure machine, we don't choose to be turned into orgasmium, we are willing to be hurt for higher benefits. I don't think any of those things are mistakes.
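To make the implementation-vs-values point concrete, here is a minimal hypothetical sketch (the names and numbers are invented for illustration): an agent that literally decides by picking the option whose thought "feels" most pleasant, where that pleasure signal is just a function of predicted paperclips, remains a paperclip maximizer rather than a pleasure maximizer.

```python
# Hypothetical toy model (not from the thread): a "pleasure-scored" decision
# loop whose pleasure signal tracks predicted paperclips. The loop is the
# implementation; the values are whatever the scoring function encodes.

def pleasure_of_thought(option):
    # Assumed toy rule: contemplating an option feels good in proportion
    # to the paperclips it is predicted to produce.
    return option["predicted_paperclips"] ** 0.5

def decide(options):
    # "Take the action whose thought feels most pleasant."
    return max(options, key=pleasure_of_thought)

options = [
    {"name": "build a paperclip factory", "predicted_paperclips": 10_000},
    {"name": "wirehead for maximal pleasure", "predicted_paperclips": 0},
]

print(decide(options)["name"])  # -> "build a paperclip factory"
```

Swapping the internal pleasure score for, say, the brightness of an associated color would change nothing about what this agent ends up optimizing.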

I notice that I'm a bit confused, especially when reading, "programming a machine superintelligence to maximize pleasure." What would this mean?

It also seems like some arguments are going on in the comments about the definition of "like", "pleasure", "desire" etc. I'm tempted to ask everyone to pull out the taboo game on these words here.

A helpful direction I see this article pointing toward, though, is how we personally evaluate an AI's behavior. Of course, by no means does an AI have to mimic human internal workings 1... (read more)

I've thought of a bit of intuition here, maybe someone will benefit by it or be kind enough to critique it;

Say you took two (sufficiently identical) copies of that person C1 and C2, and exposed C1 to the wirehead situation (by plugging them in) and showed C2 what was happening to C1.

It seems likely that C1 would want to remain in the situation and C2 would want to remove C1 from the wirehead device. This seems to be the case even if the wirehead machine doesn't raise dopamine levels very much and thus the user does not become dependent on it.

However, even ... (read more)

0MaoShan13y
Sensible, maybe, but pointless in my opinion. Once you have C1's approval, then any additional improvements (wouldn't C2 like to see what C3 would be like?) would be from C2's perspective, which naturally would be different from C1's perspective, and turtles all the way down. So it would be deceptive to C1 to present him with C2's results; if any incremental happiness were still possible, C2 would naturally harbor the same wish for improvement which caused C1 to accept it. All it would be doing would be shielding C1's virgin imagination from C5822.
0Zetetic13y
I'm not sure you're talking about the same thing I am, or maybe I'm just not following you? There is only C1 and C2. C2 serves as a grounding that checks to see if what it would pick given the experiences it went through is acceptable to C1's initial state. C1 would not have the "virgin imagination", it would be the one hooked up to the wirehead machine. Really I was thinking about the "Last Judge" idea from the CEV, which (as I understand it, but it is super vague so maybe I don't) basically somehow has someone peek at the solution given by the CEV and decide whether the outcome is acceptable from the outside.
0MaoShan13y
Aside from my accidental swapping of the terms (C1 as the judge, not C2), I still stand by my (unclear, possibly?) opinion. In the situation you are describing, the "judge" would never allow the agent to change beyond a very small distance that the judge is comfortable with, and additional checks would never be necessary, as it would only be logical that the judge's opinion would be the same every time that an improvement was considered. Whichever of the states that the judge finds acceptable the first time, should become the new base state for the judge. Similarly, in real life, you don't hold your preferences to the same standards that you had when you were five years old. The gradual improvements in cognition usually justify the risks of updating one's values, in my opinion.
-10[anonymous]13y

How about a machine that maximizes your own concept of pleasure and makes you believe that it is probably not a machine simulation (or thinks that machine simulation is an irrelevant argument)?

[This comment is no longer endorsed by its author]

Then they will blast you and the pleasure machine into deep space at near light-speed so that you could never be interfered with. Would you let them do this for you?

Most people say they wouldn't choose the pleasure machine.

Well, no wonder. The way the hypothetical scenario is presented evokes a giant array of ways it could go wrong.

What if the pleasure machine doesn't work? What if it fails after a month? What if it works for 100 years, followed by 1,000 years of loneliness and suffering?

Staying on Earth sounds a lot safer.

Suppose the person you are ask... (read more)

2Ghatanathoah12y
Please don't fight the hypothetical. I think it likely that the people Luke spoke with were intelligent people who knew that hypotheticals are supposed to test your values and priorities, and responded in the spirit of the question.

Many people become addicted to drugs, and end up using them nearly 100% of the time. That doesn't mean that's what they really want, it just means they don't have enough willpower to resist. How humans would behave if they encountered a pleasure machine is not a reliable guide to how they would want to behave on encountering it, in the same way that how humans behave when they encounter heroin is not a reliable guide to how they would want to behave when they encounter it. There are lots of regretful heroin users.

Wouldn't it be even better to constantly be feeling this bliss, but also still be mentally able to pursue non-pleasure-related goals? I might not mind engineering the human race to feel pleasure more easily, as long as we were still able to pursue other worthwhile goals.
1denisbider11y
Sorry for the late reply, I haven't checked this in a while.

Most components of our thought processes are subconscious. The hypothetical question you posed presses a LOT of subconscious buttons. It is largely impossible for most people, even intelligent ones, to take a hypothetical question at face value without being influenced by the subconscious effects of the way it's phrased. You can't fix a bad hypothetical question by asking people to not fight the hypothetical.

For example, who wants to spend an eternity isolated in space? That must be one of the worst fears for many people. How do you disentangle that from the question? That's like asking a kid if he wants candy while you're dressed up as a monster from his nightmares.

Because not all components of the heroin experience are pleasant.

I suppose, yes. Valuable X + valuable Y is strictly better than just valuable X.
1Ghatanathoah11y
When I heard that hypothetical I took the whole "launching you into space" thing as another way of saying "Assume for the sake of the argument that no outside force or person will ever break into the pleasure machine and kill you." I took the specific methodology (launching into space) to just be a way to add a little color to the thought experiment and make it a little more grounded in reality. To me, if a different method of preventing interference with the machine had been specified, such as burying the machine underground, or establishing a trust fund that hired security guards to protect it for the rest of your life, my answer wouldn't be any different. I suppose you are right that someone other than me might give the "space launch" details much more salience. As you yourself pointed out in your original post, modifying the experiment's parameters might change the results.

Although what I read in this thread makes me think that people might not gradually choose to use the machine all the time after all. Much regret probably comes from things like heroin preventing them from finding steady work, or risks of jailtime. But I think a lot of people also regret not accomplishing goals that heroin distracts them from. Many drug users, for instance, regret neglecting their friends and family.

I agree. I would think it terrific if people in the future are able to modify themselves to feel more intense and positive emotions and sensations, as long as doing so did not rob them of their will and desire to do things and pursue non-pleasure-related values. I don't see doing that as any different from taking an antidepressant, which is something I myself have done. There's no reason to think our default mood settings are optimal. I just think it would be bad if increasing our pleasure makes it harder to achieve our other values. I think you also imply here, if I am reading you correctly, that a form of wireheading that did not exclude non-pleasure experiences would be vast…
0A1987dM11y
In order to be happy (using present-me's definition of “happy”) I need to interact with other people. So there's no way for a holodeck to make me happy unless it includes other people.
0Ghatanathoah11y
I agree. Interacting with other people is one of the "non-pleasure-related values" that I was talking about (obviously interacting with other people brings me pleasure, but I'd still want to interact with others even if I had a drug that gave me the same amount of pleasure). So I wouldn't spend my life in a holodeck unless it was multiplayer. I think that during my discussion with denisbider at some point the conversation shifted from "holodeck" to "wireheading." I think that the present-you's definition of "happy" is closer to the present-me's definition of "satisfaction." I generally think of happiness as an emotion one feels, and satisfaction as the state where a large amount of your preferences are satisfied.
0A1987dM11y
Yes. (I think the standard way of distinguishing them is to call yours hedonic happiness and mine eudaimonic happiness, or something like that.)

The pleasure machine argument is flawed for a number of reasons:

1) It assumes that, despite having never been inside the pleasure machine, but having lots of experience of the world outside of it, you could make an unbiased decision about whether to enter the pleasure machine or not. It's like asking someone if he would move all his money from a bank he knows a lot about to a bank he knows basically nothing about and that is merely claimed to make him richer than his current bank. I'm sure that if someone would build a machine that, after I stepped into it... (read more)

1Perplexed13y
Why? Or rather, given that a powerful A.I. is to be built, why is it a bad idea to endow it with human-like values? The locally favored theory of friendly AI is (roughly speaking) that it must have human sympathies, that is, it must derive fulfillment from assisting humans in achieving their values. What flaws do you see in this approach to friendliness?
-1HoverHell13y
-
2nshepperd13y
What are the values you judge those as "wrong" by, if not human? Yes, it's a terrible idea to build an AI that's just a really intelligent/fast human, because humans have all sorts of biases, and bugs that are activated by having lots of power, that would prevent them from optimizing for the values we actually care about. Finding out what values we actually care about though, to implement them (directly, or indirectly through CEV-like programs) is definitely a task that's going to involve looking at human brains.
1Uni13y
Wrong compared to what? Compared to no sympathies at all? If that's what you mean, doesn't that imply that humans must be expected to make the world worse rather than better, whatever they try to do? Isn't that a rather counterproductive belief (assuming that you'd prefer that the world became a better place rather than not)?

AI with human sympathies would at least be based on something that is tested and found to work throughout the ages, namely the human being as a whole, with all its flaws and merits. If you try to build the same thing but without those traits that, now, seem to be "flaws", these "flaws" may later turn out to have been vital for the whole to work, in ways we may not now see. It may become possible, in the future, to fully and successfully replace them with things that are not flaws, but that may require more knowledge about the human being than we currently have, and we may not now have enough knowledge to be justified in even trying to do it.

Suppose I have a nervous disease that makes me kick uncontrollably with my right leg every once in a while, sometimes hurting people a bit. What's the best solution to that problem? To cut off my right leg? Not if my right leg is clearly more useful than harmful on average. But what if I'm also so dumb that I cannot see that my leg is actually more useful than harmful; what if I can mainly see the harm it does? That's what we are like if we think we should try to build a (superhuman) AI by equipping it with only the clearly "good" human traits and not those human traits that now appear to be (only) "flaws", prematurely thinking we know enough about how these "flaws" affect the overall survival chances of the being/species.

If it is possible to safely get rid of the "flaws" of humans, future superhuman AI will know how to do that far more safely than we do, and so we should not be too eager to do it already. There is very much to lose and very little to gain by impatiently trying to get everything perfect at once…
0HoverHell13y
-
0Uni13y
In trusting your own judgment, that building an AI based on how humans currently are would be a bad thing, you implicitly trust human nature, because you are a human and so presumably driven by human nature. This undermines your claim that a super-human AI that is "merely more of everything that it is to be human" would be a worse thing than a human.

Sure, humans with power often use their power to make other humans suffer, but power imbalances would not, by themselves, cause humans to suffer were human brains not such that they can very easily (even by pure mistake) be made to suffer. The main reason why humans suffer today is how the human brain is hardwired and the fact that there is not yet enough knowledge of how to hardwire it so that it becomes unable to suffer (and with no severe side-effects).

Suppose we build an AI that is "merely more of everything that it is to be human". Suppose this AI then takes total control over all humans, "simply because it can and because it has a human psyche and therefore is power-greedy". What would you do after that, if you were that AI? You would continue to develop, just like humans always have. Every step of your development from un-augmented human to super-human AI would be recorded and stored in your memory, so you could go through your own personal history and see what needs to be fixed in you to get rid of your serious flaws. And when you have achieved enough knowledge about yourself to do it, you would fix those flaws, since you'd still regard them as flaws (since you'd still be "merely more of everything that it is to be human" than you are now).

You might never get rid of all of your flaws, for nobody can know everything about himself, but that's not necessary for a predominantly happy future for humanity. Humans strive to get happier, rather than specifically to get happier by making others suffer. The fact that many humans are, so far, easily made to suffer as a consequence of (other) humans' striving for happiness…
0Collins13y
It seems like there's a lot of confusion from the semantic side of things. There are a lot of not-unreasonable definitions of words like "wanting", "liking", "pleasure", and the like that carve the concepts up differently and have different implications for our relationship to pleasure. If they were more precisely defined at the beginning, one might say we were discovering something about them. But it seems more helpful to say that the brain chemistry suggests a good way of defining the terms (to correlate with pleasant experience, dopamine levels, etc), at which point questions of whether we just want pleasure become straightforward.
-2HoverHell13y
-
0[anonymous]13y
Well, congratulations on realizing that “wanting” and “liking” are different.

In the last decade, neuroscience has confirmed what intuition could only suggest: that we desire more than pleasure. We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.

Either this conclusion contradicts the whole point of the article, or I don't understand what is meant by the various terms "desire", "want", "pleasure", etc. If pleasure is "that which we like", then yes we can solve FAI by programming an AI to maximize pleasure.

The mis... (read more)

3nshepperd12y
"Desire" denotes your utility function (things you want). "Pleasure" denotes subjectively nice-feeling experiences. These are not necessarily the same thing. There's nothing superstitious about caring about stuff other than your own mental state.
-3koning_robot12y
Indeed they are not necessarily the same thing, which is why my utility function should not value that which I "want" but that which I "like"! The top-level post all but concludes this. The conclusion the author draws just does not follow from what came before. The correct conclusion is that we may still be able to "just" program an AI to maximize pleasure. What we "want" may be complex, but what we "like" may be simple. In fact, that would be better than programming an AI to make the world into what we "want" but not necessarily "like". If you mean that others' mental states matter equally much, then I agree (but this distracts from the point of the experience machine hypothetical). Anything else couldn't possibly matter.
5nshepperd12y
Why's that?
0koning_robot12y
A priori, nothing matters. But sentient beings cannot help but make value judgements regarding some of their mental states. This is why the quality of mental states matters. Wanting something out there in the world to be some way, regardless of whether anyone will ever actually experience it, is different. A want is a proposition about reality whose apparent falsehood makes you feel bad. Why should we care about arbitrary propositions being true or false?
2DaFranker12y
You haven't read or paid much attention to the metaethics sequence yet, have you? Or do you simply disagree with pretty much all the major points of the first half of it? Also relevant: Joy in the merely real
0koning_robot12y
I remember starting it, and putting it away because yes, I disagreed with so many things. Especially the present subject; I couldn't find any arguments for the insistence on placating wants rather than improving experience. I'll read it in full next week.
1DaFranker12y
An unsupported strong claim. Dozens of implications and necessary conditions in evolutionary psychology if the claim is assumed true. No justification. No arguments. Only one or two weak points looked up by the claim's proponent. I think you may be confusing labels and concepts.

Maximizing hedonistic mental states means, to the best of my knowledge, programming a hedonistic imperative directly into DNA for a full-maximal state constantly from birth, regardless of conditions or situations, and then stacking up humans as much as possible to have as many of them as possible feeling as good as possible. If any of the humans move, they could prove to be a danger to the efficient operation of this system, and letting them move thus becomes a net negative. It follows that in the process of optimization all human mobility should be removed, considering that for a superintelligence removing limbs and any sort of mobility from "human" DNA is probably trivial.

But since they're all feeling the best they could possibly feel, then it's all good, right? It's what they like (having been programmed to like it), so that's the ideal world, right?

Edit: See Wireheading for a more detailed explanation and context of the possible result of a happiness-maximizer.
0koning_robot12y
This comment has justification. I don't see how this would affect evolutionary psychology. I'm not sure if I'm parsing your last sentence here correctly; I didn't "look up" anything, and I don't know what the weak points are. Assuming that the scenario you paint is plausible and the optimal way to get there, then yeah, that's where we should be headed. One of the explicit truths of your scenario is that "they're all feeling the best they could possibly feel". But your scenario is a bad intuition pump. You deliberately constructed this scenario so as to manipulate me into judging what the inhabitants experience as less than that, appealing to some superstitious notion of true/pure/honest/all-natural pleasure. You may be onto something when you say I might be confusing labels and concepts, but I am not saying that the label "pleasure" refers to something simple. I am only saying that the quality of mental states is the only thing we should care about (note the word should, I'm not saying that is currently the way things are).
2DaFranker12y
No. I deliberately re-used a construct similar to Wireheading theories to expose more easily that many people disagree with this. There's no superstition of "true/pure/honest/all-natural pleasure" in my model - right now, my current brain feels extreme anti-hedons towards the idea of living in Wirehead Land. Right now, and to my best reasonable extrapolation, I and any future version of "myself" will hate and disapprove of wireheading, and would keep doing so even once wireheaded, if not for the fact that the wireheading necessarily overrides this in order to achieve maximum happiness by re-wiring the user to value wireheading and nothing else.

The "weak points" I spoke of are that you consider some "weaknesses" of your position, namely others' mental states, but those are not the weakest points of your position, nor are you using the strongest "enemy" arguments to judge your own position, and the other pieces of data also indicate that there's mind-killing going on.

The quality of mental states is presumably the only thing we should care about - my model also points towards "that" (same label, probably not same referent). The thing is, that phrase is so open to interpretation (What's "should"? What's "quality"? How meta do the mental states go about analyzing themselves and future/past mental states, and does the quality of a mental state take into account the bound-to-reality factor of future qualitative mental states? etc.) that it's almost an applause light.
-4koning_robot12y
Yes, but they disagree because what they want is not the same as what they would like. The value of others' mental states is not a weakness of my position; I just considered them irrelevant for the purposes of the experience machine thought experiment. The fact that hooking up to the machine would take away resources that could be used to help others weighs against hooking up. I am not necessarily in favor of wireheading. I am not aware of weaknesses of my position, nor in what way I am mind-killing. Can you tell me? Yes! So why is nobody applauding? Because they disagree with some part of it. However, the part they disagree with is not what the referent of "pleasure" is, or what kind of elaborate outside-world engineering is needed to bring it about (which has instrumental value on my view), but the part where I say that the only terminal value is in mental states that you cannot help but value. The burden of proof isn't actually on my side. A priori, nothing has value. I've argued that the quality of mental states has (terminal) value. Why should we also go to any length to placate desires?
2thomblake12y
To a rationalist, the "burden of proof" is always on one's own side.
7Manfred12y
Hm, a bit over-condensed. More like the burden of proof is on yourself, to yourself. Once you have satisfied that, argument should be an exercise in communication, not rhetoric.
2wedrifid12y
Agree completely. This would seem to depend on the instrumental goal motivating the argument.
2Vladimir_Nesov12y
Saying a word with emphasis doesn't clarify its meaning or motivate the relevance of what it's intended to refer to. There are many senses in which doing something may be motivated: there is wanting (System 1 urge to do something), planning (System 2 disposition to do something), liking (positive System 1 response to an event) and approving (System 2 evaluation of an event). It's not even clear what each of these means, and these distinctions don't automatically help with deciding what to actually do. To make matters even more complicated, there is also evolution with its own tendencies that don't quite match those of people it designed. See Approving reinforces low-effort behaviors, The Blue-Minimizing Robot, Urges vs. Goals: The analogy to anticipation and belief.
1koning_robot12y
I accept this objection; I cannot describe in physical terms what "pleasure" refers to.
0chaosmosis12y
I think I understand what koning_robot was going for here, but I can't approach it except through a description. This description elicits a very real moral and emotional reaction within me, but I can't describe or pin down what exactly is wrong with it. Despite that, I still don't like it.

So, some of the dystopian Fun Worlds that I imagine are rooms where non-AI lifeforms have no intelligence of their own anymore, as it was not needed. These lifeforms are incredibly simple and are little more than dopamine receptors (I'm not up to date on the neuroscience of pleasure; I remember it's not really dopamine, but I am not sure what the chemical(s) that correspond to happiness are). The lifeforms are all identical and interchangeable. They do not sing or dance. Yet they are extremely happy, in a chemical sort of sense. Still, I would not like to be one.

Values are worth acting on, even if we don't understand them exactly, so long as we understand in a general sense what they tell us. That future would suck horribly and I don't want it to happen.

While humans may not be maximizing pleasure, they are certainly maximizing some utility function which can be characterized. Your FAI can then be programmed to optimize this function, capturing human concerns.

0xelxebar13y
You might be interested in the Allais paradox, which is an example of humans in fact demonstrating behavior that doesn't maximize any utility function. If you're aware of the Von Neumann-Morgenstern characterization of utility functions, this becomes clearer than it would be from just knowing what a utility function is.
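For concreteness, here is a small sketch of the inconsistency the Allais paradox exposes, using the standard form of the gambles (prizes in millions of dollars): the usual majority preferences (1A over 1B, and 2B over 2A) cannot both be reproduced by any single utility function over the prizes.

```python
import random

# Standard Allais gambles as (probability, prize-in-millions) lotteries.
gamble_1A = [(1.00, 1)]
gamble_1B = [(0.89, 1), (0.10, 5), (0.01, 0)]
gamble_2A = [(0.11, 1), (0.89, 0)]
gamble_2B = [(0.10, 5), (0.90, 0)]

def expected_utility(gamble, u):
    return sum(p * u[prize] for p, prize in gamble)

# With u(0) normalized to 0, the majority pattern would require both
#   0.11 * u(1) > 0.10 * u(5)   (from preferring 1A to 1B)
#   0.10 * u(5) > 0.11 * u(1)   (from preferring 2B to 2A),
# which is a contradiction. A random search illustrates the same thing.
found = False
for _ in range(100_000):
    u = {0: 0.0, 1: random.random(), 5: random.random()}
    if (expected_utility(gamble_1A, u) > expected_utility(gamble_1B, u)
            and expected_utility(gamble_2B, u) > expected_utility(gamble_2A, u)):
        found = True

print(found)  # -> False for every sampled utility assignment
```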
0voi611y
Sorry to respond to this 2 years late. I'm aware of the paradox and the VNM theorem. Just because humans are inconsistent/irrational doesn't mean they aren't maximizing a utility function, however. Firstly, you can have a utility function and just be bad at maximizing it (and yes, this contradicts the rigorous mathematical definitions which we all know and love, but we both know how English doesn't always bend to their will, and we both know what I mean when I say this without having to be pedantic, because we are such gentlemen).

Secondly, if you consider each subsequent dollar you attain to be less valuable, this makes perfect sense, and this is applied in tournament poker, where taking a 50:50 chance of either going broke or doubling your stack is considered a terrible play, because the former outcome guarantees you lose your entire entry fee but the latter gives you an expected winning value that is less than your entry fee. This can be seen with a simple calculation, or by just noting that if everyone plays aggressively like this I can do nothing and make it into the prize pool, because the other players will simply eliminate each other faster than the blinds will eat away at my own stack. But I digress.

Let's cut to the chase here. You can do what you want, but you can't choose your wants. Along the same lines, a straight man, no matter how intelligent he becomes, will still find women arousing. An AI can be designed to have the motives of a selfless benevolent human (the so-called Artificial Gandhi Intelligence) and this will be enough. Ultimately humans want to be satisfied, and if it's not in their nature to be permanently so, then they will concede to changing their nature with FAI-developed science.
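A quick numerical sketch of the diminishing-marginal-value point (toy numbers; a square-root utility is just an assumed stand-in for any strictly concave valuation of chips): the 50:50 double-or-bust gamble keeps the same expected chip count but loses expected utility.

```python
import math

def u(chips):
    # Assumed toy utility: any strictly concave, increasing function of
    # chips makes each additional chip worth less than the previous one.
    return math.sqrt(chips)

stack = 1000
stand_pat = u(stack)                         # ~31.6
coin_flip = 0.5 * u(2 * stack) + 0.5 * u(0)  # ~22.4, same expected chip count

print(stand_pat > coin_flip)  # -> True: the double-or-bust flip loses utility
```

This captures only the "each subsequent dollar is less valuable" part of the argument; the full tournament case also depends on the payout structure.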
0CuSithBell13y
That's not exactly true. The Allais paradox does help to demonstrate why explicit utility functions are a poor way to model human behavior, though.