If it were a total-utility-maximizing AI, it would clone the utility monster (or start cloning everyone else if the utility monster's utility is superlinear). edit: On the other hand, if it were an average-utility-maximizing AI, it would kill everyone else, leaving just the utility monster. In either case there'd be some serious population 'adjustment'.
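(A toy illustration of that claim, with made-up numbers rather than anything from the comic: give one agent a per-capita utility that dwarfs everyone else's, and a total maximizer always gains by adding monster clones, while an average maximizer always gains by deleting ordinary people.)

```python
# Hypothetical numbers only: one monster worth 1000 "utils", humans worth 1 each.
u_monster, u_human = 1000.0, 1.0

def total(monsters, humans):
    return monsters * u_monster + humans * u_human

def average(monsters, humans):
    return total(monsters, humans) / (monsters + humans)

print(total(1, 1_000_000), total(2, 1_000_000))   # total utility rises with every extra monster clone
print(average(1, 1_000_000), average(1, 0))       # average utility rises as ordinary humans are removed
```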
It doesn't have to tell the monster. (This, by the way, is one wireheading-related issue; I do rather hate the lingo here, though; calling it wireheading makes it sound like there isn't a couple of thousand years of moral philosophy about this issue and related ones.)
3Emile11y
I'm not aware of an alternative to "wireheading" with the same meaning.
Good one.
http://en.wikipedia.org/wiki/Lotus-eaters
That's the ancient Greeks writing about hypothetical wireheads. ('Moral philosophy' is perhaps a bad choice of phrase to search for when looking for Greek material; 'ethics' is the Greek word.)
1Emile11y
A bit of searching around that turned up almost no references to lotus eating / lotus eaters in moral philosophy.
Something much closer to "wireheading" would be hedonism, and more specifically
Nozick's Experience Machine
[http://en.wikipedia.org/wiki/The_Experience_Machine], which is pretty much
wireheading, but isn't thousands of years old, and has been referenced here
[http://lesswrong.com/lw/65w/not_for_the_sake_of_pleasure_alone/].
(And the term "wirehead" as used here probably comes from the Known Space
[http://en.wikipedia.org/wiki/Known_Space] stories, so probably predates
Nozick's 1974 book)
3Dmytry11y
Well, for one thing, it ought to be obvious that Mohammed would have banned a wire into the pleasure centre; lacking the wires, he just banned alcohol and other intoxicants. The concept of 'wrong' ways of seeking pleasure is very, very old.
2Rhwawn11y
I don't think you looked very hard - I turned up a few books apparently on moral
philosophy by searching in Google Books for 'moral ("lotus eating" OR
"lotus-eating" OR "lotus eater" OR "lotus-eater")'.
And yes, I'm pretty sure the wirehead term comes from Niven's Known Space. I've
never seen any other origin discussed.
0Desrtopa11y
It would be awfully hard to hide.
Sure, it could lock the monster in an illusory world of optimal happiness, or
just stimulate his pleasure centers directly, etc. But unless we assume that the
AI is working under constraints that prevent that sort of thing, the comic
doesn't make much sense.
0Dmytry11y
There's no clear line between 'hiding' and 'not showing'. You can leave just a million people or so to be put around the monster, and simply not show him the rest. It's not as if the AI is turning every wall into a screen displaying the suffering at the pyramid construction sites. Or you can kill those people and present it in such a way that the monster derives pleasure from it. At any rate, anyone whose death would go unnoticed by the monster, or whose death does not sufficiently distress the monster, would die, if the AI is to focus on average pleasure.
edit: I think those solutions come to mind really easily when you know what a Soviet factory would do to exceed the five-year plan.
1Desrtopa11y
The AI explicitly wasn't focused on average pleasure, but on total pleasure, as
measured by average pleasure times the population.
2Dmytry11y
Yep. I was just noting what an average-pleasure-maximizing AI would do; that isn't part of the story.
0nonplussed10y
You're all wrong — if the happiness of the utility monster compounds as the
comic says, then you get greater happiness out of lumping it all into one
monster rather than cloning.
0Jonathan_Graehl11y
Whoops. Panel 3 (y-axis caption) and panel 6 (suicide not allowed) indeed make that clear.
It is a good thing that you are thinking good things about Felix. This means he is happier if you aren't in the corn field, since you are a good person with no bad thoughts.
6NancyLebovitz11y
I'm not sure why the down vote.
If it helps, Konkavistador and I are referring to a classic horror story called
"It's a Good Life".
Felix means happy (or lucky), and is the origin of the word felicity. It took me a while to realize this, so I thought I would note it. Is it obvious to all native English speakers?
Not obvious to me. I did know the meaning of Felix, but it's deep enough in the
unused drawers of my memory that I might never have made the connection without
someone pointing it out.
2[anonymous]11y
It was obvious to me, I'm not a native English speaker. Anyone knowing a bit of
Latin is probably going to catch it.
0[anonymous]11y
I am a native English speaker, but, yeah, without the Latin I probably wouldn't
have noticed.
1TimS11y
Not obvious to me. I honestly would never have made the connection.
Everyone's talking about this as if it were a hypothetical, but as far as I can tell it describes pretty accurately how hierarchical human civilizations tend to organize themselves once they hit a certain size. Isn't a divine ruler precisely someone who is more deserving and more able to absorb resources? Aren't the lower orders people who would not appreciate luxuries, and who have indeed fully internalized that fact ("Not for the likes of me")?
If you skip the equality requirement, it seems history is full of utilitarian societies.
And since I don't want that ability I think we are still fine. At the end of the
day I'm perfectly ok with not caring about Felix that much.
0[anonymous]11y
BTW, would anyone have a one on one chat with me about the dust speck argument?
-11Thomas11y
5wedrifid11y
Which sequence is that?
-2Thomas11y
This one. [http://lesswrong.com/lw/kn/torture_vs_dust_specks/]
I am not sure if it counts as part of "The Sequences"; I guess it does.
The problem is the line of reasoning where "50 years of torture" is better than 3^^^3 years with a dust speck in the eye every so often.
What, then, is the torture of all of humanity against the super-happy Felix with his 3^^^3 pyramids? Nothing, by the same line of reasoning.
9ArisKatsaris11y
That's not even the dilemma you linked to. The dilemma you linked to is whether "one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes".
It's probably bad practice to say two lines of reasoning are the same line of reasoning if you don't believe in either of them.
For starters, I don't need to have a positive factor for Felix's further happiness in my utility function. That alone is a significant difference.
Look. You have one person under terrible torture for 50 years on one side, and a gazillion people with a slight discomfort every year or so on the other side.
It is claimed that the first is better.
Now, you have a small humanity, as is, only enslaved to build pyramids for Felix. He has eons of subjective time to enjoy these pyramids and he is unbelievably happy. Happier than any man, woman or child could ever be. The amount of Felix's happiness outweighs the misery of billions of people by a factor of a million.
What's the fundamental difference between those two cases? I don't see it, do you?
The only similarity between those cases is that they involve utility calculations you disagree with. Otherwise every single detail is completely different (e.g. the sort of utility considered, two negative utilities being traded against each other vs. trading utility elsewhere (positive and negative) for positive utility, which side of the trade the single person with the large individual utility difference is on, the presence of perverse incentives, etc., etc.).
If anything it would be more logical to equate Felix with the tortured person and treat this as a reductio ad absurdum of your position on the dust speck problem. (But that would be wrong too, since the numbers aren't actually the problem with Felix, the fact that there's an incentive to manipulate your own utility function that way is (among other things).)
You aren't seeing the forest for the trees... the thing that is identical is that you are trading utilities across people, which is fundamentally problematic and leads to either a tortured child [http://harelbarzilai.org/words/omelas.txt] or a utility monster, or both.
5WrongBot11y
Omelas is a goddamned paradise. Omelas without the tortured child would be
better, yeah, but Omelas as described is still better than any human
civilization that has ever existed. (For one thing, it only contains one
miserable child.)
-4Dmytry11y
Well, it seems to me they are trading N dust specks vs torture in Omelas.
edit: Actually, I don't like Omelas [as an example]. I think that miserable child would only make the society much worse, with people opting to e.g. kill someone whenever it ever so slightly increases their personal expected utility. The child in Omelas puts them straight onto the slippery slope, and making everyone aware of the slippage makes people slide down for fun and profit.
Our 'civilization', though, is of course a goddamn jungle, and so it's pretty damn bad. It's pretty hard to beat on the moral-wrongness scale from first principles; you have to take our current status quo and modify it to get something worse (or take our earlier status quo).
2WrongBot11y
Your edit demonstrates that you really don't get consequentialism at all. Why
would making a good tradeoff (one miserable child in exchange for paradise for
everyone else) lead to making a terrible one (a tiny bit of happiness for one
person in exchange for death for someone else)?
-7Dmytry11y
0FAWS11y
This is either wrong (the utility functions of the people involved aren't
queried in the dust speck problem) or so generic as to be encompassed in the
concept of "utility calculation".
Aggregating utility functions across different people is an unsolved problem,
but not necessarily an unsolvable one. One way of avoiding utility monsters
would be to normalize utility functions. The obvious way to do that leads to
problems such as arachnophobes getting less cake even if they like cake just as much, but IMO that's better than utility monsters.
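(A minimal sketch of that normalization idea, with invented names and numbers; it only illustrates how rescaling each person's utility to [0, 1] caps a utility monster's weight, and hints at the arachnophobe/cake cost mentioned above.)

```python
# Hypothetical illustration: normalize each agent's utility over the available
# outcomes to [0, 1] before summing, so no single agent can dominate the total.

def normalize(utilities):
    lo, hi = min(utilities.values()), max(utilities.values())
    if hi == lo:
        return {k: 0.0 for k in utilities}
    return {k: (v - lo) / (hi - lo) for k, v in utilities.items()}

felix = {"build_pyramids": 1e9, "leave_everyone_alone": 0.0}   # the utility monster
human = {"build_pyramids": -10.0, "leave_everyone_alone": 0.0}
agents = [felix] + [dict(human) for _ in range(1000)]

def total(agents, outcome, norm=False):
    return sum((normalize(a) if norm else a)[outcome] for a in agents)

for outcome in ("build_pyramids", "leave_everyone_alone"):
    print(outcome,
          "raw:", total(agents, outcome),
          "normalized:", total(agents, outcome, norm=True))
# Raw totals let Felix dominate (pyramids win); normalized totals cap his vote
# at 1.0, so leaving everyone alone wins. The cost: an arachnophobe whose worst
# outcome involves spiders gets their cake preference compressed by the same
# rescaling, even if they like cake just as much as anyone else.
```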
2Dmytry11y
The utilities of many people form a vector; you have to map it to a scalar value, which loses a lot of information in the process, and it seems to me that however you do it, you get some sort of objectionable outcome. edit: I have a feeling one could define it reasonably with some sort of Kolmogorov-complexity-like metric that would grow incredibly slowly for the dust specks and would never equal whatever hideously clever thing our brain does to most of its neurons when we suffer; the suffering beats the dust specks on complexity (you'd have to write down the largest number you can express in as many bits as are being tortured in the brain; only then would that number of dust specks start getting to the torture level). We need to understand how pain works before we can start comparing pain vs dust specks.
0billswift11y
Really? Every use of utilities I have seen either uses a real-world measure (such as money), with a note that it isn't really utilities, or goes directly for unfalsifiable handwaving. So far I haven't seen anything to suggest "aggregating utility functions" is even theoretically possible. For that matter, most of what I have read suggests that even an individual's "utility function" is usually unmanageably fuzzy, or even unfalsifiable, itself.
-8Thomas11y
7Nornagest11y
Felix is essentially a Utility Monster
[http://en.wikipedia.org/wiki/Utility_monster]: a thought experiment that's been
addressed here before
[http://lesswrong.com/lw/44o/value_stability_and_aggregation/]. As that family
of examples shows, happiness-maximization breaks down rather spectacularly when
you start considering self- or other-modification or any seriously unusual
agents. You can bite that bullet, if you want, but not many people here do;
fortunately, there are a few other ways you can tackle this if you're interested
in a formalization of humanlike ethics. The "Value Stability and Aggregation"
post linked above touches on the problem, for example, as does Eliezer's Fun
Theory [http://lesswrong.com/lw/xy/the_fun_theory_sequence/] sequence.
You don't need any self-modifying or non-humanlike agents to run into problems
related to "Torture vs. Dust Specks", though; all you need is to be maximizing
over the welfare of a lot of ordinary agents. 3^^^3 is an absurdly huge number
and leads you to a correspondingly counterintuitive conclusion (one which,
incidentally, I'd estimate has led to more angry debate than anything else on
this site), but lesser versions of the same tradeoff are quite realistic; unless
you start invoking sacred vs. profane values or otherwise define the problem
away, it differs only in scale from the same utilitarian calculations you make
when, say, assigning chores.
5[anonymous]11y
In one case (torture to avoid the specks), the larger portion of people is better off if you pick the single person. In the other case (build pyramids to please Felix), the larger portion of people is worse off if you pick the single person.
So if my position were "the majority should win", it would be right to torture the person and it would be right to depose Felix.
I'm not sure if it's a fundamental difference or a good difference, but I think
that means I can lay out the following 4 distinct answer pairs:
Depose Felix, Torture Man: Majority wins.
Adore Felix, Speck people: Minority wins.
Adore Felix, Torture Man: Mean Happiness wins.
Depose Felix, Speck People: Minimum happiness wins. (Assuming either Felix is
happier about being deposed than an average person with a dust speck in their
eye, or dead, and no longer counted for minimum happiness.)
So I think I can see all 4 distinct positions, if I'm not missing something.
Edit: Fixed spacing.
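(A toy sketch of those four rules, with numbers invented purely so that each aggregation function reproduces the answer pair listed above; nothing here is taken from the comic or the original dilemma.)

```python
# Hypothetical welfare numbers per person under each option in each scenario.
N = 1_000_000  # bystanders; large enough that one torture barely moves the mean

def torture_vs_specks():
    torture = [-100.0] + [0.0] * N      # one person horribly tortured, rest untouched
    specks = [-0.001] * (N + 1)         # everyone gets a dust speck
    return {"Torture Man": torture, "Speck People": specks}

def felix():
    adore = [1e9] + [-10.0] * N         # Felix ecstatic, everyone else enslaved
    depose = [-5.0] + [0.0] * N         # Felix mildly unhappy, everyone else fine
    return {"Adore Felix": adore, "Depose Felix": depose}

rules = {
    "majority (median person)": lambda h: sorted(h)[len(h) // 2],
    "minority (the one person)": lambda h: h[0],
    "mean happiness": lambda h: sum(h) / len(h),
    "minimum happiness": min,
}

def pick(options, rule):
    # choose the option whose aggregated score is highest under this rule
    return max(options, key=lambda name: rule(options[name]))

for name, rule in rules.items():
    print(f"{name}: {pick(felix(), rule)}, {pick(torture_vs_specks(), rule)}")
# majority -> Depose Felix, Torture Man
# minority -> Adore Felix, Speck People
# mean     -> Adore Felix, Torture Man
# minimum  -> Depose Felix, Speck People
```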
1Thomas11y
Imagine that there is one person tortured for 50 years and then free of any dust specks for the next 3^^^3 years.
Then we don't have "the larger portion of people" anymore. Is anything different
in such a case?
2ArisKatsaris11y
If I understand the dilemma, in your most recent phrasing, it's this: A person
who lives 3^^^3 years either:
a) has to suffer a dustspeck per year
b) has to suffer 50 years of torture at some point in that time, then I assume
gets the memory of that torture deleted from his mind and his mind's state
restored to what it was before the torture (so that he doesn't suffer further
disutility from that memory or the broken mind-state, he only has to suffer the
torture itself). He lives the remaining 3^^^3 years dustspeck-free.
If we don't know what his own preferences are, and have no way of asking him,
what should we choose on his behalf?
But what does this have to do with Felix?
-1Thomas11y
It is argued in the said sequence how much better it is to have one person tortured for 50 years than 3^^^3 people having a slight discomfort.
Which preferences are in question now?
3ArisKatsaris11y
Can we have one dilemma at a time, please, Thomas? You said something about
3^^^3 years -- therefore you're not talking about the dilemma as stated in the
original sequence, as that dilemma doesn't say anything about 3^^^3 years.
The preferences relating to the original dilemma are the preferences of the person who presumably prefers not to get tortured, vs the preferences of 3^^^3 people who presumably prefer not to get a dust speck in the eye.
1[anonymous]11y
Well, first of all, I'm assuming that you're doing that to both groupings (since otherwise I could say "Well, one has only one person and one has a massive number of people, which is a difference", but that seems like a trivial point).
So if you apply it to both, then it's just one person considering tradeoff A (pay torture to go speck-free for eons), and another person considering tradeoff B (personally build pyramids for eons to get to live in your own collection of pyramids for some years).
I could say that in one case the pain is relatively dense (torture, condensed into 50 years) and the pleasure is relatively sparse (speck-free, over 3^^^3 years), and that in the other case the pain is relatively sparse (slave labor, spread out over a long time) and the pleasure is relatively dense (incomprehensible pyramidgasm).
I'm not sure if that matters or in what ways that difference matters. I'm really
not up to date on how your brain handles that specifically and would probably
need to look it up further.
2Thomas11y
No. Building pyramids as humans. And having them enjoyed much, much longer as they stand there, by Felix.
Maybe the amount of our pleasure in the Giza pyramids has already exceeded the pain invested in building them. I don't know.
Can all the pains of a slave be justified by all the pleasures of the tourists visiting the hole in the rock he was forced to carve for 50 years?
Or is a large group of sick sadists entitled to slowly torture someone, since the sum of their pleasure will be greater than the pain of the unlucky one?
I don't think so.
3A1987dM11y
I've heard that the labourers who made the pyramids were actually quite well
paid.
2Rhwawn11y
Was it that much pain? I read in National Geographic, IIRC, that the modern
archaeological conception was that the pyramids were mostly or entirely built by
paid labor - Nile farmers killing time during the dry season. This may even be a
good thing, depending on whether it diverted imperial tax revenue from foreign
adventurism into monument/tomb-building.
-1Thomas11y
I saw that, too. I'll have to use another example. Mayan or Aztec pyramids, maybe.
3Rhwawn11y
Well, it's still a fun Fermi calculation problem, anyway.
Let's see, the Pyramids have been the targets of tourism since at least the
original catalogue of wonders of the ancient world, Antipater of Sidon ~140 BC
which includes "the great man-made mountains of the lofty pyramids". So that's
~2150 years of tourism (2012+140). Quickly checking, Wikipedia says 12.8 million
people visited Egypt [https://en.wikipedia.org/wiki/Tourism_in_Egypt] for
tourism in 2008, but surely not all of them visited the pyramids? Let's halve it
to 6 million.
Let's pretend Egyptian tourism followed a linear growth between 140 BC with one
visitor (Antipater) and 6 million in 2012 (yes, world population & wealth has
grown and so you'd expect tourism to grow a lot, but Egypt has been pretty
chaotic recently), over 2150 years. We can just average that to 3 million a
year, which gives us a silly total number of tourists of 2150 * 3 million or
6.45 billion visitors.
There are 138 pyramids, WP says, with the Great Pyramid estimated at 100,000
workers. Let's halve it (again with the assumptions!) at 50k workers a pyramid,
50,000 * 138 = 6.9m workers total.
This gives us the visitor:worker ratio of 6.45b:6.9m, or 21,500:23, or 934.8:1.
And of course the pyramids are still there, so whatever the real ratio, it's
getting better (modulo issues of maintenance and restoration).
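(The same back-of-the-envelope calculation written out as a few lines of Python; every number is one of the assumptions stated above, nothing more.)

```python
# Linear Fermi estimate of pyramid visitors vs pyramid builders, using the
# assumed figures from the comment above.
years = 2150                  # ~2150 years of tourism since Antipater of Sidon (140 BC to 2012)
avg_visitors = 3e6            # half of ~6 million/year, itself half of Egypt's 12.8M 2008 tourists
total_visitors = avg_visitors * years          # 6.45 billion visitors

workers = 138 * 50_000        # 138 pyramids at ~50k workers each = 6.9 million workers

ratio = total_visitors / workers
print(f"{total_visitors:.3g} visitors : {workers:.3g} workers = {ratio:.1f} : 1")  # ~934.8 : 1
```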
1Thomas11y
Maybe those pyramids in Egypt are not so bad, after all.
But with how much tourism can you justify the Aztec pyramids, where people were slaughtered?
How many billions of tourists would have to come to make it worth starting with them all over again?
1Rhwawn11y
You'd need a heck of a lot more tourism than for Egypt... although apparently
there's quite a range
[http://en.wikipedia.org/wiki/Human_sacrifice_in_Aztec_culture#Estimates_of_the_scope_of_the_sacrifices]
of estimates of deaths, from less than 20,000 a year to more than 200,000 a
year. Given the substantially less tourism to the Aztec pyramids (inasmuch as
apparently only 2 [http://en.wikipedia.org/wiki/List_of_Mesoamerican_pyramids]
small unimpressive Aztec pyramids survive, with all the impressive ones like
Tenochtitlan destroyed), it's safe to say that the utilitarian calculus will
never work out for them.
3Ghatanathoah11y
It seems to me that any historical event that was both painful to the
participants, and interesting to read and learn about after the fact, creates
the same dilemma that's been discussed here. Will World War Two have been a net
good if 10,000 years from now trillions of people have gotten incredible
enjoyment from watching movies, reading books, and playing videogames that
involve WWII as a setting in some way?
The first solution to this dilemma that comes to mind is that ready substitutes
exist for most of the entertainments associated with these unpleasant events. If
the Aztecs had built their pyramids and then never sacrificed anyone on them it
probably wouldn't hurt the modern tourist trade that much. And if WWII had never
happened and thus caused the Call of Duty
[http://en.wikipedia.org/wiki/Call_of_Duty] videogame franchise to never exist,
it wouldn't have a big impact on utility because some cognates of the Doom
[http://en.wikipedia.org/wiki/Doom_%28video_game%29], Unreal
[http://en.wikipedia.org/wiki/Unreal_%28series%29], and similar franchises would
still exist (those franchises are based on fictional events, so no one got hurt
inspiring them).
In fact, if I was to imagine an alternate human history where no war, slavery,
or similar conflict had ever happened, and the inhabitants got all their
enjoyment from entertainment media based on fictional conflicts, I think such a
world would have a much higher net utility than our own.
3ikrase10y
Big romances have been inspired by much smaller events, it should be noted.
0A1987dM11y
The first approximation that springs to my mind would be exponential growth rather than linear.
1gwern11y
Sure - but can you offhand fit an exponential curve and calculate its summation?
I'm sure it's doable with the specified endpoints and # of periods (just steal a
simple interest formula), but it's more work than halving and multiplying.
4A1987dM11y
Well... the integral from t0 to t1 of exp(a*t+b) dt is (exp(a*t1+b) - exp(a*t0+b))/a, i.e. the difference between the endpoint values times the time needed to increase by a factor of e... A 6-million-fold increase is about 22.5 doublings (knowing 2^20 ≈ 1 million), hence about 15 factors of e (knowing that ln 2 ≈ 0.7), i.e. a growth rate of about one in 150 per year... hence the total number of tourists is about 1 billion (about six times less than Rhwawn's estimate -- my eyeballs had told me it would be about one third... close enough!)
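(The exponential version as a few lines of Python, using the same assumed endpoints of 1 visitor/year in 140 BC and 6 million/year in 2012.)

```python
import math

T = 2150                      # years between the assumed endpoints
v0, v1 = 1.0, 6e6             # visitors per year at the start and the end
a = math.log(v1 / v0) / T     # growth rate: ~15.6 factors of e over 2150 years, ~1/138 per year
total = (v1 - v0) / a         # integral of v0*exp(a*t) from 0 to T is (v1 - v0)/a
print(f"growth rate ~1/{1 / a:.0f} per year, total ~{total / 1e9:.2f} billion tourists")
```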
1gwern11y
I'm actually a little surprised that such a gross approximation puts it off by only 6x. For a Fermi estimate that's perfectly acceptable.
1ArisKatsaris11y
Being very, very outraged isn't really an argument.
Give us your own (non-utilitarian, I assume) decision theory that you consider to encapsulate all that is good and moral, if you please.
If you can't, please stop being outraged at those of us who try to solve the problem, even if you feel we've taken wrong turns on the path towards the solution.
0Cranefly11y
Found this by random clicking around, I expect no one's still reading this, but
maybe we'll catch each other via Inbox:
How about "optimize the worst case" from game theory? It settles both the dust speck vs. torture and the Utility Monster Felix problems neatly.
0FeepingCreature11y
I don't know, 3^^^3 is a pretty long time to fix brain trauma. Or are you
offering complete restoration after the torture? In that case, I might just take
it.
-2Thomas11y
I am not offering anything at all. I strongly advise you NOT to substitute the slight discomfort over a long time period with a horrible torture for a shorter period.
0ArisKatsaris11y
One fundamental difference is that I don't care about Felix's further happiness.
After some point, I may even resent it, which would make his additional happiness a negative utility to me.
Another difference is that happiness may be best represented as a percentage
with an upper bound of e.g. 100% happy, rather than be an integer you can keep
adding to without end.
I think Felix's case may be an interesting additional scenario to consider, in
order to be sure that AIs don't fall victim to it (e.g. by creating a superintelligence and making it super-happy, at the expense of normal human happiness). But it's not the same scenario as the specks.
2Dmytry11y
The FAI should make a drug which will make you happy for Felix. edit: To clarify, the two choices here are not being happy naturally vs being happy via wireheading. The two choices are intense AI-induced 'natural' unhappiness vs drug-induced happiness. It's similar to having your hand amputated, with or without 'wireheading', err, painkillers. I think it is pretty clear that if you have someone's hand amputated, it is better if they can't feel or see it. Be careful with non-wireheading FAIs, lest all surgery be done without anaesthesia (perhaps with only the muscle relaxant).
-1ArisKatsaris11y
Cute, but that's effectively the well-known scenario of Wireheading
[http://wiki.lesswrong.com/wiki/Wireheading] where the complexity of human value
is replaced by mere 'happiness'.
2Dmytry11y
Well, in some sense, achieving happiness by anything other than reproduction is already wireheading. It doesn't need to be with a wire; what if I make a video which evokes an intense feeling of pleasure? How far can you go before it is a mind hack?
edit: Actually, I think the AI could raise people to be very empathetic towards Felix, and very happy for him. Is it not good to raise your kids so that they can be happy in the world the way it is (when they can't change anything anyway)?
3roystgnr11y
"achieving happiness by anything other than [subgoals of] reproduction" is
wireheading from the perspective of my genes, and if they want to object I'm not
stopping them. Happiness via drugs is wireheading from the perspective of me,
and I object myself.
0Dmytry11y
What if there's a double rainbow [http://www.funnyordie.com/videos/dcf83410c7/insane-double-rainbow-guy]? What if you have a lower than 'normal' level of some neurotransmitter and under-appreciate the double rainbow without drugs? What if it's higher than 'normal'?
I'm not advocating drugs, by the way, just pointing out the difficulty of making any binary distinction here. Natural happiness should be preferred to wireheaded happiness, but society does think that some people should take anti-depressants. If you are to labour in the name of the utility monster anyway, you might as well be happy. You object to happiness via drugs as a substitute for happiness without drugs, but if the happiness without drugs is not going to happen - then what?
2ArisKatsaris11y
No. This reduces the words to the point of meaninglessness. Human beings have
values other than reproduction, values that make them happy when satisfied -
art, pride, personal achievement, understanding, etc. Wireheading is about being
made happy directly, regardless of the satisfaction of the various values.
The scenario previously discussed about Felix is that he was happy and everyone
else suffered. Now you're posing a scenario where everyone is happy, but they're
made happy by having their values rewritten to place extreme value on Felix's
happiness instead.
At this point, I hope we're not pretending it's the same scenario with only
minor modifications, right? Your scenario is about the AI rewriting our values,
it's not about trading our collective suffering for Felix's happiness.
Your scenario can effectively remove the person of Felix from the situation
altogether, and the AI could just make us all very happy that the laws of
physics keep on working.
2Dmytry11y
You say art... what if I am a musician and I am making a song? That's good, right? What if I get 100 experimental subjects to sit in an MRI as they listen to test music, and, using my intelligence and some software tools, make a very pleasurable song? What if I know that it works by activating such-and-such connections here and there which end up activating the reward system? What if I don't use the MRI, but use internal data available in my own brain, to achieve the same result?
I know that this is arriving at meaninglessness; I just don't see it as reducing the words anywhere. The words already only seem meaningful within a limited depth of inference, but it all falls apart if you make more inference steps (like an axiomatic system that leads to self-contradiction). Making people happy [as a terminal goal], this way or that, just leads to some form of really objectionable behaviour if done by something more intelligent than a human.
-1ArisKatsaris11y
Be specific about what you are asking, please. What does the "what if" mean here? Whether these things should be considered good? Whether such things should be considered "wireheading"? Whether we want an AI to do such things? What?
This claim doesn't seem to make much sense to me. I've already been made non-objectionably happy by people more intelligent than me from time to time. My parents, when I was a child. Good writers and funny entertainers, as an adult. How does it become automatically "really objectionable" if it's "something more intelligent than human" as opposed to "something more intelligent than you, personally"?
2Dmytry11y
I'm trying to make you think a little deeper about your distinction between wireheading and non-wireheading. The point is that your choice of the dividing line is entirely arbitrary (and most people don't agree on where to put the dividing line). I don't know where you put the dividing line, and frankly I don't care; I just want you to realize that you're drawing an arbitrary line on the beach: to the left of it is the land, to the right is the ocean. edit: That's how maps work, not how the territory works, btw.
I'd say they had a goal of achieving something other than happiness, and the happiness was incidental.
0ArisKatsaris11y
Don't assume you know how deeply I think about it. The only thing I've effectively communicated to you so far is that I consider it ludicrous to say that "achieving happiness by anything other than reproduction is already wireheading".
We can agree Yes/No, that this discussion doesn't have much of anything to do
with the Felix scenario, right? Please answer this question.
Perhaps people don't have to agree, and the people whose coherent extrapolated
volition allows a situation "W" to be done to them, should so have it done to
them, regardless of whether you label W to be 'wireheading' or 'wellbeing'.
Or perhaps not. After all, it's not as if I ever declared Friendliness to be a
solved problem, so I don't know why you keep talking to me as if I claimed it's
easy to arrive at a conclusion.
2Dmytry11y
"Whether such things should be considered "wireheading"?" is what i want you to
consider, yes.
I don't have a binary classifier, absolute wireheading vs non-wireheading. I
have the wireheadedness quantity. Connecting a wire straight into your pleasure
centre will have wireheadedness of (very close to) 1, reproduction (maximization
of expected number of each gene) will have wireheadedness of 0, taking heroin
will be close to 1, taking LSD will be lower, the wireheadedness of the art
varies depending on how much of your brain is involved in making pleasure out of
art (how much involved is the art), and perhaps to how much of a hack the art
is, though ultimately all of art is to greater or lesser extent a hack. edit:
and i am actually earning my living sort of making art (i make CGI software, but
also do CGI myself).
I don't consider the low wireheadedness to be necessarily good. That's the
christian moral connotations, which I do not share as an atheist grown in non
religious family.
0dugancm11y
Happiness, as a state of mind in humans, seems less to me about how strong the
"orgasms" are than how frequently they occur without lessening the probability
they will continue to occur. So what problems might there be with maximizing
total future happy seconds experienced in humans, including emulations thereof
(other than describing with sufficient accuracy the concepts of 'human' and
'happiness' to a computer)?
I think doing so would extrapolate to increasing population and longevity to
within resource constraints and diminishing returns on improving average
happiness uptime and existential risk mitigation, which seem to me to be the
crux of people's intuitions about the Felix and Wireheading problems.
7wedrifid11y
It's hedonistic total-utilitarianism vs preference-based consequentialism.
That's a big difference. Not only would the 'sequence' you reject not advocate
preferring to torture humanity for the sake of making Felix superhappy, even in
the absence of negative externalities it would still consider that sort of
'happiness' production a bad thing even for Felix.
In case anyone is unfamiliar with the concept: Utility Monster.
It's a total-utility maximising AI.
Go classical - 'lotus-eating'.
Why are you wasting your time on-line? Felix wants more pyramids.
Chain gangs strike me as sub-optimal for building pyramids or total happiness.
Clearly, Felix prefers pyramids built by chain-gangs.
It's a GOOD life.
The latest SMBC comic is now an illustrated children's story which more or less brings up parallel thoughts to Cynical about Cynicism.
Another good one on Ethics
Felix is 3^^^3 units happy. And no dust speck in his eyes. What is torturing millions for this noble goal?
I, of course, reject that "sequence" which preaches exactly this.
That's because your brain doesn't have the ability to imagine just how happy Felix is and fails to weigh his actual happiness against humanity's.