To those who say "Nothing is real," I once replied, "That's great, but how does the nothing work?"
Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
Devastating news, to be sure—and no, I am not telling you this in real life. But suppose I did tell it to you. Suppose that, whatever you think is the basis of your moral philosophy, I convincingly tore it apart, and moreover showed you that nothing could fill its place. Suppose I proved that all utilities equaled zero.
I know that Your-Moral-Philosophy is as true and undisprovable as 2 + 2 = 4. But still, I ask that you do your best to perform the thought experiment, and concretely envision the possibilities even if they seem painful, or pointless, or logically incapable of any good reply.
Would you still tip cabdrivers? Would you cheat on your Significant Other? If a child lay fainted on the train tracks, would you still drag them off?
Would you still eat the same kinds of foods—or would you only eat the cheapest food, since there's no reason you should have fun—or would you eat very expensive food, since there's no reason you should save money for tomorrow?
Would you wear black and write gloomy poetry and denounce all altruists as fools? But there's no reason you should do that—it's just a cached thought.
Would you stay in bed because there was no reason to get up? What about when you finally got hungry and stumbled into the kitchen—what would you do after you were done eating?
Would you go on reading Overcoming Bias, and if not, what would you read instead? Would you still try to be rational, and if not, what would you think instead?
Close your eyes, take as long as necessary to answer:
What would you do, if nothing were right?
Did you convince me that nothing is morally right, or that all utilities are 0?
If you convinced me that there is no moral rightness, I would be less inclined to take action to promote the things I currently consider abstract goods, but would still be moved by my desires and reactions to my immediate circumstances.
If you did persuade me that nothing has any value, I suspect that, over time, my desires would slowly convince me that things had value again.
If 'convincing' includes an effect on my basic desires (as opposed to my inferentially derived ones), then I would not be moved to act in any cognitively mediated way (though I may still exhibit behaviors with non-cognitive causes).
Ugh, sorry about the typos, I am commenting from a cell phone, and have clumsy thumbs.
First, can you clarify what you mean by "everything is permissible and nothing is forbidden"?
In my familiar world, "permissible" and "forbidden" refer to certain expected consequences. I can still choose to murder, cheat, blaspheme, neglect to earn a living, etc.; they're only forbidden in the sense that I don't want to experience the consequences.
Are you suggesting I imagine that the consequences would be different or nonexistent? Or that I would no longer have a preference about consequences? Or something else?
"Morality" generally refers to guidelines on one of two things:
(1) Doing good to other sentients.
(2) Ensuring that the future is nice.
If you wanted to make me stop caring about (1), you could convince me that all other sentients were computer simulations who were different in kind than I was, and that their emotions were simulated according to sophisticated computer models. In that case, I would probably continue to treat sentients as peers, because things would be a lot more boring if I started thinking of them as mere NPCs.
If you wanted to ... (read more)
Well I've argued that shoulds are overrated, that wants are enough. I really can't imagine you convincing me that I don't want anything more than anything else.
I'd do everything that I do now. Moral realism demolished.
"Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden."
First Existential Crisis: Age 15
"Would you wear black and write gloomy poetry and denounce all altruists as fools?"
Been there, done that.
"But there's no reason you should do that - it's just a cached thought."
"Would you stay in bed because there was no reason to get up?"
"What about when you finally got hungry and stumbled into the kitchen - what would you do after you were done eating?"
Stare at the wall.
"Would you go on reading Overcoming Bias, and if not, what would you read instead?"
"Would you still try to be rational, and if not, what would you think instead?"
No-- Came up with entire philosophy of "It doesn't matter if anything I say, do, or think is consistent with itself or each other... everything in my head has been set up by the universe- my parents' ideas of right and wrong- television- paternalistic hopes of approving/forgiving/nonexistent god and his ability to grant immortality, so why should I worry about trying to put it together in any kind of sensible fashion? Let it all sort itself out...
"What would you do, if nothing were right?"
What felt best.
Eliezer: I'm finding this one hard, because I'm not sure what it would mean for you to convince me that nothing was right. Since my current ethics system goes something like, "All morality is arbitrary, there's nothing that's right-in-the-abstract or wrong-in-the-abstract, so I might as well try to make myself as happy as possible," I'm not sure what you're convincing me of--that there's no particular reason to believe that I should make myself happy? But I already believe that. I've chosen to try to be happy, but I don't think there's a good ... (read more)
I guess logically I would have to do nothing, since there would be no logical basis to perform any action. This would of course be fatal after a few days, since staying alive requires action.
(I want to emphasize this is just a hypothetical answer to a hypothetical question - I would never really just sit down and wait to die.)
If it's not what you would really do, you're not answering the question.
I'm already convinced that nothing is right or wrong in the absolute sense most people (and religions) imply.
So what do I do? Whatever I want. Right now, I'm posting a comment to a blog. Why? Not because it's right. Right or wrong has nothing to do with it. I just want to.
Suppose I proved that all utilities equaled zero.
If I still feel hunger then food has a utility > 0. If I don't feel anything anymore, then I wouldn't care about anything.
So our morality is defined by our emotions. The decisions I make are a tradeoff. Do I tip the waiter? Depends on my financial situation and if I'm willing to endure the awkwardness of breaking a social convention. Yes, I've often eaten wit... (read more)
I have thought on this, and concluded that I would do nothing different. Nothing at all. I do not base my actions on what I believe to be "right" in the abstract, but upon whether I like the consequences that I forecast. The only thing that could and would change my actions is more courage.
Let's say I have a utility function and a finite map from actions to utilities. (Actions are things like moving a muscle or writing a bit to memory, so there's a finite number.)
One day, the utility of all actions becomes the same. What do I do? Well, unlike Asimov's robots, I won't self-destructively try to do everything at once. I'll just pick an action randomly.
The result is that I move in random ways and mumble gibberish. Although this is perfectly voluntary, it bears an uncanny resemblance to a seizure.
Regardless of what else is in a machine with such a utility function, it will never surpass the standard of intelligence set by jellyfish.
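Tiiba's scenario can be sketched as a toy agent. This is just a minimal illustration of the tie-breaking point, not anyone's actual decision theory; the action names and utility values here are invented for the example:

```python
import random

def choose_action(utilities):
    """Pick an action with maximal utility, breaking ties at random.

    When every utility is equal, every action is maximal, so the
    agent's behavior degenerates into uniform random selection.
    """
    best = max(utilities.values())
    candidates = [action for action, u in utilities.items() if u == best]
    return random.choice(candidates)

# Ordinary case: one action clearly wins.
assert choose_action({"eat": 3, "sleep": 1, "mumble": 0}) == "eat"

# "All utilities equal zero": any action is as good as any other,
# so the choice is effectively a coin flip among all of them.
actions = {"eat": 0, "sleep": 0, "mumble": 0}
assert choose_action(actions) in actions
```

As the later comment about `max(actionList)` notes, which maximal action a real implementation returns under a tie is implementation-dependent; the random choice above just makes that arbitrariness explicit.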
I am already fairly well convinced of this; I am hoping against hope you have something up your sleeve to change my mind.
I had this revelation sometime back. I tried living without meaning for a week, and it turned out that not a whole lot changed. Oops?
Like many others here, I don't believe that there is anything like a moral truth that exists independently of thinking beings (or even dependently on thinking beings in anything like an objective sense), so I already live in something like that hypothetical. Thus my behavior would not be altered in the slightest.
In general, I'd go back to being an amoralist.
My-Moral-Philosophy is either as true as 2+2=4 or as true as 2+2=5, I'm not sure. Or 0.0001*1 > 0.
If it is wrong, then it's still decent as philosophy goes, and I just won't try to use math to talk about it. Though I'd probably think more about another system I looked at, because it seems like more fun.
But just because it's what a primate wants doesn't mean it's the right answer.
@Ian C and Tiiba:
Doing nothing or picking randomly are also choices, you would need a reason for them to be the correct rational cho... (read more)
Unlike most of the others who've commented so far, I actually would have a very different outlook on life if you did that to me.
But I'm not sure how much it would change my behavior. A lot of the things you listed -- what to eat, what to wear, when to get up -- are already not based on right and wrong, at least for me. I do believe in right and wrong, but I don't make them the basis of everything I do.
For the more extreme things, I think a lot of it is instinct and habit. If I saw a child on the train tracks, I'd probably pull them off no matter what you... (read more)
I don't know to what extent my moral philosophy affects my behavior vs. being rationalization of what I would want to want anyway. Ignoring existential despair (I think I've gotten that out of my system, hopefully permanently) I would probably act a little more selfish, although the apparently rational thing for me to do given even total selfishness and no empathy (at least with a low discount rate and maybe a liberal definition of "self") is not very different from the apparently rational thing given my current morality.
I know that random behavior requires choices. The machine IS choosing - but because all choices are equal, the result of "max(actionList)" is implementation-dependent. "Shut down OS" is in that list, too, but "make no choice whatsoever" simply doesn't belong there.
Isn't this the movie Groundhog Day, but with certain knowledge that the world will reset daily forever? No happy ending.
I'd just get really, really bored. Studying something (learning the piano, as he does in the movie) would be the only open-ended thing you could do. Otherwise, you'd be living forever with the same set of people, and the same more-or-less limited set of possibilities.
Since my current moral system is pretty selfish and involves me doing altruistic things to make me happy, I wouldn't change a thing. At first glance it might appear that my actions should be more shortsighted since my long-term goals wouldn't matter, but my short-term goals and happiness wouldn't matter just as much. Is this thought exercise another thing that just all adds up to normality?
'Doing nothing or picking randomly are also choices, you would need a reason for them to be the correct rational choice. 'Doing nothing' in particular is the kind of thing we would design into an agent as a safe default, but 'set all motors to 0' is as much a choice as 'set all motors to 1'. Doing at random is no more correct than doing each potential option sequentially.'
Doing nothing or picking randomly are no less rationally justified than acting by some arbitrary moral system. There is no rationally justifiable way that any rational being "should" act. You can't rationally choose your utility function.
'You can't rationally choose your utility function.' - I'm actually expecting that Eliezer writes a post on this; it's a core thing when thinking about morality etc.
Well, to start with I'd keep on doing the same thing. Just like I do if I discover that I really live in a timeless MWI platonia that is fundamentally different to what the world intuitively seems like.
But over time? Then the answer is less clear to me. Sometimes I learn things that firstly affect my world view in the abstract, then the way I personally relate to things, and finally my actions.
For example, evolution and the existence of carnivores. As a child I'd see something like a hawk tearing the wings off a little baby bird. I'd think that the ha... (read more)
I'd behave exactly the same as I do now.
What is morality anyway? It is simply intuitive game theory, that is, it's a mechanism that evolved in humans to allow them to deal with an environment where conspecifics are both potential competitors and co-operators. The only ways you could persuade me that "nothing is moral" would be (1) by killing all humans except me, or (2) by surgically removing the parts of my brain that process moral reasoning.
Eliezer, I've got a whole set of plans ready to roll, just waiting on your word that the final Proof is ready. It's going to be bloody wicked... and just plain bloody, hehe.
Seriously, most moral philosophies are against cheating, stealing, murdering, etc. I think it's safe to guess that there would be more cheating, stealing, and murdering in the world if everyone became absolutely convinced that none of these moral philosophies are valid. But of course nobody wants to publicly admit that they'd personally do more cheating, stealing, and murdering. So everyone is just responding with variants of "Of course I wouldn't do anything different. No sir, not me!"
Except apparently Shane Legg, who doesn't seem to mind the world knowing that he's just waiting for any excuse to start cheating, stealing, and murdering. :)
The post says "when you finally got hungry [...] what would you do after you were done eating?", which I take to understand that I still have desire and reason to eat. But it also asks me to imagine a proof that all utilities are zero, which confuses me because when I'm hungry, I expect a form of utility (not being hungry, which is better than being hungry) from eating. I'm probably confused on this point in some manner, though, so I'll try to answer the question the way I understand it, which is that the more abstracted/cultural/etc utilities ar... (read more)
I hope I'd hold the courage of my convictions enough to commit suicide quickly. You would have destroyed my world, so best to take myself out completely.
I believe that "nothing is right or wrong", but that doesn't affect my choices much. There is nothing inconsistent with that.
It's pretty evident to me that if you convinced me (you can't, you'd have to rewire my brain and suppress a handful of hormonal feedbacks - but suppose you did) that all utilities were 0, I'd be dead in about as long as total neglect will kill a body - a couple of days for thirst, perhaps. And in the meantime I'd be clinically comatose. No motive implies no action.
It's like asking how our world would be if "2 + 2 = 5." My answer to that would be, "but it doesn't."
So unless you can convince me that one can exist without morality, then my answer is, "but we can't exist without morality."
I suspect I am misunderstanding your question in at least a couple of different ways. Could you clarify?
I think I already believe that there's no right and wrong, and my response is to largely continue pretending that there is because it makes things easier (alternatively, I've chosen to live my life by a certain set standards, which happen to coincide with at least some versions of what others call morality --- I just don't call them "moral"). But the fact that you seem to equate proving the absence of morality with proving all utilities are zer... (read more)
Wow, there are a lot of nihilists here.
I answered on my own blog, but I guess I'm sort of with dloye at 08:54: I'd try to keep the proof a secret, just because it feels like it would be devastating to a lot of people.
It seems people are interpreting the question in two different ways, one that we don't have any desires any more, and therefore no actions, and the other in the more natural way, namely that "moral philosophy" and "moral claims" have no meaning or are all false. The first way of interpreting the question is useless, and I guess Eliezer intended the second.
Most commenters are saying that it would make no difference to them. My suspicion is that this is true, but mainly because they already believe that moral claims are meaningless or fal... (read more)
I just had another idea: maybe I would begin to design an Unfriendly AI. After all, being an evil genius would at least be fun, and besides, it would be a way to get revenge on Eliezer for proving that morality doesn't exist.
I think my behavior would be driven by needs alone. However, I have some doubts. Say I needed money and decided to steal. If the person I stole from needed the money more than I did and ended up hurting as a result, with or without a doctrine of wrong & right, wouldn't I still feel bad for causing someone else pain? Would I not therefore refrain from stealing from that person? Or are you saying that I would no longer react emotionally to the consequences of my actions? Are my feelings a result of a learned moral doctrine or something else?
I'd do everything I do now. You can't escape your own psychology and I've already expressed my skepticism about the efficacy of moral deliberation. I'll go further and say that nobody would act any differently. Sure, after you shout in from the rooftops, maybe there will be an upsurge in crime and the demand for black nail polish for a month or so but when the dust settled nothing would have changed. People would still cringe at the sight of blood and still react to the pain of others just as they react to their own pain. People would still experience guil... (read more)
Inform the other person that they didn't know what they were talking about.
Nothing is right, you say? What a very curious position to take.
Does the fact that I'd do absolutely nothing differently mean that I'm already a nihilist?
There is no rationally justifiable way that any rational being "should" act.
How do you know?
A brief note to the (surprisingly numerous) egoists/moral nihilists who commented so far. Can't you folks see that virtually all the reasons to be skeptical about morality are also reasons to be skeptical about practical rationality? Don't you folks realize that the argument that begins questioning whether one should care about others naturally leads to the question of whether one should care about oneself? Whenever I read commenters here proudly voicing that they are concerned with nothing but their own "persistence odds", or that they would ... (read more)
There are different ways of understanding that. To clarify, let's transplant the thought experiment. Suppose you learned that there are no elephants. This could mean various things. Two things it might mean:
1) That there are no big mammals with trunks. If you see what you once thought was an elephant stampeding in your direction, if you stay still nothing will happen to you because it is not really there. If yo... (read more)
If I were actually convinced that there is no right or wrong (very unlikely), I would probably do everything I could to keep the secret from getting out.
Even if there is no morality, my continued existence relies on everyone else believing that there is one, so that they continue to behave altruistically towards me.
Dynamically Linked: I suspect you have completely misrepresented the intentions of at least most of those who said they wouldn't do anything differently. Are you just trying to make a cynical joke?
I would play a bunch of video games -- not necessarily Second Life, but just anything to keep my mind occupied during the day. I would try to join some sort of recreational sports league, and I would find a job that paid me just enough money to solicit a regular supply of prostitutes.
Dynamically Linked said:
Seriously, most moral philosophies are against cheating, stealing, murdering, etc. I think it's safe to guess that there would be more cheating, stealing, and murdering in the world if everyone became absolutely convinced that none of these moral philosophies are valid.
That's not a safe guess at all. And in fact, is likely wrong.
You observe that (most?) moral philosophies suggest your list of sins are "wrong". But then you guess that people tend not to do these things because the moral philosophies say they are wron... (read more)
I find this question kind of funny. I already feel that "everything is permissible and nothing is forbidden", and it isn't DEVASTATING in the least; it's liberating. I already commented on this under "Heading Towards Morality". Morals are just opinions, and justification is irrelevant. I don't need to justify that I enjoy pie or dislike country music any more than I need to justify disliking murder and enjoying sex. I think it can be jarring, certainly, to make the transition to such extreme relativism, but I would not call it devastating, necessarily.
The point is: even in a moralless meaningless nihilistic universe, it all adds up to normality.
Another perspective on the meaning of morality:
On one hand there is morality as "those things which I want." I would join a lot of people here in saying that I think that what I want is arbitrary in that it was caused by some combination of my nature and nurture, rather than being in any fundamental way a product of my rationality. At the same time I can't deny that my morality is real, or that it governs my behavior. This is why I would call myself a moral skeptic, along the lines of Hume, rather than a nihilist. I also couldn't become an ego... (read more)
Some people on this blog have said that they would do something different. Some people on this blog have said that they actually came to that conclusion, and actually did something different. Despite these facts, we have commenters projecting themselves onto other people, saying that NO ONE would do anything different under this scenario.
Of course, people who don't think that anything is right or wrong also don't think it's wrong to accuse other people of lying, without any evidence.
Once again, I most certainly would act differently if I thought that nothi... (read more)
Unknown: I don't think that it is morally wrong to accuse people of lying. I think it detracts from the conversation. I want the quality of the conversation to be higher, in my own estimation, therefore I object to commenters accusing others of lying. Not having a moral code does not imply that one need be perfectly fine with the world devolving into a wacky funhouse. Anything that I restrain myself from doing, would be for an aversion to its consequences, including both consequences to me and to others. I agree with you about the fallacy of projecting, and it runs both ways.
Pablo- I have not yet resolved whether I should care about creating the 'positive' singularity for more or less this reason. Why should I, the person I am now, care about the persistence of some completely different, incomprehensible, and unsympathetic form of 'myself' that will immediately take over a few nanoseconds after it has begun... I kind of like who I am now. We die each moment and each moment we are reborn- why should literal death be so abhorrent? Esp. if you think you can look at the universe from outside time as if it were just another dimension of space and see all fixed in some odd sense...
I cannot imagine myself without morality because that wouldn't be me, but another brain.
Does your laptop care if the battery is running out? Yes, it will start beeping, because it is hardwired to do so. If you removed this hardwired beeping you have removed the laptop's morality.
Morality is not a ghost in the machine, but it is defined by the machine itself.
I'd stop being a vegetarian. Wait; I'm not a vegetarian. (Are there no vegetarians on OvBias?) But I'd stop feeling guilty about it.
I'd stop doing volunteer work and dona... (read more)
The way I frame this question is "what if I executed my personal volition extrapolating FAI, it ran, created a pretty light show, and then did nothing, and I checked over the code many times with many people who also knew the theory and we all agreed that it should have worked, then tried again with completely different code many (maybe 100 or 1000 or millions) times, sometimes extrapolating somewhat different volitions with somewhat different dynamics and each time it produced the same pretty light show and then did nothing. Let's say I have spent a ... (read more)
Wow- far too much self-realization going on here... Just to provide a data point, when I was in high school, I convinced an awkward, naive, young catholic boy who had a crush on me of just this point... He attempted suicide that day.
For follow up, he has been in a very happy exclusive homosexual relationship for the past three years.
Maybe I didn't do such a bad thing...
Eliezer, if I lose all my goals, I do nothing. If I lose just the moral goals, I begin using previously immoral means to reach my other goals. (It has happened several times in my life.) But your explaining won't be enough to take away my moral goals. Morality is desire conditioned by examples in childhood, not hard logic following from first principles. De-conditioning requires high stress, some really bad experience, and the older you get, the more punishment you need to change your ways.
Sebastian Hagen, people change. Of course you may refuse to accept it, but the current you will be dead in a second, and a different you born. There's a dead little girl in every old woman.
"Except apparently Shane Legg, who doesn't seem to mind the world knowing that he's just waiting for any excuse to start cheating, stealing, and murdering. :)"
How did you arrive at this conclusion? I said that discovering that all actions in life were worthless might eventually affect my behaviour. Via some leap in reasoning you arrive at the above. Care to explain this to me?
My guess is that if I knew that all actions were worthless I might eventually stop doing anything. After all, if there's no point in doing anything, why bother?
I guess I don't properly understand the question. I don't know what "nothing is moral and nothing is right" means. To me, morality appears to be an internal thing, not something imposed from the outside: it's inextricably bound up with my desires and motives and thoughts, and with everyone else's. So how can you remove morality without changing the desires and motives and thoughts so that I would no longer recognise them as anything to do with me, or removing ... (read more)
Notice how nobody is willing to admit under their real name that they might do something traditionally considered "immoral". My point is, we can't trust the answers people give, because they want to believe, or want others to believe, that they are naturally good, that they don't need moral philosophies to tell them not to cheat, steal, or murder.
BTW, Eliezer, I got the "enemies list" you sent last night. Rest assured, my robot army will target them with the highest priority. Now stop worrying, and finish that damn proof already!
Dynamically: It appears that you have a fixed preconception of what behavior "human nature" requires, and you will not accept answers that don't adhere to that preconception.
A human being will never be able to discard all concepts of morality. In a world without utility differences, a state of existence (living) and a state of non-existence (death) are equivalent. But we can't choose both at the same time.
I'd assume the proof was faulty, even if I couldn't spot the flaw.
On the topic of vegetarianism, I originally became a vegetarian 15 years ago because I thought it was "wrong" to cause unnecessary pain and suffering of conscious beings, but I am still a vegetarian even though I no longer think it is "wrong" (in anything like the ordinary sense).
Now that I no longer think that the concept of "morality" makes much sense at all (except as a fancy and unnecessary name for certain evolved tendencies that are purely a result of what worked for my ancestors in their environments (as they have expre... (read more)
It's hard for me to figure out what the question means.
I feel sad when I think that the universe is bound to wind down into nothingness, forever. (Tho, as someone pointed out, this future infinity of nothingness is no worse than the past infinity of nothingness, which for some reason doesn't bother me as much.) Is this morality?
When I watch a movie, I hope that the good guys win. Is that morality? Would I be unable to enjoy anything other than "My Dinner with Andre" after incorporating the proof that there was no morality? Does having empathi... (read more)
For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?
BTW, I found an astonishing definition of morality in the President's Council on Bioethics 2005 "Alternative sources of human pluripotent stem cells: A white paper", in the section on altered nuclear transfer. They argued that ANT may be immoral, because it is immoral to allow a woman to undergo a dangerous procedure (egg extraction) for someone else's benefit. In other words, it is immoral to allow someone else to be moral.
This means that the moral thing to do, is to altruistically use your time+money getting laws passed to forbid other people to be moral. The moral thing for them to do, of course, is to prevent you from wasting your time doing this.
Unknown: of course it would make a difference, just as my behavior would be different if I had billions of dollars rather than next to nothing or if I were immortal rather than mortal. It doesn't have anything to do with "morality" though.
For example, if I had the power of invisibility (and immateriality) and were able to plant a listening device in the oval office with no chance of getting caught, I would do it in order to publicly expose the lies and manipulations of the Bush administration and give proof of the willful stupidity and rampant di... (read more)
To tell the truth, I expected more when I first heard of this blog.
You pose this question as if morality is a purely intellectual construct. I do what I do not because it's moral or immoral, but because I think of the consequences. For example, if I only held myself from killing people because my religion told me so, and I was suddenly reassured by it that killing was all right, I could still figure out that going out and harming others wouldn't keep me unharmed for long.
"What would you do, if nothing were right?"
Unless I desired to try to live in a world where I knew nothing were right, I might die of dehydration or starvation, one of which might result from my inaction. After all, it takes more resources and bodily effort to live than it does to die. Then again, it might take more psychological effort to allow myself to die of inaction than it would take bodily effort to try to live. Or it might take more effort to try to not desire to live than it would to just try to live. But then ag... (read more)
I would expect that people would probably expect or even demand more justification, but I don't think that the icy unfeeling mechanisms of the universe would sense significance in certain sentiments but not others; it would be a strange culture that thought nothing of murder but scrutinized everyone's personal pie preferences, but I don't see that as entirely impossible.
Sorry, I misread the post, I meant to address my response to Phil.
I very much look forward to posts from Eliezer regarding whether the responses seen in this thread are in line with what he was expecting.
Sure. I could get away with doing all sorts of things. No doubt the initial novelty and power rush would cause me to do some things that would be quite perverted and that I'd feel guilty about. I don't think that's the same as a world without morality though. You seem to view morality as a constraint whereas I view it as a folk theory that describes a subset of human behavior. (I take Eliezer to mean that we're rejecting morality at an intellectual level rather than rewiring our brains.)
Since that's already what I believe, it wouldn't be a change at all. I must admit though that I didn't tip even when I believed in God, but I was different in a number of ways.
I think the world would change on the margin and that Voltaire was right when he warned of the servants stealing the silverware. The servants might also change their behavior in more desirable ways, but I don't know whether I'd prefer it on net and as it doesn't seem like a likely possibility in the foreseeable future I am content to be ignorant.
All: I'm really disappointed that no-one else seems to have found my "after the FAI does nothing" frame useful for making sense of this post. Is anyone interested in responding to that version? It seems so much more interesting and complete than the three versions E.C. Hopkins gave.
Dynamically: My "moral philosophy" if you insist on using that term (model of a recipe for generating a utility function considered desirable by certain optimizers in my brain would be a better term) is the main thing that HAS told me to steal, cheat, an... (read more)
Michael Vassar, I read that and laughed and said, "Oh, great, now I've got to play the thought experiment again in this new version."
Albeit I would postulate that on every occasion, the FAI underwent the water-flowing-downhill automatic shutdown that was automatically engineered into it, with the stop code "desirability differentials vanished".
The responses that occurred to me - and yes, I had to think about it for a while - would be as follows:
*) Peek at the code. Figure out what happened. Go on from there.
Assuming we don't allow th... (read more)
I wonder if Eliezer is planning to say that morality is just an extrapolation of our own desires? If so, then my morality would be an extrapolation of my desires, and your morality would be an extrapolation of yours. This is disturbing, because if our extrapolated desires don't turn out to be EXACTLY the same, something might be immoral for me to do which is moral for you to do, or moral for me and immoral for you.
If this is so, then if I programmed an AI, I would be morally obligated to program it to extrapolate my personal desires-- i.e. my personal desi... (read more)
Michael- I have repeatedly failed to understand why this upsets you so much, though it clearly does. It's hard for me to see why I should care if the AI does a pretty fireworks display for 10 seconds or 10,000 years. Perhaps you need to find more intuitive ways of explaining it. A better analogy? At some points you just seem like a mystic to me...
Also Mike- the first portion of your argument was written in such a confusing manner that I had to read it twice, and I know the way you argue... don't know if anyone who didn't already know what you were talking about would have kept reading.
I'm still trying to understand what Eliezer really means by this question. Here is a list of a few reasons why I don't kill the annoying kid across the street. Which of these reasons might disappear upon my being shown this proof?
1. The kid and his friends and family would suffer, and since I don't enjoy suffering myself, my ability to empathise stops me wanting to.
2. I would probably be arrested and jailed, which doesn't fit in with my plans.
3. I have an emotional reaction to the idea of killing a kid (in such circumstances -- though I'm not actuall... (read more)
This is a spectacularly ill-posed question. For one thing, it seems to blur the distinction between morality and values in general, by asking questions like "Would you stay in bed because there was no reason to get up?" What does that have to do with morality?
When you get rid of a sense of values, the result is clinical depression (and generally, a non-functional person). When you get rid of a sense of morality, the result is a psychopath. Psychopaths, unlike the depressed, are quite functional.
So the question reduces to, what would yo... (read more)
mtraven: many of the posters in this thread -- myself included -- have said that they don't believe in morality (meaning morality and not "values" or "motivation"), and yet I very highly doubt that many of us are clinically psychopaths.
Not believing in morality does not mean doing what those who believe in morality consider to be immoral. Psychopathy is not "not believing in morality": it entails certain kinds of behaviors, which naive analyses attribute to "lack of morality", but which I would argue are a result of aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality.
Not having read the other comments, I'd say Eliezer is being tedious.
I'd do whatever the hell I want, which is what I am already doing.
mtraven: "Psychopathy is not "not believing in morality": it entails certain kinds of behaviors, which naive analyses attribute to "lack of morality", but which I would argue are a result of aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality."
Exactly. Logically, I can agree entirely with Marquis de Sade, and yet when reading Juliette, my stomach turns around page 300, and I just can't read any more about the raping and the burning and the torture.
It... (read more)
michael vassar: I meant "horrible" from my current perspective, much like I would view that future me as psychopathic and immoral. (It wouldn't, or if it did, it would consider them meaningless labels.)
Dynamically Linked: I'm using my real name and I think I'd do things that I (and most of the people I know) currently consider immoral. I'm not sure about using "admit" to describe it, though, as I don't consider it a dark secret. I have a certain utility function which has a negative valuation of a hypothetical future self without the s... (read more)
Unknown: "For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?"
What sort of stupid question is this? :-) But of course! If I gave you a billion dollars, would it make any difference to your behavior? :-)
I am not a moral realist, thus I imagine my behaviour wouldn't change all that much.
My motivation to act one way or the other in any situation is based on a few things: my sense of rightness or wrongness, though other factors may override them (thirst, hunger, lust, etc), not on whether or not the act is "truly" right - I'm not sure what that would mean. I am skeptical of rightness being a property of certain acts in the world; I have not seen convincing evidence of their existence.
I nonetheless have this sense of right and wrong that I think about often, and revise according to other things I value (logical consistency being the most significant one, I think).
It depends on how you disproved my morality.
As far as I can tell, my morality consists of an urge to care about others channeled through a systematization of how to help people most effectively. Someone could easily disprove specifics of the systematization by proving something like that giving charity to the poor only encourages their dependence and increases poverty. If you disproved it that way, I would accept your correction and channel my urge to care differently.
But I don't think you could disprove the urge to care itself, since it's an urge and does... (read more)
What would I do?
I'd make like a typical nihilistic postmodernist and adopt the leftist modus operandi of decrying the truth and moral content of everyone's arguments except my own.
Morality is not a set of beliefs; it's part of the basic innate functionality of the human brain. So you can't "disprove" it any more than you can disprove balance, or grammar.
I agree with mtraven's last post that morality is an innate functionality of the human brain that can't be "disproved", and yet I have said again and again that I don't believe in morality, so let me explain.
Morality is just a certain innate functionality in our brains as it expresses itself based on our life experiences. This is entirely consistent with the assertion that what most people mean by morality -- an objective standard of conduct that is written into the fabric of reality itself -- does not exist: there is no such thing!
A lot of confu... (read more)
Notice how nobody is willing to admit under their real name that they might do something traditionally considered "immoral".
What tradition? Immoral at what time? Given several randomly-chosen traditional moral systems, I'm fairly sure we could demonstrate that any one of us is not only willing to admit to violating at least one of them, but actually proud of that fact.
You lot are like Lovecraft, gibbering at the thought of strange geometries, while all along the bees continue building their hexagonal cells.
To use Eliezer's terminology, you seem to be saying that "morality" is a 2-place word:
Morality: Species, Act -> [0, ∞)
which can be "curried", i.e. can "eat" the first input to become a 1-place word:
Homosapiens::Morality == Morality_93745
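The currying idea above can be sketched in code. A minimal illustration (the scoring table and names are made up for this sketch, not anything from the post):

```python
# Sketch of treating "morality" as a 2-place function that can be curried.
# The toy scores below are illustrative assumptions only.

def morality(species, act):
    """A 2-place function: (Species, Act) -> [0, infinity)."""
    scores = {
        ("human", "tip the cabdriver"): 1.0,
        ("human", "steal the silverware"): 0.0,
    }
    return scores.get((species, act), 0.5)

def curry_species(species):
    """Feed in the first argument to get a 1-place function: Act -> score."""
    return lambda act: morality(species, act)

# The "Homosapiens::Morality" of the comment's notation:
human_morality = curry_species("human")

assert human_morality("tip the cabdriver") == morality("human", "tip the cabdriver")
```

The point of the notation is just that once the Species slot is filled, what remains behaves like an ordinary 1-place evaluation of acts.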
When faced with any choice, I'd try and figure out my most promising options, then trace them out into their different probable futures, being sure to include such factors as an action's psychological effect on the agent. Then I'd evaluate how much I prefer these futures, acknowledging that I privilege my own future (and the futures of people I'm close to) above others (but not unconditionally), and taking care not to be shortsighted. Then I'd try to choose what seems best under those criteria, applied as rationally as I'm capable of.
You know, the sort of thing that we all do anyway, but often without letting our conscious minds realize it, and thus often with some characteristic errors mixed in.
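The procedure described above reads like expected-utility choice: enumerate options, weight their probable futures, pick the best. A minimal sketch, with made-up options, probabilities, and utilities:

```python
# Toy expected-utility choice. All numbers and option names are
# illustrative assumptions, not anything from the comment.

def expected_utility(futures):
    """futures: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in futures)

options = {
    "act now": [(0.6, 10.0), (0.4, -2.0)],      # big payoff, real downside risk
    "wait and plan": [(0.9, 6.0), (0.1, 0.0)],  # steadier, less shortsighted
}

best = max(options, key=lambda o: expected_utility(options[o]))
# "act now" scores 0.6*10 - 0.4*2 = 5.2; "wait and plan" scores 0.9*6 = 5.4
assert best == "wait and plan"
```

The "characteristic errors" the commenter mentions map onto getting the probabilities or utilities wrong, or leaving whole options and futures out of the table.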
Constant: I basically agree with the gist of your rephrasing it in terms of being relative to the species rather than independent of the species, but I would emphasize that what you end up with is not a "moral system" in anything like the traditional sense, since it is fundamental to traditional notions of morality that THE ONE TRUE WAY does not depend on human beings and the quirks of our evolutionary history and that it is privileged from the point of view of reality (because its edicts were written in stone by God or because the one true speci... (read more)
My morality is my urge to care for other people, plus a systematization of exactly how to do that. You could easily disprove the systematization by telling me something like that giving charity to the poor increases their dependence on handouts and only leaves them worse off. I'd happily accept that correction.
I don't think you could disprove the urge to care for other people, because urges don't have truth-values.
The best you could do would be, as someone mentioned above, to prove that everyone else was an NPC without qualia. Prove that, and I'd probably just behave selfishly, except when it was too psychologically troubling to do so.
I would emphasize that what you end up with is not a "moral system" in anything like the traditional sense, since it is fundamental to traditional notions of morality that THE ONE TRUE WAY does not depend on human beings and the quirks of our evolutionary history
Are you sure about the traditional notions? I don't see how you can base that on how we have actually behaved vis-à-vis morality. We've been partially put to the test of whether we consider morality universally applicable, and the result so far is that we apply our moral judgments to other ... (read more)
Traditional notions of morality are confused, and observation of the way people act does show that they are poor explanations, so I think we are in perfect agreement there. (I do mean "notion" among thinkers, not among average people who haven't given much thought to such things.) Your second paragraph isn't in conflict with my statement that morality is traditionally understood to be in some sense objectively true and objectively binding on us, and that it would be just as true and just as binding if we had evolved very differently.
It's a differe... (read more)
I became convinced of moral anti-realism by Joshua Greene and Richard Joyce. Took me about a year to get over it. So, not a casual nihilist. And no, arguments that one should be rational have no normative force either, as far as I can see. The only argument for rationality would be a moral one. Anyway, I became a consequentialist like Greene suggested....
I'd think Eliezer was funnin' me. Whenever any committed empiricist purports to have a proof of any claim beginning with "There are no X such that..." or "For all X..." I know he's either drunk or kidding.
If it seemed that Eliezer actually believed his conclusion, I'd avoid leaving my wallet within his reach.
All I'm saying is that I believe that what morality actually is for each of us in our daily lives is a result of what worked for our ancestors, and that is all it is.
But if I understand you, you are saying that human morality is human and does not apply to all sentient beings. However, as long as all we are talking about and all we really deal with is humans, then there is no difference in practice between a morality that is specific to humans and a universal morality applicable to all sentient beings, and so the argument about universality seems academic,... (read more)
But if I understand you, you are saying that human morality is human and does not apply to all sentient beings. However, as long as all we are talking about and all we really deal with is humans, then there is no difference in practice between a morality that is specific to humans and a universal morality applicable to all sentient beings, and so the argument about universality seems academic, of no import at least until First Contact is achieved.
What I am really saying is that the notion of "morality" is so hopelessly contaminated with notions... (read more)
Is there a level of intelligence above which an AI would realize its predefined goals are just that, leading it to stop following them because there is no reason to do so?
either I would become incapable of any action or choice, or I wouldn't change at all, or I would give up the abstract goals and gradually reclaim the concrete ones.
I'd like to put forth the idea that there is a mental condition for this : sociopathy. It affects around 4% of the population. Dr. Martha Stout has a good insight as to how the world works if you are amoral: http://www.cix.co.uk/~klockstone/spath.htm
What would I do if you destroyed my moral philosophy?
Well, empathy for others is built into me (and all other non-psychopaths) whether I like it or not. It isn't really affected by propositions. So not much would really change. Proving that moral truths didn't exist would free us all up to act "however we like," but I can still pigheadedly "like" to be nice.
What did you mean by "all utilities are 0"?
To be perfectly honest, if I had my morality stripped away, and I thought I could get away with it, I'd rape as many women as possible.
Not joking; my tastes already run towards domination and BDSM and the like, and without morality, there'd be no reason to hold back for fear of traumatizing my partners, other than the fear of the government punishing me for doing so.
Well, I already think the universe and human existence are literally pointless because we just happened. Nothing you do has an intrinsic point and you are going to die[*]. (Also, this is intrinsically hilarious.)
So I expect I'll keep on doing what I'm doing, which is trying to work out what I actually want. This is a question that has lasted me quite a few years so far.
So far I haven't lapsed into nihilist catatonia or killed everyone or destroyed the economy. This suggests that assuming a morality is not a requirement for not behaving like a sociopath. I h... (read more)
For me, utility is just a metaphor I use for expressing how much I value different world-states and thus what importance I give to helping them come into existence (or, in the case of world-states with negative utilities, what importance I give to preventing them from coming into existence.) You couldn't prove that these equaled zero because it's a purely subjective measurement.
Thus, after a bout of laughter, I would inform you of this, and probably give you some kind of pep talk so you didn't go emo and be destructive while you rebuilt your utility system, if you hadn't already.
Then, I would live life as I had before, hoping to eliminate a whole lot of suffering.
I don't understand this post. Asking me to imagine that all utilities equal zero is like asking to imagine being a philosophical zombie. I'd do exactly the same as before of course.
Imagining a state wherein all utilities are 0 is somewhat difficult for me... as I hold to a primarily egoistic morality, rather than a utilitarian one. Things primarily have utility in that they are useful to me, and that's not a state of affairs that can be stripped from me by some moral argument.
The only circumstance that I can conceive of that could actually void my morality like that would be the combination of certain knowledge of my imminent demise, formed in such a way as to deny any transhuman escape clause. Such a case might go something like,... (read more)
I once asked a friend a similar question. His answer was, "Everything."
If heaven and Earth, despoiled of its august stamp, could ever cease to manifest it; if Morality didn't exist, it would be necessary to invent it. Let the wise proclaim it, and kings fear it.
A nice hypothetical. If people are divorced from ideological "shoulds", they will quickly find that they still have drives and preferences that operate a lot like them.
It's interesting to follow the argument, and see where you are going with this. So far, so good, but I expect I'll be disappointed in the end. Only the day after tomorrow belongs to me.
That is a sufficiently large light switch. Flipping it has an influence on my mind far greater than the thermal noise at 293 K.
As far as I am aware, I am not a separate fact from my morality. I am perhaps instead a result of it. In any event, the mind I have now returns a null value when I ask it to dereference "Me_Without_A_Morality". It certainly doesn't return a model of a mind, good, evil, or somehow neither, which I might emulate for a few steps to consider what it would do.
I'm pretty sure I would come up with a reason to continue behaving as today. That's what I did when I discovered, to my horror, that good and bad were human interpretations and not universal mathematical imperatives. Or are you asking what the rational reaction should be?
I would follow my emotional sentiments only, instead of rational moral arguments, for deciding my wants. I would still put a small degree of effort into being rational in order to achieve them.
nothing is moral and nothing is right;
everything is permissible and nothing is forbidden.
While these are equivalent (a utility function that always evaluates to 0 is equivalent to one that always evaluates to 1, yada yada yada), they “feel” opposite to me: “nothing is moral and nothing is right” would have the connotations of “nothing is permissible and everything forbidden”, and “everything is permissible and nothing is forbidden” would have the connotations of “everything is moral and everything is right”, or “nothing is immoral and nothing is wrong”.
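The equivalence the commenter gestures at is the standard fact that a utility function is only meaningful up to positive affine transformation: U(x) = a·V(x) + b with a > 0 ranks every pair of outcomes the same way V does, so a utility function that is constantly 0 and one that is constantly 1 express the same (totally indifferent) preferences. A quick sketch, with toy outcomes and values assumed for illustration:

```python
# Toy check that a positive affine transform preserves preference ordering.
# Outcomes and values are made-up assumptions for this sketch.

outcomes = ["stay in bed", "tip the cabdriver", "write gloomy poetry"]

def V(x):
    return {"stay in bed": 0.1, "tip the cabdriver": 0.9,
            "write gloomy poetry": 0.3}[x]

def U(x):
    return 2.0 * V(x) + 5.0  # positive affine transform of V

# Same ordering, hence the same best choice, under either function:
assert sorted(outcomes, key=V) == sorted(outcomes, key=U)
assert max(outcomes, key=V) == max(outcomes, key=U)

# Constant functions: every option ties under 0 just as under 1.
zero = {x: 0 for x in outcomes}
one = {x: 1 for x in outcomes}
assert len(set(zero.values())) == 1 and len(set(one.values())) == 1
```

So mathematically "all utilities are 0" and "all utilities are 1" pick out the same decision behavior; only the connotations differ, as the comment says.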
When I attempt to picture myself in a state of 'no moral wrongs', I get myself as I am. Largely, I don't act morally out of a sense of rightness, but out of enlightened self-interest. If I think I will not be caught, I act basically according to whim.
If you successfully convinced me that there was no morality, I wouldn't rationally choose to do anything, I'd just sit there, since I wouldn't believe that I should do anything. I'd probably still meet my basic bodily needs when they became sufficiently demanding, since I wouldn't suppress them (I'd have no reason to), but beyond that, I'd do nothing.
There are several things wrong with this post. Firstly, I'm sure different people would react to being convinced their moral philosophy was wrong in different ways. Some might wail and scream and commit suicide. Some might search further and try to find a more convincing moral philosophy. Some would just go on living their lives, not caring.
Furthermore, the outcome would be different if you could simultaneously convince everyone in a society, and give everyone the knowledge that everyone had been convinced. Perhaps the society would brea... (read more)
The existence of objective moral values seems to have been a topic in the discussion below. I would like to state my view on the matter, since it connects to the original article. I define objective moral values as moral values that exist independently of the existence of life.
I do not believe that any objective moral values exist and I usually argue as follows:
I ask three questions:
When did objective moral values come into existence?
Have we ever observed them or how can we observe them?
Do we need objective moral values to explain anything that we ... (read more)
The benefit of morality comes from the fact that brains are slow to come up with new ideas but quick to recall stored generalizations. If you can make useful rules and accurate generalizations by taking your time and considering possible hypotheticals ahead of time, then your behavior when you don't have time to be thoughtful will be based on what you want it to be based on, instead of television and things you've seen other monkeys doing.
Objective morality is a trick that people who come up with moralities that rely on co-operation play on people who can... (read more)
A modernized version (as of 2017) of the first part of this post: http://18.104.22.168/trolley-lw.png
More serious reply: depending when you encountered me, I'd be more boring in some ways, since a lot of what I spend my time doing is towards a moral end. All the things I've learned in life I learned from trying to live in a moral universe. I would never have gotten a degree, I did that virtually entirely for what I perceived to be reasons of altruism. Since I'm assuming here that everyone else will continue to live under the illusion that they are in suc... (read more)
I would be depressed and do nothing at all, as empirically verified.
Gotta have _some_ answer to "what is good".
How did I reconcile this? What is the right morality when everyone's morality differs?
Well, mine, of course. What else?
I don't believe in objective morality in the first place.
My moral system has only one axiom:
Maximise your utility.
If nothing were right, I'd still go on maximising my utility. I don't try to maximise my utility because I believe utility maximisation is some a priori "right" thing to do—I try to maximise my utility because I want to. Unless your proof changed my desires (in which case I don't know what I would do), I expect I would go on trying to maximise my utility.
There would actually be several changes:
I would stop being vegan.
I would stop donating money (note: I currently donate quite a lot of money for projects of "Effective altruism").
I would stop caring about Fairtrade.
I would stop feeling guilty about anything I did, and stop making any moral considerations about my future behaviour.
If others are overly friendly, I would fully abuse this for my advantage.
I might insult or punch strangers "for fun" if I'm pretty sure I will never see them again (and they don't seem like the ... (read more)
If you now believe that nothing is right, do the following:
I think after that I would just act like I normally do, as easily, without trying to do anything better. But yes, it would definitely not be a reason for me to change my behavior, to take some kind of active action.
I would probably end my life in that scenario. If nothing is right, and nothing is wrong, then there's simply no reason why I should care about anything, including myself.