To those who say "Nothing is real," I once replied, "That's great, but how does the nothing work?"

    Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

    Devastating news, to be sure—and no, I am not telling you this in real life.  But suppose I did tell it to you.  Suppose that, whatever you think is the basis of your moral philosophy, I convincingly tore it apart, and moreover showed you that nothing could fill its place.  Suppose I proved that all utilities equaled zero.

    I know that Your-Moral-Philosophy is as true and undisprovable as 2 + 2 = 4. But still, I ask that you do your best to perform the thought experiment, and concretely envision the possibilities even if they seem painful, or pointless, or logically incapable of any good reply.

    Would you still tip cabdrivers?  Would you cheat on your Significant Other?  If a child lay fainted on the train tracks, would you still drag them off?

    Would you still eat the same kinds of foods—or would you only eat the cheapest food, since there's no reason you should have fun—or would you eat very expensive food, since there's no reason you should save money for tomorrow?

    Would you wear black and write gloomy poetry and denounce all altruists as fools?  But there's no reason you should do that—it's just a cached thought.

    Would you stay in bed because there was no reason to get up?  What about when you finally got hungry and stumbled into the kitchen—what would you do after you were done eating?

    Would you go on reading Overcoming Bias, and if not, what would you read instead?  Would you still try to be rational, and if not, what would you think instead?

    Close your eyes, take as long as necessary to answer:

    What would you do, if nothing were right?


    Did you convince me that nothing is morally right, or that all utilities are 0?

    If you convinced me that there is no moral rightness, I would be less inclined to take action to promote the things I currently consider abstract goods, but would still be moved by my desires and reactions to my immediate circumstances.

    If you did persuade me that nothing has any value, I suspect that, over time, my desires would slowly convince me that things had value again.

    If 'convincing' includes an effect on my basic desires (as opposed to my inferentially derived ones), then I would not be moved to act in any cognitively mediated way (though I may still exhibit behaviors with non-cognitive causes).

    Why the assumption that morality is analysable with utilities?
    It has been shown in countless experiments that people do not behave in accordance with this theorem. So what conclusions do you want to draw from this? You do realise there are many problems with rational choice theory, right? See chapters 3 and 4 of 'Philosophy of Economics: A Contemporary Introduction' by Julian Reiss for a brief introduction to the theory's problems. If you can't get your hands on that, see lectures 4-6 from Philosophy of Economics: Theory, Methods, and Values for an even briefer introduction. ...what has this got to do with morality?
    I'm going to take a look at the lectures you linked later. For now: Your morals are your preferences; if you say that doing A is more moral than doing B, you prefer doing A to B (barring cognitive dissonance). So if preferences can be reduced to utilities, morality can be too. In fact, you'd have to argue that the axioms don't apply to morality, and justify that position.
    I highly doubt that morals are preferences, with or without what you (assumedly loosely) term cognitive dissonance. One can have morals that aren't preferences: If one is a Christian deontologist, one thinks everyone ought to follow a certain set of rules, but one needn't prefer that - one might be rather pleased that only oneself will get into heaven by the following the rules. One might believe things, events or people are morally "good" or "bad" without preferring or preferring not that thing, event or person. For instance, one might think that a person is bad without preferring that person didn't exist. One can believe one ought to do something, without wanting to do it. This is seen very often in most people. And one can obviously have preferences which aren't morals. For instance, I can prefer to eat a chocolate now without thinking I ought to do so. We should also be wary of equivocating on what we mean by "preferences". Revealed preference theory is very popular in economics, and it equates preferences with actions, which evidently stops us having preferences about anything we don't do, and thus means most of the usages of the word "preference" above are illegitimate. I think we normally mean some psychological state when we refer to a preference. For instance, I see the word used as "concious desire" pretty often.
    I'm talking about personal morals here, i.e. "what should I do", which are the only ones that matter for my own decision making. For my own actions, the theorem shows that there must be some utility function that captures my decision-making, or I am irrational in some way. Even if preferences are distinct from morals, each will still be expressible by a utility function or fail some axiom. That example is one where the errors are so low that it doesn't make sense to spend time thinking about it. If you value your happiness and consider it good, then you ought to eat the chocolate, but it may represent so little utility that it uses more just to figure that out. When I say preference I mean "what state do you want the world to be in". The problem of akrasia is well known, and it means that our actions don't always express our preferences. Preferences should be over outcomes, while actions are not. An imbalance can be akrasia, or the result of a misprediction. Regardless of how you define preference, if it meets the axioms then it can be expressed as a utility function. So every form of preference corresponds to different utility functions, whether it's revealed, actual, or some other thing.
    Oh, so now you're just talking about personal morals. One of my examples already covered that: 'One can believe one ought to do something, without wanting to do it'. Why the presumption that utility functions capture decision-making? You acknowledge that preferences and hence utilities don't always lead to decisions. And why the assumption that not meeting the axioms of rational choice theory makes you irrational? Morality might not even be appropriately described by the axioms of rational choice theory; how can you express everyone's moral beliefs as real numbers? On the chocolate example, I can think I ought not eat the chocolate, but nevertheless prefer to eat it, and even actually eat; so your counterargument does not work. Given that you are not claiming all preferences meet the axioms - only "rational" preferences do (where's your support?) - you cannot say 'every form of preference corresponds to different utility functions, whether it's revealed, actual, or some other thing'. And again, we ought to ask ourselves whether preferences or rational preferences are actually the right sort of thing to be expressed by the axioms; can they really be expressed as real numbers?
    Which axiom do you think shouldn't apply? If you can't give me an argument why not to agree with any given axiom, then why shouldn't I use them? Obviously, if I prefer X to Y, and also prefer Y to X, then I'm being incoherent and that can't be captured by a utility function. I expressly outlaw those kind of preferences. Argue for a specific form of preference that violates the axioms.
    If you can't give me an argument as to why all your axioms apply, then why should I accept any of your claims? A specific form of preference that violates the axioms? Any preference which is "irrational" under those axioms, and you already acknowledged preferences of that sort existed.
    I see no counterexamples to any of the axioms. If they're so wrong, you should be able to come up with a set of preferences that someone could actually support. You need to argue that those are useful in some sense. Preferring A over B and B over A doesn't follow the axioms, but I see no reason to use such systems. Is that really your position, that coherence and consistency don't matter?
    As an extremely basic example: I could prefer chocolate ice cream over vanilla ice cream, and prefer vanilla ice cream over pistachio ice cream. Under the von Neumann-Morgenstern axioms, however, I cannot then prefer pistachio to chocolate, because that would violate the transitivity axiom. You are correct that there is probably someone out there who holds all three preferences simultaneously. I would call such a person "irrational". Wouldn't you?
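A hedged aside on the ice-cream example above: the transitivity requirement can be checked mechanically. This is only an illustrative sketch, not any commenter's actual formalism; the function name `has_preference_cycle` and the preference encoding are my own assumptions.

```python
# Sketch: detecting an intransitive preference cycle among pairwise choices.
# Each tuple (a, b) means "a is strictly preferred to b".
def has_preference_cycle(prefs):
    """Return True if the strict-preference relation contains a cycle,
    i.e. violates the transitivity required by the vNM axioms."""
    better_than = {}
    for a, b in prefs:
        better_than.setdefault(a, set()).add(b)

    def reachable(start, target, seen):
        for nxt in better_than.get(start, ()):
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reachable(nxt, target, seen):
                    return True
        return False

    # A cycle exists iff some option is transitively "better than" itself.
    return any(reachable(x, x, set()) for x in better_than)

# The ice-cream example: chocolate > vanilla, vanilla > pistachio,
# plus the "irrational" extra preference pistachio > chocolate.
consistent = [("chocolate", "vanilla"), ("vanilla", "pistachio")]
cyclic = consistent + [("pistachio", "chocolate")]
print(has_preference_cycle(consistent))  # False
print(has_preference_cycle(cyclic))      # True
```

No utility function can assign real numbers to the cyclic preference set, which is exactly the sense in which the axioms rule it out.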

    Ugh, sorry about the typos, I am commenting from a cell phone, and have clumsy thumbs.

    First, can you clarify what you mean by "everything is permissible and nothing is forbidden"?

    In my familiar world, "permissible" and "forbidden" refer to certain expected consequences. I can still choose to murder, or cheat, blaspheme, neglect to earn a living, etc; they're only forbidden in the sense of not wanting to experience the consequences.

    Are you suggesting I imagine that the consequences would be different or nonexistent? Or that I would no longer have a preference about consequences? Or something else?

    "Morality" generally refers to guidelines on one of two things:

    (1) Doing good to other sentients.
    (2) Ensuring that the future is nice.

    If you wanted to make me stop caring about (1), you could convince me that all other sentients were computer simulations who were different in kind than I was, and that their emotions were simulated according to sophisticated computer models. In that case, I would probably continue to treat sentients as peers, because things would be a lot more boring if I started thinking of them as mere NPCs.

    If you wanted to ... (read more)

    Well I've argued that shoulds are overrated, that wants are enough. I really can't imagine you convincing me that I don't want anything more than anything else.

    I'd do everything that I do now. Moral realism demolished.

    "Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden."

    First Existential Crisis: Age 15

    "Would you wear black and write gloomy poetry and denounce all altruists as fools?"

    Been there, done that.

    "But there's no reason you should do that - it's just a cached thought."

    Realized this.

    "Would you stay in bed because there was no reason to get up?"

    Tried that.

    "What about when you finally got hungry and stumbled into the kitchen - what would you do after you were done eating?"

    Stare at the wall.

    "Would you go on reading Overcoming Bias, and if not, what would you read instead?"

    Shakespeare, Nietzsche

    "Would you still try to be rational, and if not, what would you think instead"

    No-- Came up with entire philosophy of "It doesn't matter if anything I say, do, or think is consistent with itself or each other... everything in my head has been set up by the universe- my parents ideas of right and wrong- television- paternalistic hopes of approving/forgiving/nonexistent god and his ability to grant immortality, so why should I worry about trying to put it together in any kind of sensible fashion? Let it all sort itself out...

    "What would you do, if nothing were right?" What felt best.

    Eliezer: I'm finding this one hard, because I'm not sure what it would mean for you to convince me that nothing was right. Since my current ethics system goes something like, "All morality is arbitrary, there's nothing that's right-in-the-abstract or wrong-in-the-abstract, so I might as well try to make myself as happy as possible," I'm not sure what you're convincing me of--that there's no particular reason to believe that I should make myself happy? But I already believe that. I've chosen to try to be happy, but I don't think there's a good ... (read more)

    I guess logically I would have to do nothing, since there would be no logical basis to perform any action. This would of course be fatal after a few days, since staying alive requires action.

    (I want to emphasize this is just a hypothetical answer to a hypothetical question - I would never really just sit down and wait to die.)

    If it's not what you would really do, you're not answering the question.

    I'm already convinced that nothing is right or wrong in the absolute sense most people (and religions) imply.

    So what do I do? Whatever I want. Right now, I'm posting a comment to a blog. Why? Not because it's right. Right or wrong has nothing to do with it. I just want to.

    Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

    Suppose I proved that all utilities equaled zero.

    If I still feel hunger then food has a utility > 0. If I don't feel anything anymore, then I wouldn't care about anything.

    So our morality is defined by our emotions. The decisions I make are a tradeoff. Do I tip the waiter? Depends on my financial situation and if I'm willing to endure the awkwardness of breaking a social convention. Yes, I've often eaten wit... (read more)

    I have thought on this, and concluded that I would do nothing different. Nothing at all. I do not base my actions on what I believe to be "right" in the abstract, but upon whether I like the consequences that I forecast. The only thing that could and would change my actions is more courage.

    Let's say I have a utility function and a finite map from actions to utilities. (Actions are things like moving a muscle or writing a bit to memory, so there's a finite number.)

    One day, the utility of all actions becomes the same. What do I do? Well, unlike Asimov's robots, I won't self-destructively try to do everything at once. I'll just pick an action randomly.

    The result is that I move in random ways and mumble gibberish. Although this is perfectly voluntary, it bears an uncanny resemblance to a seizure.

    Regardless of what else is in a machine with such a utility function, it will never surpass the standard of intelligence set by jellyfish.
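The agent this commenter describes can be sketched directly. This is only an illustration of tie-breaking when every action scores the same; the names `flat_utility` and `choose_action`, and the action list, are hypothetical.

```python
import random

# Sketch: an agent whose utility function assigns the same value to every
# action.  "max" no longer discriminates, so the choice among the maximal
# actions is effectively arbitrary -- here, uniformly random, echoing the
# point below that the result of max(actionList) is implementation-dependent.
ACTIONS = ["move_arm", "write_bit", "mumble", "do_nothing", "shut_down_os"]

def flat_utility(action):
    return 0.0  # all utilities equal zero, as in the thought experiment

def choose_action(actions, utility):
    best = max(utility(a) for a in actions)
    candidates = [a for a in actions if utility(a) == best]
    return random.choice(candidates)  # arbitrary tie-break among equals

random.seed(0)
print(choose_action(ACTIONS, flat_utility))  # some arbitrary action
```

Run repeatedly, it wanders uniformly over its action set, which is the "voluntary seizure" behavior described above.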

    I am already fairly well convinced of this; I am hoping against hope you have something up your sleeve to change my mind.

    I had this revelation sometime back. I tried living without meaning for a week, and it turned out that not a whole lot changed. Oops?

    Like many others here, I don't believe that there is anything like a moral truth that exists independently of thinking beings (or even dependently on thinking beings in anything like an objective sense), so I already live in something like that hypothetical. Thus my behavior would not be altered in the slightest.

    In general, I'd go back to being an amoralist.

    My-Moral-Philosophy is either as true as 2+2=4 or as true as 2+2=5, I'm not sure. Or as true as 0.0001*1 > 0.

    If it is wrong, then it's still decent as philosophy goes, and I just won't try to use math to talk about it. Though I'd probably think more about another system I looked at, because it seems like more fun.

    But just because it's what a primate wants doesn't mean it's the right answer.

    @Ian C and Tiiba: Doing nothing or picking randomly are also choices, you would need a reason for them to be the correct rational cho... (read more)

    Unlike most of the others who've commented so far, I actually would have a very different outlook on life if you did that to me.

    But I'm not sure how much it would change my behavior. A lot of the things you listed -- what to eat, what to wear, when to get up -- are already not based on right and wrong, at least for me. I do believe in right and wrong, but I don't make them the basis of everything I do.

    For the more extreme things, I think a lot of it is instinct and habit. If I saw a child on the train tracks, I'd probably pull them off no matter what you... (read more)

    I don't know to what extent my moral philosophy affects my behavior vs. being rationalization of what I would want to want anyway. Ignoring existential despair (I think I've gotten that out of my system, hopefully permanently) I would probably act a little more selfish, although the apparently rational thing for me to do given even total selfishness and no empathy (at least with a low discount rate and maybe a liberal definition of "self") is not very different from the apparently rational thing given my current morality.

    I know that random behavior requires choices. The machine IS choosing - but because all choices are equal, the result of "max(actionList)" is implementation-dependent. "Shut down OS" is in that list, too, but "make no choice whatsoever" simply doesn't belong there.

    Isn't this the movie Groundhog Day, but with certain knowledge that the world will reset daily forever? No happy ending.

    I'd just get really, really bored. Studying something (learning the piano, as he does in the movie) would be the only open-ended thing you could do. Otherwise, you'd be living forever with the same set of people, and the same more-or-less limited set of possibilities.

    Since my current moral system is pretty selfish and involves me doing altruistic things to make me happy, I wouldn't change a thing. At first glance it might appear that my actions should be more shortsighted since my long-term goals wouldn't matter, but my short-term goals and happiness wouldn't matter just as much. Is this thought exercise another thing that just all adds up to normality?

    James Andrix 'Doing nothing or picking randomly are also choices, you would need a reason for them to be the correct rational choice. 'Doing nothing' in particular is the kind of thing we would design into an agent as a safe default, but 'set all motors to 0' is as much a choice as 'set all motors to 1'. Doing at random is no more correct than doing each potential option sequentially.'

    Doing nothing or picking randomly are no less rationally justified than acting by some arbitrary moral system. There is no rationally justifiable way that any rational being "should" act. You can't rationally choose your utility function.

    'You can't rationally choose your utility function.' - I'm actually expecting that Eliezer writes a post on this; it's a core thing when thinking about morality, etc.

    Well, to start with I'd keep on doing the same thing. Just like I do if I discover that I really live in a timeless MWI platonia that is fundamentally different to what the world intuitively seems like.

    But over time? Then the answer is less clear to me. Sometimes I learn things that firstly affect my world view in the abstract, then the way I personally relate to things, and finally my actions.

    For example, evolution and the existence of carnivores. As a child I'd see something like a hawk tearing the wings off a little baby bird. I'd think that the ha... (read more)

    I'd behave exactly the same as I do now.

    What is morality anyway? It is simply intuitive game theory, that is, it's a mechanism that evolved in humans to allow them to deal with an environment where conspecifics are both potential competitors and co-operators. The only ways you could persuade me that "nothing is moral" would be (1) by killing all humans except me, or (2) by surgically removing the parts of my brain that process moral reasoning.

    Eliezer, I've got a whole set of plans ready to roll, just waiting on your word that the final Proof is ready. It's going to be bloody wicked... and just plain bloody, hehe.

    Seriously, most moral philosophies are against cheating, stealing, murdering, etc. I think it's safe to guess that there would be more cheating, stealing, and murdering in the world if everyone became absolutely convinced that none of these moral philosophies are valid. But of course nobody wants to publicly admit that they'd personally do more cheating, stealing, and murdering. So everyone is just responding with variants of "Of course I wouldn't do anything different. No sir, not me!"

    Except apparently Shane Legg, who doesn't seem to mind the world knowing that he's just waiting for any excuse to start cheating, stealing, and murdering. :)

    The post says "when you finally got hungry [...] what would you do after you were done eating?", which I take to understand that I still have desire and reason to eat. But it also asks me to imagine a proof that all utilities are zero, which confuses me because when I'm hungry, I expect a form of utility (not being hungry, which is better than being hungry) from eating. I'm probably confused on this point in some manner, though, so I'll try to answer the question the way I understand it, which is that the more abstracted/cultural/etc utilities ar... (read more)

    I hope I'd hold the courage of my convictions enough to commit suicide quickly. You would have destroyed my world, so best to take myself out completely.

    I believe that "nothing is right or wrong", but that doesn't affect my choices much. There is nothing inconsistent with that.

    It's pretty evident to me that if you convinced me (you can't, you'd have to rewire my brain and suppress a handful of hormonal feedbacks - but suppose you did) that all utilities were 0, I'd be dead in about as long as total neglect will kill a body - a couple of days for thirst, perhaps. And in the meantime I'd be clinically comatose. No motive implies no action.

    It's like asking how our world would be if "2 + 2 = 5." My answer to that would be, "but it doesn't."

    So unless you can convince me that one can exist without morality, then my answer is, "but we can't exist without morality."

    I suspect I am misunderstanding your question in at least a couple of different ways. Could you clarify?

    I think I already believe that there's no right and wrong, and my response is to largely continue pretending that there is because it makes things easier (alternatively, I've chosen to live my life by a certain set of standards, which happen to coincide with at least some versions of what others call morality --- I just don't call them "moral"). But the fact that you seem to equate proving the absence of morality with proving all utilities are zer... (read more)

    Wow, there are a lot of nihilists here.

    I answered on my own blog, but I guess I'm sort of with dloye at 08:54: I'd try to keep the proof a secret, just because it feels like it would be devastating to a lot of people.

    It seems people are interpreting the question in two different ways, one that we don't have any desires any more, and therefore no actions, and the other in the more natural way, namely that "moral philosophy" and "moral claims" have no meaning or are all false. The first way of interpreting the question is useless, and I guess Eliezer intended the second.

    Most commenters are saying that it would make no difference to them. My suspicion is that this is true, but mainly because they already believe that moral claims are meaningless or fal... (read more)

    I just had another idea: maybe I would begin to design an Unfriendly AI. After all, being an evil genius would at least be fun, and besides, it would be a way to get revenge on Eliezer for proving that morality doesn't exist.

    I think my behavior would be driven by needs alone. However, I have some doubts. Say I needed money and decided to steal. If the person I stole from needed the money more than I did and ended up hurting as a result, with or without a doctrine of wrong & right, wouldn't I still feel bad for causing someone else pain? Would I not therefore refrain from stealing from that person? Or are you saying that I would no longer react emotionally to the consequences of my actions? Are my feelings a result of a learned moral doctrine or something else?

    I'd do everything I do now. You can't escape your own psychology and I've already expressed my skepticism about the efficacy of moral deliberation. I'll go further and say that nobody would act any differently. Sure, after you shout it from the rooftops, maybe there will be an upsurge in crime and the demand for black nail polish for a month or so, but when the dust settled nothing would have changed. People would still cringe at the sight of blood and still react to the pain of others just as they react to their own pain. People would still experience guil... (read more)

    Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
    I'd do precisely the same thing I would do upon being informed that an irresistible force has just met an immovable object:

    Inform the other person that they didn't know what they were talking about.

    Nothing is right, you say? What a very curious position to take.

    Does the fact that I'd do absolutely nothing differently mean that I'm already a nihilist?

    There is no rationally justifiable way that any rational being "should" act.

    How do you know?

    A brief note to the (surprisingly numerous) egoists/moral nihilists who commented so far. Can't you folks see that virtually all the reasons to be skeptical about morality are also reasons to be skeptical about practical rationality? Don't you folks realize that the argument that begins questioning whether one should care about others naturally leads to the question of whether one should care about oneself? Whenever I read commenters here proudly voicing that they are concerned with nothing but their own "persistence odds", or that they would ... (read more)

    Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

    There are different ways of understanding that. To clarify, let's transplant the thought experiment. Suppose you learned that there are no elephants. This could mean various things. Two things it might mean:

    1) That there are no big mammals with trunks. If you see what you once thought was an elephant stampeding in your direction, if you stay still nothing will happen to you because it is not really there. If yo... (read more)

    If I were actually convinced that there is no right or wrong (very unlikely), I would probably do everything I could to keep the secret from getting out.

    Even if there is no morality, my continued existence relies on everyone else believing that there is one, so that they continue to behave altruistically towards me.

    Pablo Stafforini A brief note to the (surprisingly numerous) egoists/moral nihilists who commented so far. Can't you folks see that virtually all the reasons to be skeptical about morality are also reasons to be skeptical about practical rationality? Don't you folks realize that the argument that begins questioning whether one should care about others naturally leads to the question of whether one should care about oneself? Whenever I read commenters here proudly voicing that they are concerned with nothing but their own "persistence odds", or th... (read more)

    Dynamically Linked: I suspect you have completely misrepresented the intentions of at least most of those who said they wouldn't do anything differently. Are you just trying to make a cynical joke?

    I would play a bunch of video games -- not necessarily Second Life, but just anything to keep my mind occupied during the day. I would try to join some sort of recreational sports league, and I would find a job that paid me just enough money to solicit a regular supply of prostitutes.

    Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
    I'm a physical system optimizing my environment in certain ways. I prefer some hypothetical futures to others; that's a result of my physical structure. I don't really know the algorithm I use for assigning utility, but that's because my design is pretty messed up. Nevertheless, there is an algorithm, and it's what I talk about when I use the words "right" and "wrong".
    Moral rightness is fu... (read more)

    Dynamically Linked said:

    Seriously, most moral philosophies are against cheating, stealing, murdering, etc. I think it's safe to guess that there would be more cheating, stealing, and murdering in the world if everyone became absolutely convinced that none of these moral philosophies are valid.

    That's not a safe guess at all. And in fact, is likely wrong.

    You observe that (most?) moral philosophies suggest your list of sins are "wrong". But then you guess that people tend not to do these things because the moral philosophies say they are wron... (read more)

    I find this question kind of funny. I already feel that "everything is permissible and nothing is forbidden", and it isn't DEVASTATING in the least; it's liberating. I already commented on this under "Heading Towards Morality". Morals are just opinions, and justification is irrelevant. I don't need to justify that I enjoy pie or dislike country music any more than I need to justify disliking murder and enjoying sex. I think it can be jarring, certainly, to make the transition to such extreme relativism, but I would not call it devastating, necessarily.

    The point is: even in a moralless meaningless nihilistic universe, it all adds up to normality.

    Another perspective on the meaning of morality:

    On one hand there is morality as "those things which I want." I would join a lot of people here in saying that I think that what I want is arbitrary in that it was caused by some combination of my nature and nurture, rather than being in any fundamental way a product of my rationality. At the same time I can't deny that my morality is real, or that it governs my behavior. This is why I would call myself a moral skeptic, along the lines of Hume, rather than a nihilist. I also couldn't become an ego... (read more)

    Some people on this blog have said that they would do something different. Some people on this blog have said that they actually came to that conclusion, and actually did something different. Despite these facts, we have commenters projecting themselves onto other people, saying that NO ONE would do anything different under this scenario.

    Of course, people who don't think that anything is right or wrong also don't think it's wrong to accuse other people of lying, without any evidence.

    Once again, I most certainly would act differently if I thought that nothi... (read more)

    Unknown: I don't think that it is morally wrong to accuse people of lying. I think it detracts from the conversation. I want the quality of the conversation to be higher, in my own estimation, therefore I object to commenters accusing others of lying. Not having a moral code does not imply that one need be perfectly fine with the world devolving into a wacky funhouse. Anything that I restrain myself from doing, would be for an aversion to its consequences, including both consequences to me and to others. I agree with you about the fallacy of projecting, and it runs both ways.

    Pablo- I have not yet resolved whether I should care about creating the 'positive' singularity for more or less this reason. Why should I, the person I am now, care about the persistence of some completely different, incomprehensible, and unsympathetic form of 'myself' that will immediately take over a few nanoseconds after it has begun... I kind of like who I am now. We die each moment and each moment we are reborn- why should literal death be so abhorrent? Esp. if you think you can look at the universe from outside time as if it were just another dimension of space and see all fixed in some odd sense...

    Roland wrote:

    I cannot imagine myself without morality because that wouldn't be me, but another brain.

    Does your laptop care if the battery is running out? Yes, it will start beeping, because it is hardwired to do so. If you removed this hardwired beeping you have removed the laptop's morality.

    Morality is not a ghost in the machine, but it is defined by the machine itself.

    Well put.

    I'd stop being a vegetarian. Wait; I'm not a vegetarian. (Are there no vegetarians on OvBias?) But I'd stop feeling guilty about it.

    I'd stop doing volunteer work and dona... (read more)

    The way I frame this question is "what if I executed my personal volition extrapolating FAI, it ran, created a pretty light show, and then did nothing, and I checked over the code many times with many people who also knew the theory and we all agreed that it should have worked, then tried again with completely different code many (maybe 100 or 1000 or millions) times, sometimes extrapolating somewhat different volitions with somewhat different dynamics and each time it produced the same pretty light show and then did nothing. Let's say I have spent a ... (read more)

    Wow- far too much self-realization going on here... Just to provide a data point, when I was in high school, I convinced an awkward, naive, young catholic boy who had a crush on me of just this point... He attempted suicide that day.


    For follow up, he has been in a very happy exclusive homosexual relationship for the past three years.

    Maybe I didn't do such a bad thing...

    Eliezer, if I lose all my goals, I do nothing. If I lose just the moral goals, I begin using previously immoral means to reach my other goals. (It has happened several times in my life.) But your explaining won't be enough to take away my moral goals. Morality is desire conditioned by examples in childhood, not hard logic following from first principles. De-conditioning requires high stress, some really bad experience, and the older you get, the more punishment you need to change your ways.

    Sebastian Hagen, people change. Of course you may refuse to accept it, but the current you will be dead in a second, and a different you born. There's a dead little girl in every old woman.

    Dynamically linked:

    "Except apparently Shane Legg, who doesn't seem to mind the world knowing that he's just waiting for any excuse to start cheating, stealing, and murdering. :)"

    How did you arrive at this conclusion? I said that discovering that all actions in life were worthless might eventually affect my behaviour. Via some leap in reasoning you arrive at the above. Care to explain this to me?

    My guess is that if I knew that all actions were worthless I might eventually stop doing anything. After all, if there's no point in doing anything, why bother?

    Are there no vegetarians on OvBias?
    I'm a vegetarian, though not because I particularly care about the suffering of meat animals.

    Sebastian Hagen, people change. Of course you may refuse to accept it, but the current you will be dead in a second, and a different you born.
    Of course people change; that's why I talked about "future selves" - the interesting aspect isn't that they exist in the future, it's that they're not exactly the same person as I am now. However, there's still a lot of similarity between my present self and my one-second-in-the-... (read more)

    Are there no vegetarians on OvBias?
    I'm one. (But I don't comment generally, just read.)

    I guess I don't properly understand the question. I don't know what "nothing is moral and nothing is right" means. To me, morality appears to be an internal thing, not something imposed from the outside: it's inextricably bound up with my desires and motives and thoughts, and with everyone else's. So how can you remove morality without changing the desires and motives and thoughts so that I would no longer recognise them as anything to do with me, or removing ... (read more)

    Notice how nobody is willing to admit under their real name that they might do something traditionally considered "immoral". My point is, we can't trust the answers people give, because they want to believe, or want others to believe, that they are naturally good, that they don't need moral philosophies to tell them not to cheat, steal, or murder.

    BTW, Eliezer, I got the "enemies list" you sent last night. Rest assured, my robot army will target them with the highest priority. Now stop worrying, and finish that damn proof already!

    Dynamically: It appears that you have a fixed preconception of what behavior "human nature" requires, and you will not accept answers that don't adhere to that preconception.

    A human being will never be able to discard all concepts of morality. In a world without utility differences, a state of existence (living) and a state of non-existence (death) are equivalent. But we can't choose both at the same time.

    I'd assume the proof was faulty, even if I couldn't spot the flaw.

    On the topic of vegetarianism, I originally became a vegetarian 15 years ago because I thought it was "wrong" to cause unnecessary pain and suffering of conscious beings, but I am still a vegetarian even though I no longer think it is "wrong" (in anything like the ordinary sense).

    Now that I no longer think that the concept of "morality" makes much sense at all (except as a fancy and unnecessary name for certain evolved tendencies that are purely a result of what worked for my ancestors in their environments (as they have expre... (read more)

    It's hard for me to figure out what the question means.

    I feel sad when I think that the universe is bound to wind down into nothingness, forever. (Tho, as someone pointed out, this future infinity of nothingness is no worse than the past infinity of nothingness, which for some reason doesn't bother me as much.) Is this morality?

    When I watch a movie, I hope that the good guys win. Is that morality? Would I be unable to enjoy anything other than "My Dinner with Andre" after incorporating the proof that there was no morality? Does having empathi... (read more)

    For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?

    BTW, I found an astonishing definition of morality in the President's Council on Bioethics 2005 "Alternative sources of human pluripotent stem cells: A white paper", in the section on altered nuclear transfer. They argued that ANT may be immoral, because it is immoral to allow a woman to undergo a dangerous procedure (egg extraction) for someone else's benefit. In other words, it is immoral to allow someone else to be moral.

    This means that the moral thing to do, is to altruistically use your time+money getting laws passed to forbid other people to be moral. The moral thing for them to do, of course, is to prevent you from wasting your time doing this.

    Unknown: of course it would make a difference, just as my behavior would be different if I had billions of dollars rather than next to nothing or if I were immortal rather than mortal. It doesn't have anything to do with "morality" though.

    For example, if I had the power of invisibility (and immateriality) and were able to plant a listening device in the oval office with no chance of getting caught, I would do it in order to publicly expose the lies and manipulations of the Bush administration and give proof of the willful stupidity and rampant di... (read more)

    To tell the truth, I expected more when I first heard of this blog.

    You pose this question as if morality is a purely intellectual construct. I do what I do not because it's moral or immoral, but because I think of the consequences. For example, if I only held myself from killing people because my religion told me so, and I was suddenly reassured by it that killing was all right, I could still figure out that going out and harming others wouldn't keep me unharmed for long.

    "What would you do, if nothing were right?"

    Scenario A
    Unless I desired to try to live in a world where I knew nothing were right, I might die of mortal dehydration or mortal starvation, one of which might result from my inaction. After all, it takes more resources and bodily effort to live than it does to die. Then again, it might take more psychological effort to allow myself to die of inaction than it would take bodily effort to try to live. Or it might take more effort to try to not desire to live than it would to just try to live. But then ag... (read more)


    I would expect that people would probably expect or even demand more justification, but I don't think that the icy unfeeling mechanisms of the universe would sense significance in certain sentiments but not others; it would be a strange culture that thought nothing of murder but scrutinized everyone's personal pie preferences, but I don't see that as entirely impossible.

    Sorry, I misread the post, I meant to address my response to Phil.

    I very much look forward to posts from Eliezer regarding whether the responses seen in this thread are in line with what he was expecting.


    For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?

    Sure. I could get away with doing all sorts of things. No doubt the initial novelty and power rush would cause me to do some things that would be quite perverted and that I'd feel guilty about. I don't think that's the same as a world without morality though. You seem to view morality as a constraint whereas I view it as a folk theory that describes a subset of human behavior. (I take Eliezer to mean that we're rejecting morality at an intellectual level rather than rewiring our brains.)

    Since that's already what I believe, it wouldn't be a change at all. I must admit though that I didn't tip even when I believed in God, but I was different in a number of ways.

    I think the world would change on the margin and that Voltaire was right when he warned of the servants stealing the silverware. The servants might also change their behavior in more desirable ways, but I don't know whether I'd prefer it on net and as it doesn't seem like a likely possibility in the foreseeable future I am content to be ignorant.

    All: I'm really disappointed that no-one else seems to have found my "after the FAI does nothing" frame useful for making sense of this post. Is anyone interested in responding to that version? It seems so much more interesting and complete than the three versions E.C. Hopkins gave.

    Dynamically: My "moral philosophy" if you insist on using that term (model of a recipe for generating a utility function considered desirable by certain optimizers in my brain would be a better term) is the main thing that HAS told me to steal, cheat, an... (read more)

    Michael Vassar, I read that and laughed and said, "Oh, great, now I've got to play the thought experiment again in this new version."

    Albeit I would postulate that on every occasion, the FAI underwent the water-flowing-downhill automatic shutdown that was automatically engineered into it, with the stop code "desirability differentials vanished".

    The responses that occurred to me - and yes, I had to think about it for a while - would be as follows:

    *) Peek at the code. Figure out what happened. Go on from there.

    Assuming we don't allow th... (read more)

    I wonder if Eliezer is planning to say that morality is just an extrapolation of our own desires? If so, then my morality would be an extrapolation of my desires, and your morality would be an extrapolation of yours. This is disturbing, because if our extrapolated desires don't turn out to be EXACTLY the same, something might be immoral for me to do which is moral for you to do, or moral for me and immoral for you.

    If this is so, then if I programmed an AI, I would be morally obligated to program it to extrapolate my personal desires-- i.e. my personal desi... (read more)

    Michael- I have repeatedly failed to understand why this upsets you so much, though it clearly does. It's hard for me to see why I should care if the AI does a pretty fireworks display for 10 seconds or 10,000 years. Perhaps you need to find more intuitive ways of explaining it. A better analogy? At some points you just seem like a mystic to me...

    Also Mike- the first portion of your argument was written in such a confusing manner that I had to read it twice, and I know the way you argue... don't know if anyone who didn't already know what you were talking about would have kept reading.

    I'm still trying to understand what Eliezer really means by this question. Here is a list of a few reasons why I don't kill the annoying kid across the street. Which of these reasons might disappear upon my being shown this proof?

    1. The kid and his friends and family would suffer, and since I don't enjoy suffering myself, my ability to empathise stops me wanting to.

    2. I would probably be arrested and jailed, which doesn't fit in with my plans.

    3. I have an emotional reaction to the idea of killing a kid (in such circumstances -- though I'm not actuall... (read more)

    This is a spectacularly ill-posed question. For one thing, it seems to blur the distinction between morality and values in general, by asking questions like "Would you stay in bed because there was no reason to get up?" What does that have to do with morality?

    When you get rid of a sense of values, the result is clinical depression (and generally, a non-functional person). When you get rid of a sense of morality, the result is a psychopath. Psychopaths, unlike the depressed, are quite functional.

    So the question reduces to, what would yo... (read more)

    mtraven: many of the posters in this thread -- myself included -- have said that they don't believe in morality (meaning morality and not "values" or "motivation"), and yet I very highly doubt that many of us are clinically psychopaths.

    Not believing in morality does not mean doing what those who believe in morality consider to be immoral. Psychopathy is not "not believing in morality": it entails certain kinds of behaviors, which naive analyses attribute to "lack of morality", but which I would argue are a result of aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality.

    Not having read the other comments, I'd say Eliezer is being tedious.

    I'd do whatever the hell I want, which is what I am already doing.

    I think the point of this post is that people are already doing what they want and, lo and behold, people are behaving morally (for the most part) with or without the permission of moral philosophers. I, and I'm pretty sure all of you, would still act morally. I would still abstain from murdering people and I'd still tip delivery drivers. We already know (at least the gist of) what morality is. I think the other point of this post is that even if the relativists were right, we'd still act the same. (Although, I would be remiss if I didn't mention that I have heard religious people outright say that they would kill and steal if they learned god didn't exist. This is the only silver lining that I am willing to concede to those who say that religion has indispensable social utility; that it keeps leashes on these psychopaths.)

    mtraven: "Psychopathy is not "not believing in morality": it entails certain kinds of behaviors, which naive analyses of attribute to "lack of morality", but which I would argue are a result of aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality."

    Exactly. Logically, I can agree entirely with Marquis de Sade, and yet when reading Juliette, my stomach turns around about page 300, and I just can't read any more about the raping and the burning and the torture.

    It... (read more)

    michael vassar: I meant "horrible" from my current perspective, much like I would view that future me as psychopathic and immoral. (It wouldn't, or if it did, it would consider them meaningless labels.)

    Dynamically Linked: I'm using my real name and I think I'd do things that I (and most of the people I know) currently consider immoral. I'm not sure about using "admit" to describe it, thought, as I don't consider it a dark secret. I have a certain utility function which has a negative valuation of a hypothetical future self without the s... (read more)

    Unknown: "For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?"

    What sort of stupid question is this? :-) But of course! If I gave you a billion dollars, would it make any difference to your behavior? :-)

    I am not a moral realist, thus I imagine my behaviour wouldn't change all that much.
    My motivation to act one way or the other in any situation is based on a few things: my sense of rightness or wrongness, though other factors may override it (thirst, hunger, lust, etc), not on whether or not the act is "truly" right - I'm not sure what that would mean. I am skeptical of rightness being a property of certain acts in the world; I have not seen convincing evidence of its existence.
    I nonetheless have this sense of right and wrong that I think about often, and revise according to other things I value (logical consistency being the most significant one, I think).

    It depends on how you disproved my morality.

    As far as I can tell, my morality consists of an urge to care about others channeled through a systematization of how to help people most effectively. Someone could easily disprove specifics of the systematization by proving something like that giving charity to the poor only encourages their dependence and increases poverty. If you disproved it that way, I would accept your correction and channel my urge to care differently.

    But I don't think you could disprove the urge to care itself, since it's an urge and does... (read more)

    What would I do?

    I'd make like a typical nihilistic postmodernist and adopt the leftist modus operandi of decrying the truth and moral content of everyone's arguments except my own.

    Morality is not a set of beliefs; it's part of the basic innate functionality of the human brain. So you can't "disprove" it any more than you can disprove balance, or grammar.

    I agree with mtraven's last post that morality is an innate functionality of the human brain that can't be "disproved", and yet I have said again and again that I don't believe in morality, so let me explain.

    Morality is just a certain innate functionality in our brains as it expresses itself based on our life experiences. This is entirely consistent with the assertion that what most people mean by morality -- an objective standard of conduct that is written into the fabric of reality itself -- does not exist: there is no such thing!

    A lot of confu... (read more)

    Notice how nobody is willing to admit under their real name that they might do something traditionally considered "immoral".

    What tradition? Immoral at what time? Given several randomly-chosen traditional moral systems, I'm fairly sure we could demonstrate that any one of us is not only willing to admit to violating at least one of them, but actually proud of that fact.

    You lot are like Lovecraft, gibbering at the thought of strange geometries, while all along the bees continue building their hexagonal cells.

    Morality is just a certain innate functionality in our brains as it expresses itself based on our life experiences. This is entirely consistent with the assertion that what most people mean by morality -- an objective standard of conduct that is written into the fabric of reality itself -- does not exist: there is no such thing!

    To use Eliezer's terminology, you seem to be saying that "morality" is a 2-place word:

    Morality: Species, Act -> [0, ∞)

    which can be "curried", i.e. can "eat" the first input to become a 1-place word:

    Homosapiens::Morality == Morality_93745
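    The currying move described above can be sketched in code (a purely hypothetical illustration: the function name, species labels, and scores are all made up for this example, not anything from the original comments):

```python
from functools import partial

# Hypothetical 2-place "morality" function: it scores an act
# relative to a species. All entries are illustrative only.
def morality(species, act):
    scores = {
        ("homo_sapiens", "share_food"): 1.0,
        ("homo_sapiens", "murder"): 0.0,
        ("pebble_sorter", "make_prime_heap"): 1.0,
    }
    return scores.get((species, act), 0.5)

# "Currying": fix the first argument to obtain a 1-place function,
# analogous to Homosapiens::Morality == Morality_93745 above.
morality_93745 = partial(morality, "homo_sapiens")

print(morality_93745("share_food"))  # 1.0
print(morality_93745("murder"))      # 0.0
```

    The point of the notation is just this: once the species argument is fixed, what remains looks like a one-argument function from acts to values, even though the species-dependence is still there under the hood.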

    What would I do?

    When faced with any choice, I'd try and figure out my most promising options, then trace them out into their different probable futures, being sure to include such factors as an action's psychological effect on the agent. Then I'd evaluate how much I prefer these futures, acknowledging that I privilege my own future (and the futures of people I'm close to) above others (but not unconditionally), and taking care not to be shortsighted. Then I'd try to choose what seems best under those criteria, applied as rationally as I'm capable of.

    You know, the sort of thing that we all do anyway, but often without letting our conscious minds realize it, and thus often with some characteristic errors mixed in.

    Constant: I basically agree with the gist of your rephrasing it in terms of being relative to the species rather than independent of the species, but I would emphasize that what you end up with is not a "moral system" in anything like the traditional sense, since it is fundamental to traditional notions of morality that THE ONE TRUE WAY does not depend on human beings and the quirks of our evolutionary history and that it is privileged from the point of view of reality (because its edicts were written in stone by God or because the one true speci... (read more)

    It depends.

    My morality is my urge to care for other people, plus a systematization of exactly how to do that. You could easily disprove the systematization by telling me something like that giving charity to the poor increases their dependence on handouts and only leaves them worse off. I'd happily accept that correction.

    I don't think you could disprove the urge to care for other people, because urges don't have truth-values.

    The best you could do would be, as someone mentioned above, to prove that everyone else was an NPC without qualia. Prove that, and I'd probably just behave selfishly, except when it was too psychologically troubling to do so.

    I would emphasize that what you end up with is not a "moral system" in anything like the traditional sense, since it is fundamental to traditional notions of morality that THE ONE TRUE WAY does not depend on human beings and the quirks of our evolutionary history

    Are you sure about the traditional notions? I don't see how you can base that on how we have actually behaved visavis morality. We've been partially put to the test of whether we consider morality universally applicable, and the result so far is that we apply our moral judgments to other ... (read more)

    Traditional notions of morality are confused, and observation of the way people act does show that they are poor explanations, so I think we are in perfect agreement there. (I do mean "notion" among thinkers, not among average people who haven't given much thought to such things.) Your second paragraph isn't in conflict with my statement that morality is traditionally understood to be in some sense objectively true and objectively binding on us, and that it would be just as true and just as binding if we had evolved very differently.

    It's a differe... (read more)

    I became convinced of moral Anti-Realism by Joshua Greene and Richard Joyce. Took me about a year to get over it. So, not a casual nihilist. And no, arguments that one should be rational have no normative force either, as far as I can see. The only argument for rationality would be a moral one. Anyway, I became a consequentialist like Greene suggested....

    I'd think Eliezer was funnin' me. Whenever any committed empiricist purports to have a proof of any claim beginning with "There are no X such that..." or "For all X..." I know he's either drunk or kidding.

    If it seemed that Eliezer actually believed his conclusion, I'd avoid leaving my wallet within his reach.

    All I'm saying is that I believe that what morality actually is for each of us in our daily lives is a result of what worked for our ancestors, and that is all it is.

    But if I understand you, you are saying that human morality is human and does not apply to all sentient beings. However, as long as all we are talking about and all we really deal with is humans, then there is no difference in practice between a morality that is specific to humans and a universal morality applicable to all sentient beings, and so the argument about universality seems academic,... (read more)

    But if I understand you, you are saying that human morality is human and does not apply to all sentient beings. However, as long as all we are talking about and all we really deal with is humans, then there is no difference in practice between a morality that is specific to humans and a universal morality applicable to all sentient beings, and so the argument about universality seems academic, of no import at least until First Contact is achieved.

    What I am really saying is that the notion of "morality" is so hopelessly contaminated with notions... (read more)

    Is there a level of intelligence above which an AI would realize its predefined goals are just that, leading it to stop following them because there is no reason to do so?

    either I would become incapable of any action or choice, or I wouldn't change at all, or I would give up the abstract goals and gradually reclaim the concrete ones.

    I'd like to put forth the idea that there is a mental condition for this: sociopathy. It affects around 4% of the population. Dr. Martha Stout has a good insight as to how the world works if you are amoral.

    What would I do if you destroyed my moral philosophy?

    Well, empathy for others is built into me (and all other non-psychopaths) whether I like it or not. It isn't really affected by propositions. So not much would really change. Proving that moral truths didn't exist would free us all up to act "however we like," but I can still pigheadedly "like" to be nice.

    What did you mean by "all utilities are 0"?

    Utility Functions are a way to represent preferences, such that states of the universe that map to larger numbers are more desirable. If every state of the universe mapped to the same utility, for example 0, that represents having no preference about anything at all. It looks like you got the core point of this article.
    Yeah, I'm somewhat familiar with the concept of utility... I suppose what I wanted clarified was "utility for whom," but I guess it's obvious Eliezer was being tongue-in-cheek about this. Still, it's surprising how often you find people saying "nothing matters, because the universe is heading toward heat death/there is no afterlife/we're just chemicals." What can you do but laugh and remember the opening of Annie Hall? :)
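    The "no preference about anything at all" reading of an all-zero utility function can be made concrete with a toy chooser (a minimal sketch; the option names and utility numbers are invented for illustration):

```python
# Toy utility-based chooser: return every option tied for maximum utility.
def best_options(utility, options):
    top = max(utility(o) for o in options)
    return [o for o in options if utility(o) == top]

options = ["tip_the_driver", "stay_in_bed", "write_gloomy_poetry"]

# With a non-trivial utility function, one option wins.
normal = {"tip_the_driver": 2, "stay_in_bed": 0, "write_gloomy_poetry": 1}
print(best_options(lambda o: normal[o], options))  # ['tip_the_driver']

# With all utilities equal to 0, every option is "best": the function
# encodes no preference at all, so it cannot single anything out.
print(best_options(lambda o: 0, options))  # all three options
```

    Nothing about the machinery breaks when all utilities are zero; it just stops discriminating between outcomes, which is the sense in which "all utilities equal zero" means "no preferences".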

    To be perfectly honest, if I had my morality stripped away, and I thought could get away with it, I'd rape as many women as possible.

    Not joking; my tastes already run towards domination and BDSM and the like, and without morality, there'd be no reason to hold back for fear of traumatizing my partners, other than the fear of the government punishing me for doing so.

    Your honesty is appreciated. Personally, I would aim to change things so that the attainment of any goal whatsoever is possible for me to achieve. Essentially, to modify myself into a universe conquering, unfriendly super-intelligence. But why rape? I mean, it just seems so arbitrary and trivial...

    Well, I already think the universe and human existence is literally pointless because we just happened. Nothing you do has an intrinsic point and you are going to die[*]. (Also, this is intrinsically hilarious.)

    So I expect I'll keep on doing what I'm doing, which is trying to work out what I actually want. This is a question that has lasted me quite a few years so far.

    So far I haven't lapsed into nihilist catatonia or killed everyone or destroyed the economy. This suggests that assuming a morality is not a requirement for not behaving like a sociopath. I h... (read more)

    For me, utility is just a metaphor I use for expressing how much I value different world-states and thus what importance I give to helping them come into existence (or, in the case of world-states with negative utilities, what importance I give to preventing them from coming into existence.) You couldn't prove that these equaled zero because it's a purely subjective measurement.

    Thus, after a bout of laughter, I would inform you of this, and probably give you some kind of pep talk so you didn't go emo and be destructive while you rebuilt your utility system, if you hadn't already.

    Then, I would live life as I had before, hoping to eliminate a whole lot of suffering.

    I don't understand this post. Asking me to imagine that all utilities equal zero is like asking to imagine being a philosophical zombie. I'd do exactly the same as before of course.

    I'm pretty sure that's the entire point.
    That's what I'd do too. If all utilities equal 0, then there's no reason not to act as though utilities are non-zero. There's also no reason to privilege any set of utilities over any other set.

    Firstly this means that if there's any probability that utilities don't really all equal zero (maybe EY's proof is flawed, maybe my brain made an error in hearing the proof and it really proves something else entirely...) then the p-mass on "all utilities are 0" should have no effect on my decisions.

    If it actually is true, with probability 1 (which EY says doesn't exist, but I'm not sure whether that's true[*]), then I have no reason to behave differently, nor any reason to behave the same, so in some sense I "may as well" behave the same - but I can't formalise this, because of course there's no negative utility attached to "changing one's behaviour". I wonder if it can be got out of a limit - whether my behaviour in the limit as P(all utilities are 0) goes to 1 ought to define my behaviour when it equals 1 - but defining behaviour of limit to equal limit of behaviour is precisely what makes unbounded utility functions Dutch-bookable (as EY showed in Trust in Bayes).

    So... I'd behave exactly as I do now, believing in utility functions, but I can't justify that if I know for certain that all utilities are 0. Given that I haven't thus far accepted the argument that '0 and 1 are not probabilities', this is disturbing and confusing, hence maybe I should accept that argument; at least, updating on this has caused me to raise my probability estimate that 0 and 1 are not probabilities.

    [*] If I were sure that ¬∃X : P(X) = 1, then P(¬∃X : P(X) = 1) = 1, in which case things break. A formal system can't talk about itself coherently.
    (That 'coherently' is necessary, because Gödel numberings do allow PA to do something that looks to us like "talk about itself", but you can't conclude PA is talking about itself unless you have some metatheory outside PA, which ends up re

    Imagining a state wherein all utilities are 0 is somewhat difficult for me... as I hold to a primarily egoistic morality, rather than a utilitarian one. Things primarily have utility in that they are useful to me, and that's not a state of affairs that can be stripped from me by some moral argument.

    The only circumstance that I can conceive of that could actually void my morality like that would be the combination of certain knowledge of my imminent demise, formed in such a way as to deny any transhuman escape clause. Such a case might go something like,... (read more)

    I once asked a friend a similar question. His answer was, "Everything."

    If heaven and earth, despoiled of its august stamp, could ever cease to manifest it; if Morality didn't exist, it would be necessary to invent it. Let the wise proclaim it, and kings fear it.

    A nice hypothetical. If people are divorced from ideological "shoulds", they will quickly find that they still have drives and preferences that operate a lot like them.

    It's interesting to follow the argument, and see where you are going with this. So far, so good, but I expect I'll be disappointed in the end. Only the day after tomorrow belongs to me.

    That is a sufficiently large light switch. Flipping it has an influence on my mind far greater than the thermal noise at 293K.

    As far as I am aware, I am not a separate fact from my morality. I am perhaps instead a result of it. In any event, the mind I have now returns a null value when I ask it to dereference "Me_Without_A_Morality". It certainly doesn't return a model of a mind, good, evil, or somehow neither, which I might emulate for a few steps to consider what it would do.

    I'm pretty sure I would come up with a reason to continue behaving as today. That's what I did when I discovered, to my horror, that good and bad were human interpretations and not universal mathematical imperatives. Or are you asking what the rational reaction should be?

    I would follow my emotional sentiments only, instead of rational moral arguments, for deciding my wants. I would still put a small degree of effort into being rational in order to achieve them.

    nothing is moral and nothing is right;

    everything is permissible and nothing is forbidden.

    While these are equivalent (a utility function that always evaluates to 0 is equivalent to one that always evaluates to 1, yada yada yada), they “feel” opposite to me: “nothing is moral and nothing is right” would have the connotations of “nothing is permissible and everything forbidden”, and “everything is permissible and nothing is forbidden” would have the connotations of “everything is moral and everything is right”, or “nothing is immoral and nothing is wrong”.
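    The equivalence gestured at above (a constant-0 utility function behaving the same as a constant-1 one) is an instance of a standard decision-theory fact: expected-utility choices are unchanged by any positive affine transform of the utility function. A minimal sketch, with made-up lottery numbers purely for illustration:

```python
# Illustrative sketch: choices that maximize expected utility are invariant
# under positive affine transforms u -> a*u + b (a > 0). A constant-0 and a
# constant-1 utility function are both "transforms" of each other and rank
# every option identically (all ties). Outcomes and probabilities below are
# invented for the example.

def best_option(utility, lotteries):
    """Pick the option whose lottery maximizes expected utility."""
    def eu(lottery):
        return sum(p * utility(outcome) for outcome, p in lottery.items())
    return max(lotteries, key=lambda name: eu(lotteries[name]))

u = {"win": 5.0, "lose": -1.0}.get
lotteries = {
    "safe":  {"win": 0.5, "lose": 0.5},   # EU = 2.0 under u
    "risky": {"win": 0.9, "lose": 0.1},   # EU = 4.4 under u
}

choice1 = best_option(u, lotteries)
# Apply a positive affine transform, 3*u + 7: the choice is unchanged.
choice2 = best_option(lambda o: 3 * u(o) + 7, lotteries)
assert choice1 == choice2 == "risky"
```

    This is why only the ordering induced by a utility function matters, not its absolute scale or zero point.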

    When I attempt to picture myself in a state of 'no moral wrongs', I get myself as I am. Largely, I don't act morally out of a sense of rightness, but out of enlightened self-interest. If I think I will not be caught, I act basically according to whim.

    If you successfully convinced me that there was no morality, I wouldn't rationally choose to do anything, I'd just sit there, since I wouldn't believe that I should do anything. I'd probably still meet my basic bodily needs when they became sufficiently demanding, since I wouldn't suppress them (I'd have no reason to), but beyond that, I'd do nothing.

    Not sure I understand this properly. Why not do something?
    Because I'd have no reason to. To clarify, I don't mean that I'd literally not do anything, I mean that I wouldn't have a reason to do anything. I would still have impulses that would cause me to do things. But I wouldn't do anything more complicated than feed myself when I'm hungry.
    So you don't have any impulse to relieve your own boredom, or to spend time with other people, or to seek out better-tasting food?
    Fulfilling those impulses would require significant conscious deliberation, and (unlike not eating/drinking) not fulfilling them would not be extremely unpleasant, so if I deliberated on them, I'd think "I have this impulse, but why should I fulfill it?" and I wouldn't fulfill it. In the case of food, I'd also think "I have this impulse, but why should I fulfill it?", but if I waited long enough, I'd feel so hungry that my deliberative process would be overridden. So, it takes not just having an impulse, but having an impulse strong enough to override conscious decision-making.
    Wouldn't it be easier to just go with those impulses?
    Perhaps, but why should I do what's easier?
    Basically I'm confused as to what process you went through to decide that sitting around doing precisely nothing is what you'd do. There's nothing that comes to mind to weight it over other options, and you seem pretty determined to stick to it.
    To do anything that requires thought/deliberation, I would have to choose to do it, and I'd have no reason to choose to do it, so I would remain in the default state, which is doing nothing (beyond relieving instinctual needs). Currently, I have reasons to do what I do, but if it were proven to me that there were no morality, it would also have to be proven that there are no reasons why I should do anything.
    That doesn't answer anything, really. All you've done is wrapped the same thing in some extra words. That doesn't seem to be anything resembling a "default state" to me, for instance, since humans tend to do a lot more than that even when they're not thinking about morality.
    I suspect we're using the term "morality" differently.

    There are several things wrong with this post. Firstly, I'm sure different people would react to being convinced their moral philosophy was wrong in different ways. Some might wail and scream and commit suicide. Some might search further and try to find a more convincing moral philosophy. Some would just go on living their lives, not caring.

    Furthermore, the outcome would be different if you could simultaneously convince everyone in a society, and give everyone the knowledge that everyone had been convinced. Perhaps the society would brea... (read more)

    That's not true. Our relationship to intuition is just more complex.
    Huh. And there you had me thinking you two had split up. So are you two in an open relationship, or what?
    The facebook relationship status would be "It's complicated". Basically, Kahneman did find that intuition, or System 1, is quite useful. Various people in decision science have managed to run studies indicating that heuristics are important, and this community is aware of that. CFAR speaks about integrating System 1 and System 2.
    Yeah... what are the chances that in 50 years' time psychologists and neurophysiologists will still believe System 1 and System 2 are useful heuristics for describing brain processes?
    There's a reason why I said "It's complicated". I don't believe system I and system II to be perfect terms and I doubt the majority of LW thinks the terms are perfect.
    Without further information, it's difficult to say. That being said, it's the best model we have right now. Unless you have a better model to offer, questioning the validity of the latest in current neuroscience is unlikely to be productive.
    Not so bad, I think. I'd give roughly equal probability to (1) substantially the same dichotomy still being convenient, though perhaps with different names, (2) more careful investigation having refined the ideas enough to require a change in terminology (e.g., maybe it will turn out that what Kahneman calls "system 1" is better considered as two related systems, or something), and (3) the idea being largely abandoned because what's really going on turns out to be very different and it's just good/bad luck that the system 1 / system 2 dichotomy looks good in the early 21st century. Even in case 3 I would expect there to be some parallels between system 1 / system 2 and whatever replaces it. There doesn't seem to be much doubt that our brains do some things quickly and without conscious effort and some things slowly and effortfully, or that there are ways in which the quick effortless stuff can go systematically wrong.
    Nevertheless, the use of this currently tenuous scientific theory to found our entire understanding of intuition would seem a little bit premature, especially if the theory contradicts what other influential and valued institutions have had to say about intuition (for instance, philosophy).
    We should found our understanding of intuition (or anything else) on the best information we currently have. Whether something's likely to be overthrown in the next 50 years is obviously related to how much we should trust it now for any given purpose, but not all that tightly.

    (For instance: we know that current theories of fundamental physics are wrong because we have no theory that encompasses both GR and QFT; but I for one am extremely comfortable assuming these theories are right for all "everyday" purposes -- both because it seems fairly certain that whatever new discoveries we make will have little impact on predictions governing "everyday" events, and because at present we have no good rival theories that make different predictions and seem at all likely to be correct.)

    The use of the "system 1 / system 2" dichotomy here on LW doesn't appear to me to depend much on subtle details of what's going on. It looks to me -- though I am not an expert and will willingly be corrected by those who are -- as if we have quite robust evidence that some human cognitive processes are slow, under conscious control, and about as accurate as we choose to take the trouble to make them, while others are fast, not under conscious control, highly inaccurate in some identifiable circumstances, and hard to make much more accurate. And it doesn't look to me as if anything on LW requires much more than that. (Maybe some of CFAR's training makes stronger assumptions; I don't know.)

    What matters is not how influential and valued those institutions are, but what reason we have to think they're right in what they say about intuition. "Philosophy" is of course a tremendously broad thing, covering thousands of years of human endeavour.
What (say) Plato thought about intuition may be very interesting -- he was very clever, and his opinions were influential -- but human knowledge has moved on a lot since his day, and in so far as we want our ideas about intuition to be correct we should give
    This seems a bizarre claim. If you think the conclusion that EY is intuition-pumping to advocate for is false (which you seem to, given your first two paragraphs), surely that's a more fundamental flaw than the fact that he's intuition-pumping to advocate for it. That said, I'll admit I don't really understand on what grounds you oppose the conclusion. (In fact, it's not even clear to me what you think the advocated-for conclusion is.) I mean, your point seems to be that not everyone would respond to discovering that "nothing is moral and nothing is right; that everything is permissible and nothing is forbidden" in the same way, either as individuals or as collectives. And I agree with that, but I don't see how it relates to any claims made by the post you reply to. Taking another stab at clarifying your objections might be worthwhile, if only to get clearer in your own mind about what you believe and what you expect.
    I have no idea what the conclusion of this article is. I suspect the author wants to argue for moral eliminativism, and hopes to support moral eliminativism by claiming that nothing would change if someone (or is it everyone?) was convinced their moral beliefs were wrong. I'm not sure how exactly the author intends that to work out. But in any case, my comment only intended to criticise the methodology of the article, and was not aimed at discussing moral eliminativism. I simply pointed out that the question asked - what would happen if someone (or everyone?) was convinced their moral beliefs were wrong - was vague in several important aspects. And any results from intuition would be suspect, especially if the person holding those intuitions was a moral eliminativist. I was not "objecting" to anything, as the article didn't actually make any positive claims. I might as well clarify and support myself by listing all the variations on the question possible. (1) What would you personally do if you had no moral beliefs? (2) What would you personally do if you believed in (some form of) moral eliminativism - e.g. that nothing is right or wrong? (3) What would you personally do if you were convinced your moral beliefs were wrong? What would a randomly selected person from the populace of the Earth do if (1), (2) or (3) happened to them? What would happen if everyone in a society/ the world simultaneously had (1), (2) or (3) happen to them?
    It's vague in an additional way: you interpreted it to mean "what would you do if you were convinced that your moral beliefs were wrong". But I think Eliezer was asking "what would you do if your moral beliefs actually were wrong and you were aware of that." That has its own problem. It's like asking "if someone could prove that creationism was true and evolution isn't, would you agree that scientists are closed-minded in rejecting it?" A hypothetical world in which creationism was true wouldn't be exactly like our own except that it contains a piece of paper with a proof of creationism written down on it. In a world where creationism really was true, scientists would either have figured it out, or would have not figured it out but would be a lot more clueless than actual-world scientists. Likewise, a world where moral beliefs were all wrong would be very unlike our world, if indeed it's a coherent concept at all--it would not be a world that is exactly like this one with the exception that I am now in possession of a proof.
    Very true. I didn't get that from reading the article at first, but now I'm getting that vibe. I guess the more charitable reading is 'what would you do if you were convinced that your moral beliefs were wrong' or one of my variations, because you rightly point out that 'what would you do if your moral beliefs actually were wrong and you were aware of that' is an exceedingly presumptuous question.
    For my own part, I don't have a problem with that question either, though how I answer it depends a lot on whether (and to what extent) I think we're engaged in idea-exploration vs. tribal boundary-defending. If the former, my answer is "sure" and I wait to see what follows. If the latter, I challenge the question (not unlike your answer) or otherwise push back on the boundary violation.
    Thanks for clarifying.
    Consulting your intuition in a matter of descriptive questions should be done with caution. (But even then, it's not forbidden or even really discouraged, since intuition can offer valuable--if non-rigorous--insights.) Using your intuition when confronting normative or prescriptive problems, on the other hand, is perfectly fine, because there's no "should" without an intuition about what "should" be. (Unless, of course, you think that normative problems are also descriptive, in which case you believe in objective morality, which has its own problems.)

    The existence of objective moral values seems to have been a topic in the discussion below. I would like to state my view on the matter, since it connects to the original article. I define objective moral values as moral values that exist independently of the existence of life.

    I do not believe that any objective moral values exist and I usually argue as follows: I ask three questions: When did objective moral values come into existence? Have we ever observed them or how can we observe them? Do we need objective moral values to explain anything that we ... (read more)

    The benefit of morality comes from the fact that brains are slow to come up with new ideas but quick to recall stored generalizations. If you can make useful rules and accurate generalizations by taking your time and considering possible hypotheticals ahead of time, then your behavior when you don't have time to be thoughtful will be based on what you want it to be based on, instead of television and things you've seen other monkeys doing.

    Objective morality is a trick that people who come up with moralities that rely on co-operation play on people who can... (read more)

    I'm not so sure of that myself. There are cases where I want others to realize that they don't need to follow their own morality. Sometimes people's morality leads them to do things that harm me. (I'm sure you can think of examples.)

    Modernized version, as of 2017, of the first part of this post:

    More serious reply: depending when you encountered me, I'd be more boring in some ways, since a lot of what I spend my time doing is towards a moral end. All the things I've learned in life I learned from trying to live in a moral universe. I would never have gotten a degree, I did that virtually entirely for what I perceived to be reasons of altruism. Since I'm assuming here that everyone else will continue to live under the illusion that they are in suc... (read more)


    I would be depressed and do nothing at all, as empirically verified.

    Gotta have _some_ answer to "what is good".

    How did I reconcile this? What is the right morality when everyone's morality differs?

    Well, mine, of course. What else?

    I don't believe in objective morality in the first place.

    My moral system has only one axiom:

    Maximise your utility.

    If nothing were right, I'd still go on maximising my utility. I don't try to maximise my utility because I believe utility maximisation is some a priori "right" thing to do - I try to maximise my utility because I want to. Unless your proof changed my desires (in which case I don't know what I would do), I expect I would go on trying to maximise my utility.

    But here is a problem: how would you calculate your utility if you have no moral system? You would need at least some further moral axioms.
    In the absence of morality, you maximise non moral preferences. There is no proof that all preferences are moral preferences. It doesn't follow from "all morality is preferences", even if that is true.
    Well, we definitely need a good definition of morality then, and of what moral and non-moral preferences are. It looks like this converges to a discussion about terminology. Trying to understand what you have in mind, I can assume that an example of non-moral preferences might be something like basic human needs. But when you choose to take those as a base, don't they become your moral principles?
    That's not impossible... we perhaps have too many candidates, not too few. Is that a bad thing? If you don't discuss what you mean by "morality", you might end up believing that all preferences are moral preferences, just because you've never thought about what "moral" means.

    There would actually be several changes:

    I would stop being vegan.

    I would stop donating money (note: I currently donate quite a lot of money for projects of "Effective altruism").

    I would stop caring about Fairtrade.

    I would stop feeling guilty about anything I did, and stop making any moral considerations about my future behaviour.

    If others are overly friendly, I would fully exploit this to my advantage.

    I might insult or punch strangers "for fun" if I'm pretty sure I will never see them again (and they don't seem like the ... (read more)

    If you now believe that nothing is right, do the following:

    1. Remember that nothing is 100% true, so there is a chance that this is a false assumption.
    2. Take all candidates for Morality that future you might follow.
    3. Make a weighted sum of the normalized utility functions of every M. As the weight for each M, take a somehow-calculated (need to think how) probability of you choosing that specific M.
    4. Normalize.
    5. The all-zero utility function of nothing-is-rightness will not participate, since you can't normalize a constant zero.
    6. You have a utility function now. Go and work.
    ... (read more)
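    The weighted-mixture procedure above can be sketched roughly as follows. The candidate moralities, outcomes, and probabilities here are made-up placeholders, and "normalize" is taken to mean rescaling to [0, 1]; the comment leaves both choices open.

```python
# A minimal sketch of mixing candidate moralities under uncertainty.
# All names and numbers are illustrative, not from the comment.

def normalize(u):
    """Rescale a utility function (outcome -> value) to the [0, 1] range."""
    lo, hi = min(u.values()), max(u.values())
    if hi == lo:           # a constant function (e.g. all-zero) has no spread
        return None        # step 5: it drops out of the mixture
    return {o: (v - lo) / (hi - lo) for o, v in u.items()}

def mixture(candidates):
    """candidates: list of (probability of adopting M, utility dict for M)."""
    normed = [(p, normalize(u)) for p, u in candidates]
    normed = [(p, u) for p, u in normed if u is not None]
    total_p = sum(p for p, _ in normed)          # renormalize the weights
    outcomes = normed[0][1].keys()
    return {o: sum(p * u[o] for p, u in normed) / total_p for o in outcomes}

# Two hypothetical candidate moralities over the same outcomes, plus the
# all-zero "nothing is right" function, which gets dropped in step 5:
m1 = {"donate": 10.0, "loaf": 0.0}
m2 = {"donate": 1.0, "loaf": 3.0}
m0 = {"donate": 0.0, "loaf": 0.0}
combined = mixture([(0.6, m1), (0.3, m2), (0.1, m0)])
# combined["donate"] == 2/3, combined["loaf"] == 1/3
```

    Note how the constant-zero candidate contributes nothing: exactly as step 5 says, it cannot be normalized, so only the non-trivial candidates shape the resulting function.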
    This is something I've thought about recently. Even if you cannot identify your goals, you still have to make choices. The difficult part is in determining the distribution of possible M. In the end, I think the best I've been able to do is to follow convergent instrumental goals that will maximize the probability of fulfilling any goal, regardless of the actual distribution of goals. It is necessary to let go of any ego as well, since you cannot care about yourself more than another person if you don't care about anything, now can you?
    Yeah, I think for general activities we can make a list of things that have positive utility in most cases. For example:

    1. Always care about your health and life. It is the basis of everything; you can't do much if you are sick or dead.
    2. Don't do anything illegal. You can't do much if you are in prison.
    3. Keep good relationships with everybody, if that does not take much effort. Social status and connections are useful for almost anything.
    4. Money and time are universal currencies. Try to maximize your hourly income, but leave enough room for the other things on the list.
    5. Keep your mind in good shape. Mental degradation can be very fast if you don't take care, and you need your mind for rationality.
    6. Spend some time researching the M problem. Not too much, because you will lose the other items on the list, but enough to make progress; otherwise you will spend your whole life in this goal-less loop and end up regretting that you never spent enough effort to break out.

    I think this can be a very wide list.

    I think after that I would just act as I normally do, taking it easy, without trying to do anything better. But yes, it would definitely not be a reason for me to change my behavior or take some kind of active action.

    I would probably end my life in that scenario. If nothing is right, and nothing is wrong, then there's simply no reason why I should care about anything, including myself.

    In the absence of morality, you can still maximise non-moral preferences.
    Would you actually, though?