To those who say "Nothing is real," I once replied, "That's great, but how does the nothing work?"

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

Devastating news, to be sure—and no, I am not telling you this in real life.  But suppose I did tell it to you.  Suppose that, whatever you think is the basis of your moral philosophy, I convincingly tore it apart, and moreover showed you that nothing could fill its place.  Suppose I proved that all utilities equaled zero.

I know that Your-Moral-Philosophy is as true and undisprovable as 2 + 2 = 4. But still, I ask that you do your best to perform the thought experiment, and concretely envision the possibilities even if they seem painful, or pointless, or logically incapable of any good reply.

Would you still tip cabdrivers?  Would you cheat on your Significant Other?  If a child lay fainted on the train tracks, would you still drag them off?

Would you still eat the same kinds of foods—or would you only eat the cheapest food, since there's no reason you should have fun—or would you eat very expensive food, since there's no reason you should save money for tomorrow?

Would you wear black and write gloomy poetry and denounce all altruists as fools?  But there's no reason you should do that—it's just a cached thought.

Would you stay in bed because there was no reason to get up?  What about when you finally got hungry and stumbled into the kitchen—what would you do after you were done eating?

Would you go on reading Overcoming Bias, and if not, what would you read instead?  Would you still try to be rational, and if not, what would you think instead?

Close your eyes, take as long as necessary to answer:

What would you do, if nothing were right?

What Would You Do Without Morality?

Did you convince me that nothing is morally right, or that all utilities are 0?

If you convinced me that there is no moral rightness, I would be less inclined to take action to promote the things I currently consider abstract goods, but would still be moved by my desires and reactions to my immediate circumstances.

If you did persuade me that nothing has any value, I suspect that, over time, my desires would slowly convince me that things had value again.

If 'convincing' includes an effect on my basic desires (as opposed to my inferentially derived ones), then I would not be moved to act in any cognitively mediated way (though I may still exhibit behaviors with non-cognitive causes).

[anonymous]
Why the assumption that morality is analysable with utilities?
ike
https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem
[anonymous]
...it has been shown in countless experiments that people do not behave in accordance with this theorem. So what conclusions do you want to draw from this? ...you do realise there are many problems with rational choice theory right? See chapter 3 and 4 from 'Philosophy of Economics: A Contemporary Introduction' by Julian Reiss for a brief introduction to the theory's problems. If you can't get your hands on that, see lectures 4-6 from Philosophy of Economics: Theory, Methods, and Values http://jreiss.org/jreiss.org/Teaching.html for an even briefer introduction. ...what has this got to do with morality?
ike
I'm going to take a look at the lectures you linked later. For now: Your morals are your preferences; if you say that doing A is more moral than doing B, you prefer doing A to B (barring cognitive dissonance). So if preferences can be reduced to utilities, morality can be too. In fact, you'd have to argue that the axioms don't apply to morality, and justify that position.
[anonymous]
I highly doubt that morals are preferences, with or without what you (assumedly loosely) term cognitive dissonance. One can have morals that aren't preferences: if one is a Christian deontologist, one thinks everyone ought to follow a certain set of rules, but one needn't prefer that - one might be rather pleased that only oneself will get into heaven by following the rules.

One might believe things, events or people are morally "good" or "bad" without preferring or preferring not that thing, event or person. For instance, one might think that a person is bad without preferring that person didn't exist. One can believe one ought to do something without wanting to do it; this is seen very often in most people. And one can obviously have preferences which aren't morals. For instance, I can prefer to eat a chocolate now without thinking I ought to do so.

We should also be wary of equivocating on what we mean by "preferences". Revealed preference theory is very popular in economics, and it equates preferences with actions, which evidently stops us having preferences about anything we don't do, and thus means most of the usages of the word "preference" above are illegitimate. I think we normally mean some psychological state when we refer to a preference. For instance, I see the word used as "conscious desire" pretty often.
ike
I'm talking about personal morals here, i.e. "what should I do", which are the only ones that matter for my own decision making. For my own actions, the theorem shows that there must be some utility function that captures my decision-making, or I am irrational in some way. Even if preferences are distinct from morals, each will still be expressible by a utility function or fail some axiom.

That example is one where the errors are so low that it doesn't make sense to spend time thinking about it. If you value your happiness and consider it good, then you ought to eat the chocolate, but it may represent so little utility that it uses more just to figure that out.

When I say preference I mean "what state do you want the world to be in". The problem of akrasia is well known, and it means that our actions don't always express our preferences. Preferences should be over outcomes, while actions are not. An imbalance can be akrasia, or the result of a misprediction. Regardless of how you define preference, if it meets the axioms then it can be expressed as a utility function. So every form of preference corresponds to different utility functions, whether it's revealed, actual, or some other thing.
[anonymous]
Oh, so now you're just talking about personal morals. One of my examples already covered that: 'One can believe one ought to do something, without wanting to do it'.

Why the presumption that utility functions capture decision-making? You acknowledge that preferences, and hence utilities, don't always lead to decisions. And why the assumption that not meeting the axioms of rational choice theory makes you irrational? Morality might not even be appropriately described by the axioms of rational choice theory; how can you express everyone's moral beliefs as real numbers?

On the chocolate example, I can think I ought not eat the chocolate, but nevertheless prefer to eat it, and even actually eat it; so your counterargument does not work.

Given that you are not claiming all preferences meet the axioms - only "rational" preferences do (where's your support?) - you cannot say 'every form of preference corresponds to different utility functions, whether it's revealed, actual, or some other thing'. And again, we ought to ask ourselves whether preferences or rational preferences are actually the right sort of thing to be expressed by the axioms; can they really be expressed as real numbers?
ike
Which axiom do you think shouldn't apply? If you can't give me an argument why not to agree with any given axiom, then why shouldn't I use them? Obviously, if I prefer X to Y, and also prefer Y to X, then I'm being incoherent and that can't be captured by a utility function. I expressly outlaw those kind of preferences. Argue for a specific form of preference that violates the axioms.
[anonymous]
If you can't give me an argument as to why all your axioms apply, then why should I accept any of your claims? A specific form of preference that violates the axioms? Any preference which is "irrational" under those axioms, and you already acknowledged preferences of that sort existed.
ike
I see no counterexamples to any of the axioms. If they're so wrong, you should be able to come up with a set of preferences that someone could actually support. You need to argue that those are useful in some sense. Preferring A over B and B over A doesn't follow the axioms, but I see no reason to use such systems. Is that really your position, that coherence and consistency don't matter?
dxu
As an extremely basic example: I could prefer chocolate ice cream over vanilla ice cream, and prefer vanilla ice cream over pistachio ice cream. Under the Von Neumann–Morgenstern axioms, however, I cannot then prefer pistachio to chocolate, because that would violate the transitivity axiom. You are correct that there is probably someone out there who holds all three preferences simultaneously. I would call such a person "irrational". Wouldn't you?
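The transitivity requirement dxu describes can be checked mechanically. A minimal sketch (the function name and the preference sets are hypothetical, written only for illustration, with strict preferences given as ordered pairs):

```python
from itertools import permutations

def violates_transitivity(prefers):
    """Return True if the strict-preference relation contains a
    three-way cycle (a > b, b > c, but c > a)."""
    items = {x for pair in prefers for x in pair}
    for a, b, c in permutations(items, 3):
        if (a, b) in prefers and (b, c) in prefers and (c, a) in prefers:
            return True
    return False

# Chocolate > vanilla, vanilla > pistachio, pistachio > chocolate:
cyclic = {("chocolate", "vanilla"), ("vanilla", "pistachio"),
          ("pistachio", "chocolate")}
print(violates_transitivity(cyclic))   # True: no utility function fits

acyclic = {("chocolate", "vanilla"), ("vanilla", "pistachio"),
           ("chocolate", "pistachio")}
print(violates_transitivity(acyclic))  # False: e.g. u = 3, 2, 1 works
```

The cyclic case is exactly the one that cannot be represented by any real-valued utility function, since u(chocolate) > u(vanilla) > u(pistachio) > u(chocolate) is impossible.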

Ugh, sorry about the typos, I am commenting from a cell phone, and have clumsy thumbs.

First, can you clarify what you mean by "everything is permissible and nothing is forbidden"?

In my familiar world, "permissible" and "forbidden" refer to certain expected consequences. I can still choose to murder, or cheat, blaspheme, neglect to earn a living, etc; they're only forbidden in the sense of not wanting to experience the consequences.

Are you suggesting I imagine that the consequences would be different or nonexistent? Or that I would no longer have a preference about consequences? Or something else?

John

"Morality" generally refers to guidelines on one of two things:

(1). Doing good to other sentients. (2). Ensuring that the future is nice.

If you wanted to make me stop caring about (1), you could convince me that all other sentients were computer simulations who were different in kind than I was, and that their emotions were simulated according to sophisticated computer models. In that case, I would probably continue to treat sentients as peers, because things would be a lot more boring if I started thinking of them as mere NPCs.

If you wanted to ... (read more)

Well I've argued that shoulds are overrated, that wants are enough. I really can't imagine you convincing me that I don't want anything more than anything else.

an

I'd do everything that I do now. Moral realism demolished.

"Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden."

First Existential Crisis: Age 15

"Would you wear black and write gloomy poetry and denounce all altruists as fools?"

Been there, done that.

"But there's no reason you should do that - it's just a cached thought."

Realized this.

"Would you stay in bed because there was no reason to get up?"

Tried that.

"What about when you finally got hungry and stumbled into the kitchen - what would you do after you were done eating?"

Stare at the wall.

"Would you go on reading Overcoming Bias, and if not, what would you read instead?"

Shakespeare, Nietzsche

"Would you still try to be rational, and if not, what would you think instead"

No-- Came up with entire philosophy of "It doesn't matter if anything I say, do, or think is consistent with itself or each other... everything in my head has been set up by the universe- my parents' ideas of right and wrong- television- paternalistic hopes of an approving/forgiving/nonexistent god and his ability to grant immortality- so why should I worry about trying to put it together in any kind of sensible fashion? Let it all sort itself out...

"What would you do, if nothing were right?" What felt best.

Eliezer: I'm finding this one hard, because I'm not sure what it would mean for you to convince me that nothing was right. Since my current ethics system goes something like, "All morality is arbitrary, there's nothing that's right-in-the-abstract or wrong-in-the-abstract, so I might as well try to make myself as happy as possible," I'm not sure what you're convincing me of--that there's no particular reason to believe that I should make myself happy? But I already believe that. I've chosen to try to be happy, but I don't think there's a good ... (read more)

I guess logically I would have to do nothing, since there would be no logical basis to perform any action. This would of course be fatal after a few days, since staying alive requires action.

(I want to emphasize this is just a hypothetical answer to a hypothetical question - I would never really just sit down and wait to die.)

atorm

If it's not what you would really do, you're not answering the question.

I'm already convinced that nothing is right or wrong in the absolute sense most people (and religions) imply.

So what do I do? Whatever I want. Right now, I'm posting a comment to a blog. Why? Not because it's right. Right or wrong has nothing to do with it. I just want to.

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

Suppose I proved that all utilities equaled zero.

If I still feel hunger then food has a utility > 0. If I don't feel anything anymore, then I wouldn't care about anything.

So our morality is defined by our emotions. The decisions I make are a tradeoff. Do I tip the waiter? Depends on my financial situation and if I'm willing to endure the awkwardness of breaking a social convention. Yes, I've often eaten wit... (read more)

I have thought on this, and concluded that I would do nothing different. Nothing at all. I do not base my actions on what I believe to be "right" in the abstract, but upon whether I like the consequences that I forecast. The only thing that could and would change my actions is more courage.

Let's say I have a utility function and a finite map from actions to utilities. (Actions are things like moving a muscle or writing a bit to memory, so there's a finite number.)

One day, the utility of all actions becomes the same. What do I do? Well, unlike Asimov's robots, I won't self-destructively try to do everything at once. I'll just pick an action randomly.

The result is that I move in random ways and mumble gibberish. Although this is perfectly voluntary, it bears an uncanny resemblance to a seizure.

Regardless of what else is in a machine with such a utility function, it will never surpass the standard of intelligence set by jellyfish.
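The degenerate case Tiiba describes can be made concrete. A minimal sketch (function and action names are hypothetical, for illustration only): with a flat utility function, any argmax-style decision rule collapses into a pure tie-break, so every choice is effectively a coin flip.

```python
import random

def choose_action(actions, utility):
    """Pick an action with maximal utility, breaking ties at random."""
    best = max(utility(a) for a in actions)
    return random.choice([a for a in actions if utility(a) == best])

actions = ["move_arm", "write_bit", "mumble", "shut_down"]

flat = lambda a: 0.0  # all utilities equal: every action ties for "best"
picks = {choose_action(actions, flat) for _ in range(1000)}
# With overwhelming probability, every action gets picked at least once;
# the agent's behavior is indistinguishable from random flailing.
```

The same code with any non-constant utility function behaves like an ordinary maximizer, which is Tiiba's point: the pathology is entirely in the flat utilities, not in the choice rule.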

I am already fairly well convinced of this; I am hoping against hope you have something up your sleeve to change my mind.

I had this revelation sometime back. I tried living without meaning for a week, and it turned out that not a whole lot changed. Oops?

Like many others here, I don't believe that there is anything like a moral truth that exists independently of thinking beings (or even dependently on thinking beings in anything like an objective sense), so I already live in something like that hypothetical. Thus my behavior would not be altered in the slightest.

In general, I'd go back to being an amoralist.

My-Moral-Philosophy is either as true as 2+2=4 or as true as 2+2=5, I'm not sure. Or as true as 0.0001*1>0.

If it is wrong, then it's still decent as philosophy goes, and I just won't try to use math to talk about it. Though I'd probably think more about another system I looked at, because it seems like more fun.

But just because it's what a primate wants doesn't mean it's the right answer.

@Ian C and Tiiba: Doing nothing or picking randomly are also choices, you would need a reason for them to be the correct rational cho... (read more)

Unlike most of the others who've commented so far, I actually would have a very different outlook on life if you did that to me.

But I'm not sure how much it would change my behavior. A lot of the things you listed -- what to eat, what to wear, when to get up -- are already not based on right and wrong, at least for me. I do believe in right and wrong, but I don't make them the basis of everything I do.

For the more extreme things, I think a lot of it is instinct and habit. If I saw a child on the train tracks, I'd probably pull them off no matter what you... (read more)

I don't know to what extent my moral philosophy affects my behavior vs. being rationalization of what I would want to want anyway. Ignoring existential despair (I think I've gotten that out of my system, hopefully permanently) I would probably act a little more selfish, although the apparently rational thing for me to do given even total selfishness and no empathy (at least with a low discount rate and maybe a liberal definition of "self") is not very different from the apparently rational thing given my current morality.

I know that random behavior requires choices. The machine IS choosing - but because all choices are equal, the result of "max(actionList)" is implementation-dependent. "Shut down OS" is in that list, too, but "make no choice whatsoever" simply doesn't belong there.

Isn't this the movie Groundhog Day, but with certain knowledge that the world will reset daily forever? No happy ending.

I'd just get really, really bored. Studying something (learning the piano, as he does in the movie) would be the only open-ended thing you could do. Otherwise, you'd be living forever with the same set of people, and the same more-or-less limited set of possibilities.

Since my current moral system is pretty selfish and involves me doing altruistic things to make me happy, I wouldn't change a thing. At first glance it might appear that my actions should be more shortsighted since my long-term goals wouldn't matter, but my short-term goals and happiness wouldn't matter just as much. Is this thought exercise another thing that just all adds up to normality?

an

James Andrix 'Doing nothing or picking randomly are also choices, you would need a reason for them to be the correct rational choice. 'Doing nothing' in particular is the kind of thing we would design into an agent as a safe default, but 'set all motors to 0' is as much a choice as 'set all motors to 1'. Doing at random is no more correct than doing each potential option sequentially.'

Doing nothing or picking randomly are no less rationally justified than acting by some arbitrary moral system. There is no rationally justifiable way that any rational being "should" act. You can't rationally choose your utility function.

an

'You can't rationally choose your utility function.' - I'm actually expecting that Eliezer will write a post on this; it's a core thing when thinking about morality etc.

Well, to start with I'd keep on doing the same thing. Just like I do if I discover that I really live in a timeless MWI platonia that is fundamentally different to what the world intuitively seems like.

But over time? Then the answer is less clear to me. Sometimes I learn things that firstly affect my world view in the abstract, then the way I personally relate to things, and finally my actions.

For example, evolution and the existence of carnivores. As I child I'd see something like a hawk tearing the wings off a little baby bird. I'd think that the ha... (read more)

I'd behave exactly the same as I do now.

What is morality anyway? It is simply intuitive game theory, that is, it's a mechanism that evolved in humans to allow them to deal with an environment where conspecifics are both potential competitors and co-operators. The only ways you could persuade me that "nothing is moral" would be (1) by killing all humans except me, or (2) by surgically removing the parts of my brain that process moral reasoning.

Eliezer, I've got a whole set of plans ready to roll, just waiting on your word that the final Proof is ready. It's going to be bloody wicked... and just plain bloody, hehe.

Seriously, most moral philosophies are against cheating, stealing, murdering, etc. I think it's safe to guess that there would be more cheating, stealing, and murdering in the world if everyone became absolutely convinced that none of these moral philosophies are valid. But of course nobody wants to publicly admit that they'd personally do more cheating, stealing, and murdering. So everyone is just responding with variants of "Of course I wouldn't do anything different. No sir, not me!"

Except apparently Shane Legg, who doesn't seem to mind the world knowing that he's just waiting for any excuse to start cheating, stealing, and murdering. :)

The post says "when you finally got hungry [...] what would you do after you were done eating?", which I take to mean that I still have a desire and reason to eat. But it also asks me to imagine a proof that all utilities are zero, which confuses me because when I'm hungry, I expect a form of utility (not being hungry, which is better than being hungry) from eating. I'm probably confused on this point in some manner, though, so I'll try to answer the question the way I understand it, which is that the more abstracted/cultural/etc utilities ar... (read more)

I hope I'd hold the courage of my convictions enough to commit suicide quickly. You would have destroyed my world, so best to take myself out completely.

I believe that "nothing is right or wrong", but that doesn't affect my choices much. There is nothing inconsistent with that.

It's pretty evident to me that if you convinced me (you can't, you'd have to rewire my brain and suppress a handful of hormonal feedbacks - but suppose you did) that all utilities were 0, I'd be dead in about as long as total neglect will kill a body - a couple of days for thirst, perhaps. And in the meantime I'd be clinically comatose. No motive implies no action.

It's like asking how our world would be if "2 + 2 = 5." My answer to that would be, "but it doesn't."

So unless you can convince me that one can exist without morality, then my answer is, "but we can't exist without morality."

I suspect I am misunderstanding your question in at least a couple of different ways. Could you clarify?

I think I already believe that there's no right and wrong, and my response is to largely continue pretending that there is because it makes things easier (alternatively, I've chosen to live my life by a certain set standards, which happen to coincide with at least some versions of what others call morality --- I just don't call them "moral"). But the fact that you seem to equate proving the absence of morality with proving all utilities are zer... (read more)

Wow, there are a lot of nihilists here.

I answered on my own blog, but I guess I'm sort of with dloye at 08:54: I'd try to keep the proof a secret, just because it feels like it would be devastating to a lot of people.

It seems people are interpreting the question in two different ways, one that we don't have any desires any more, and therefore no actions, and the other in the more natural way, namely that "moral philosophy" and "moral claims" have no meaning or are all false. The first way of interpreting the question is useless, and I guess Eliezer intended the second.

Most commenters are saying that it would make no difference to them. My suspicion is that this is true, but mainly because they already believe that moral claims are meaningless or fal... (read more)

I just had another idea: maybe I would begin to design an Unfriendly AI. After all, being an evil genius would at least be fun, and besides, it would be a way to get revenge on Eliezer for proving that morality doesn't exist.

I think my behavior would be driven by needs alone. However, I have some doubts. Say I needed money and decided to steal. If the person I stole from needed the money more than I did and ended up hurting as a result, with or without a doctrine of wrong & right, wouldn't I still feel bad for causing someone else pain? Would I not therefore refrain from stealing from that person? Or are you saying that I would no longer react emotionally to the consequences of my actions? Are my feelings a result of a learned moral doctrine or something else?

poke

I'd do everything I do now. You can't escape your own psychology and I've already expressed my skepticism about the efficacy of moral deliberation. I'll go further and say that nobody would act any differently. Sure, after you shout in from the rooftops, maybe there will be an upsurge in crime and the demand for black nail polish for a month or so but when the dust settled nothing would have changed. People would still cringe at the sight of blood and still react to the pain of others just as they react to their own pain. People would still experience guil... (read more)

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
I'd do precisely the same thing I would do upon being informed that an irresistible force has just met an immovable object:

Inform the other person that they didn't know what they were talking about.

Nothing is right, you say? What a very curious position to take.

Does the fact that I'd do absolutely nothing differently mean that I'm already a nihilist?

There is no rationally justifiable way that any rational being "should" act.

How do you know?

A brief note to the (surprisingly numerous) egoists/moral nihilists who commented so far. Can't you folks see that virtually all the reasons to be skeptical about morality are also reasons to be skeptical about practical rationality? Don't you folks realize that the argument that begins questioning whether one should care about others naturally leads to the question of whether one should care about oneself? Whenever I read commenters here proudly voicing that they are concerned with nothing but their own "persistence odds", or that they would ... (read more)

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

There are different ways of understanding that. To clarify, let's transplant the thought experiment. Suppose you learned that there are no elephants. This could mean various things. Two things it might mean:

1) That there are no big mammals with trunks. If you see what you once thought was an elephant stampeding in your direction, if you stay still nothing will happen to you because it is not really there. If yo... (read more)

If I were actually convinced that there is no right or wrong (very unlikely), I would probably do everything I could to keep the secret from getting out.

Even if there is no morality, my continued existence relies on everyone else believing that there is one, so that they continue to behave altruistically towards me.

an

Pablo Stafforini A brief note to the (surprisingly numerous) egoists/moral nihilists who commented so far. Can't you folks see that virtually all the reasons to be skeptical about morality are also reasons to be skeptical about practical rationality? Don't you folks realize that the argument that begins questioning whether one should care about others naturally leads to the question of whether one should care about oneself? Whenever I read commenters here proudly voicing that they are concerned with nothing but their own "persistence odds", or th... (read more)

Dynamically Linked: I suspect you have completely misrepresented the intentions of at least most of those who said they wouldn't do anything differently. Are you just trying to make a cynical joke?

I would play a bunch of video games -- not necessarily Second Life, but just anything to keep my mind occupied during the day. I would try to join some sort of recreational sports league, and I would find a job that paid me just enough money to solicit a regular supply of prostitutes.

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
I'm a physical system optimizing my environment in certain ways. I prefer some hypothetical futures to others; that's a result of my physical structure. I don't really know the algorithm I use for assigning utility, but that's because my design is pretty messed up. Nevertheless, there is an algorithm, and it's what I talk about when I use the words "right" and "wrong".
Moral rightness is fu... (read more)

Dynamically Linked said:

Seriously, most moral philosophies are against cheating, stealing, murdering, etc. I think it's safe to guess that there would be more cheating, stealing, and murdering in the world if everyone became absolutely convinced that none of these moral philosophies are valid.

That's not a safe guess at all. And in fact, is likely wrong.

You observe that (most?) moral philosophies suggest your list of sins are "wrong". But then you guess that people tend not to do these things because the moral philosophies say they are wron... (read more)

I find this question kind of funny. I already feel that "that everything is permissible and nothing is forbidden", and it isn't DEVASTATING in the least; it's liberating. I already commented in this under "Heading Towards Morality". Morals are just opinions, and justification is irrelevant. I don't need to justify that I enjoy pie or dislike country music any more than I need to justify disliking murder and enjoying sex. I think it can be jarring, certainly, to make the transition to such extreme relativism, but I would not call it devastating, necessarily.

an

The point is: even in a moralless meaningless nihilistic universe, it all adds up to normality.

Another perspective on the meaning of morality:

On one hand there is morality as "those things which I want." I would join a lot of people here in saying that I think that what I want is arbitrary in that it was caused by some combination of my nature and nurture, rather than being in any fundamental way a product of my rationality. At the same time I can't deny that my morality is real, or that it governs my behavior. This is why I would call myself a moral skeptic, along the lines of Hume, rather than a nihilist. I also couldn't become an ego... (read more)

Some people on this blog have said that they would do something different. Some people on this blog have said that they actually came to that conclusion, and actually did something different. Despite these facts, we have commenters projecting themselves onto other people, saying that NO ONE would do anything different under this scenario.

Of course, people who don't think that anything is right or wrong also don't think it's wrong to accuse other people of lying, without any evidence.

Once again, I most certainly would act differently if I thought that nothi... (read more)

Unknown: I don't think that it is morally wrong to accuse people of lying. I think it detracts from the conversation. I want the quality of the conversation to be higher, in my own estimation, therefore I object to commenters accusing others of lying. Not having a moral code does not imply that one need be perfectly fine with the world devolving into a wacky funhouse. Anything that I restrain myself from doing, would be for an aversion to its consequences, including both consequences to me and to others. I agree with you about the fallacy of projecting, and it runs both ways.

Pablo- I have not yet resolved whether I should care about creating the 'positive' singularity for or more or less this reason. Why should I, the person I am now, care about the persistence of some completely different, incomprehensible, and unsympathetic form of 'myself' that will immediately take over a few nanoseconds after it has begun... I kind of like who I am now. We die each moment and each we are reborn- why should literal death be so abhorrent? Esp. if you think you can look at the universe from outside time as if it were just another dimension of space and see all fixed in some odd sense...

Roland wrote:

I cannot imagine myself without morality because that wouldn't be me, but another brain.

Does your laptop care if the battery is running out? Yes, it will start beeping, because it is hardwired to do so. If you removed this hardwired beeping you have removed the laptop's morality.

Morality is not a ghost in the machine, but it is defined by the machine itself.

Well put.

I'd stop being a vegetarian. Wait; I'm not a vegetarian. (Are there no vegetarians on OvBias?) But I'd stop feeling guilty about it.

I'd stop doing volunteer work and dona... (read more)

The way I frame this question is "what if I executed my personal volition extrapolating FAI, it ran, created a pretty light show, and then did nothing, and I checked over the code many times with many people who also knew the theory and we all agreed that it should have worked, then tried again with completely different code many (maybe 100 or 1000 or millions) times, sometimes extrapolating somewhat different volitions with somewhat different dynamics and each time it produced the same pretty light show and then did nothing. Let's say I have spent a ... (read more)

Wow- far too much self-realization going on here... Just to provide a data point, when I was in high school, I convinced an awkward, naive, young catholic boy who had a crush on me of just this point... He attempted suicide that day.

....

For follow up, he has been in a very happy exclusive homosexual relationship for the past three years.

Maybe I didn't do such a bad thing...

Eliezer, if I lose all my goals, I do nothing. If I lose just the moral goals, I begin using previously immoral means to reach my other goals. (It has happened several times in my life.) But your explaining won't be enough to take away my moral goals. Morality is desire conditioned by examples in childhood, not hard logic following from first principles. De-conditioning requires high stress, some really bad experience, and the older you get, the more punishment you need to change your ways.

Sebastian Hagen, people cha