Separate morality from free will

by PhilGoetz · 7 min read · 10th Apr 2011 · 85 comments


Ethics & Morality · Free Will
Personal Blog

[I made significant edits when moving this to the main page - so if you read it in Discussion, it's different now.  It's clearer about the distinction between two different meanings of "free", and why linking one meaning of "free" with morality implies a focus on an otherworldly soul.]

It was funny to me that many people thought Crime and Punishment was advocating outcome-based justice.  If you read the post carefully, nothing in it advocates outcome-based justice.  I only wanted to show how people think, so I could write this post.

Talking about morality causes much confusion, because most philosophers - and most people - do not have a distinct concept of morality.  At best, they have a single word that conflates two different concepts.  At worst, their "morality" doesn't contain any new primitive concepts at all; it's just a macro: a shorthand for a combination of other ideas.

I think - and have, for as long as I can remember - that morality is about doing the right thing.  But this is not what most people think morality is about!

Free will and morality

Kant argued that the existence of morality implies the existence of free will.  Roughly:  If you don't have free will, you can't be moral, because you can't be responsible for your actions.1

The Stanford Encyclopedia of Philosophy says: "Most philosophers suppose that the concept of free will is very closely connected to the concept of moral responsibility. Acting with free will, on such views, is just to satisfy the metaphysical requirement on being responsible for one's action."  ("Free will" in this context refers to a mysterious philosophical phenomenological concept related to consciousness - not to whether someone pointed a gun at the agent's head.)

I was thrown for a loop when I first came across people saying that morality has something to do with free will.  If morality is about doing the right thing, then free will has nothing to do with it.  Yet we find Kant, and others, going on about how choices can be moral only if they are free.

The pervasive attitudes I described in Crime and Punishment threw me for the exact same loop.  Committing a crime is, generally, regarded as immoral.  (I am not claiming that it is immoral.  I'm talking descriptively about general beliefs.)  Yet people see the practical question of whether the criminal is likely to commit the same crime again as being in conflict with the "moral" question of whether the criminal had free will.  If you have no free will, they say, you can do the wrong thing, and be moral; or you can do the right thing, and not be moral.

The only way this can make sense, is if morality does not mean doing the right thing.  I need the term "morality" to mean a set of values, so that I can talk to people about values without confusing both of us.  But Kant and company say that, without free will, implementing a set of values is not moral behavior.  For them, the question of what is moral is not merely the question of what values to choose (although that may be part of it).  So what is this morality thing?

Don't judge my body - judge my soul

My theory #1:  Most people think that being moral means acting in a way that will earn you credit with God.

When theory #1 holds, "being moral" is shorthand for "acting in your own long-term self-interest".  Which is pretty much the opposite of what we usually pretend being moral means.

(Finding a person who believes free will is needed for morality, and also that one should be moral even if neither God nor the community could observe, does not disprove that theory #1 is a valid characterization of the logic behind linking morals and free will.  The world is full of illogical people.  My impression, however, is that the people who insist that free will is needed for morality are the same people who insist that religion is needed for morality.  This makes sense, if religion is needed to provide an observer who assigns credit.)

My less-catchy but more-general theory #2, which includes #1 as a special case:  Most people conceive of morality in a way that assumes soul-body duality.  This also includes people who don't believe in a God who rewards and punishes in the afterlife, but still believe in a soul that can be virtuous or unvirtuous independently of the virtue of the body it is encased in.

When you see (philosophical) free will being made a precondition for moral behavior, it means that the speaker is not concerned with doing the right thing.  They are concerned with winning transcendent virtue points for their soul.

Moral behavior is intentional, but need not be free

I think both sides agree that morality has to do with intentions.  You can't be moral unintentionally.  That's because morality is (again, AFAIK we all agree) a property of a cognitive agent, not a property of the agent and its environment.  Something that an agent doesn't know about its environment has no impact on whether we judge that agent's actions to be moral.  Knowing the agent's intentions helps us know if this is an agent that we can expect to do the right thing in the future.  But computers, machines, even thermostats, can have intentions ascribed to them.  To decide how we should be disposed towards these agents, we don't need to worry about the phenomenological status of these intentions, or whether there are quantum doohickeys in their innards giving them free will.  Just about what they're likely to do.
If people were concerned with doing the right thing, and getting credit for it in this world, they would only need to ask about an agent's intentions.  They would care whether Jim's actions were free in the "no one pointed a gun at him and made him do it" sense, because if Joe made Jim do it, then Joe should be given the credit or blame.  But they wouldn't need to ask whether Jim's intentions were free in the "free will vs. determinism" or "free will vs. brain deficiency" sense.  Having an inoperable brain condition would not affect how we used a person's actions to predict whether they were likely to do similar things in the future - they're still going to have the brain condition.  We only change our credit assignment due to a brain condition if we are trying to assign credit to the non-physical part of a person (their soul).
(At this point I should also mention theory #3:  Most people fail to distinguish between "done with philosophical free will" and "intentional".  They thus worry about philosophical free will when they mean to worry about intention.)2
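The predictive view in the paragraphs above can be made concrete with a toy model.  This is my own illustration, not anything from the post or its comments, and every name and scenario in it is hypothetical: we score an agent purely by how often its decision process is expected to produce the right action, and the question of philosophical free will never arises.

```python
# Toy sketch of the predictive view of moral judgment.
# All agents, scenarios, and names here are hypothetical illustrations.

def evaluate_agent(decision_process, scenarios, is_right):
    """Score an agent by the fraction of likely future scenarios in
    which its decision process produces the right action."""
    right = sum(1 for s in scenarios if is_right(decision_process(s)))
    return right / len(scenarios)

# Two agents with fixed dispositions; neither is assumed to have
# (or lack) philosophical free will -- the question never comes up.
honest = lambda question: "tell the truth"
compulsive_liar = lambda question: "lie"  # e.g. an inoperable brain condition

scenarios = ["asked about taxes", "asked for directions", "asked under oath"]
is_right = lambda action: action == "tell the truth"

print(evaluate_agent(honest, scenarios, is_right))           # 1.0
print(evaluate_agent(compulsive_liar, scenarios, is_right))  # 0.0
```

The brain condition changes what we predict the agent will do, but the evaluation never needs a detour through metaphysics - which is the post's point in compressed form.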

Why we should separate the concepts of "morality" and "free will"

The majority opinion of what a word means is, by definition, the descriptively correct usage of the word.  I'm not arguing that the majority usage is descriptively wrong.  I'm arguing that it's prescriptively wrong, for these reasons:
  • It isn't parsimonious.  It confuses the question of figuring out what values are good, and what behaviors are good, with the philosophical problem of free will.  Each of these problems is difficult enough on its own!
  • It is inconsistent with our other definitions.  People map questions about what is right and wrong onto questions about morality.  They will get garbage out of their thinking if that concept, internally, is about something different.  They end up believing there are no objective morals - not necessarily because they've thought it through logically, but because their conflicting definitions make them incapable of coherent thought on the subject.
  • It implies that morality is impossible without free will.  Since a lot of people on LW don't believe in free will, they would conclude that they don't believe in morality if they subscribed to Kant's view.
  • When questions of blame and credit take center stage, people lose the capacity to think about values.  This is demonstrated by some Christians who talk a lot about morality, but assume, without even noticing they're doing it, that "moral" is a macro for "God said do this".  They fail to notice that they have encoded two concepts into one word, and never get past the first concept.
For morality to be about oughtness, so that we are able to reason about values, we need to divorce it completely from free will.  Free will is still an interesting and possibly important problem.  But we shouldn't mix it in together with the already-difficult-enough problem of what actions and values are moral.

1. I am making the most-favorable re-interpretation.  Kant's argument is worse, as it takes a nonsensical detour from morality, through rationality, back to free will.

2. This is the preferred theory under, um, Goetz's Cognitive Razor:  Prefer the explanation for someone's behavior that supposes the least internal complexity of them.


Comments

All I'm getting from this is "the term 'morality' is hopelessly confused."

nhamann: Relevant tweet: http://twitter.com/#!/vladimirnesov/status/34254933578481664
David_Gerard: Brilliant, yes. So what would be oxygen?
XiXiDu: It's like people tried to arrive at universally valid sexual preferences by rationalizing away seemingly inconsistent decisions based upon unimportant body parts like breasts and penises when most people were in agreement that those parts had nothing to do with being human. And we all ought to be attracted to humans only, shouldn't we?

My current hypothesis is that most of the purpose of evolving morality is signaling that you are predictably non-defecting enough to deal with. This is not very well worked out - but it does predict that if you take it to edge cases, or build syllogisms from stated moral beliefs, or other such overextension, it'll just get weird (because the core is to project that you are a non-defecting player - that's the only bit that gets tested against the world), and I think observation shows plenty of this (e.g. 1, 2).

TheOtherDave: I find your ideas intriguing and wish to subscribe to your newsletter. That said, I'm not sure the evolution of morality can productively be separated from the evolution of disgust, and disgust does seem to have a non-signaling purpose.
Perplexed: It certainly does. It helps to inform you who can be trusted as a coalition partner. Furthermore, if your feeling of disgust results in your being less nice toward the disgusting party, then your righteousness tends to deter disgusting behavior - at least when you are there to express disapproval. That is a signaling function, to be sure, but it is signalling directed at the target of your disgust, not at third parties.
TheOtherDave: Also, if I feel disgust in situations that historically correlate with becoming ill -- for example, eating rotten food -- I'm less likely to become ill. We can be disgusted by things besides other primates, after all.
timtyler: Morality is also involved in punishment, signalling virtue, and manipulating the behaviour of others - so they stop doing the bad deeds that you don't like.
David_Gerard: Certainly. I think my central thesis is that morality is a set of cached answers to a really complicated game theory problem given initial conditions (e.g. you are in a small tribe; you are in a big city and poor; you are a comfortable Western suburbanite), some cached in your mind, some cached in your genes, so it's unsurprising that using intelligence to extrapolate from the cached answers without keeping a close eye on the game theoretical considerations of whatever the actual problem you're trying to solve is will lead to trouble.

Talking about morality causes much confusion, because most philosophers - and most people - do not have a distinct concept of morality. ...

I think - and have, for as long as I can remember - that morality is about doing the right thing. But this is not what most people think morality is about!

And more in this vein. I really dislike this post. The author proclaims that he is shocked, shocked that other people are wrong, even though he himself is right. Then he proceeds to analyze why almost everyone else got it wrong, without once trying to justify his own position using any argument other than professed astonishment that any thinking person could disagree.

Downvoted.

Swimmer963: I think you took this post in unnecessarily bad faith, Perplexed...unless this is an area where you've already had frustrating head-banging-on-wall discussions, in which case I understand. I did not detect any particular 'shocked-ness' in the author's explanation of how he understands morality. Okay, reading back I can see your point, but I still don't find it offensive in any way. As far as I can tell, all that he's claiming is that people claim morality is about one thing (doing the right thing) but they discuss it and act on it as if it's something different (the freedom to choose, or soul-karma-points). If he's right, it wouldn't be the first time that a word had multiple meanings to different people, but it would explain why morality is such a touchy subject in discussion. I read this post and thought "wow, I never noticed that before, that's interesting...that could explain a lot." My one complaint is that 'doing the right thing' is presented as atomic, as obvious, which I'm pretty sure it isn't. What paradigm do you personally use to determine 'right', Phil?
PhilGoetz: I'll try to reword the post to be clearer about what I'm claiming. It isn't a matter of who is "right" about what morality means. If anything, the majority is always "right" about what words mean. But that majority position has two big problems:
  • It makes the word useless and confusing. "Morality" then doesn't represent a real concept; it's just a way of hiding self-interest.
  • It rules out actually believing in values. The word "morality" is positioned so as to suck up any thoughts about what is the right thing to do, and convince the unsuspecting thinker that these thoughts are nonsense.
Perplexed: I really dislike this comment. It emotes claims, without offering any justification of those claims. Furthermore, I disagree with those claims. I shall now try to justify my disagreement. A definition of (explanation of) 'morality' (morality) as a convention characterizing how people should feel about actions (one's own, or other people's) is neither useless nor confusing. Defining correct moral behavior by reference to a societal consensus is no more useless or confusing than defining correct use of language by a societal consensus. Furthermore, this kind of definition has one characteristic which makes it more useful as a prescription of how to behave than is any 'stand-alone' prescription which does not invoke society. It is more useful because it contains within itself the answer to both central questions of morality or ethics.
  1. Q. What is the right thing to do? A. What society says.
  2. Q. Why ought I to do the right thing? A. Because if you don't, society will make your life miserable.
I don't see why you claim that. Unless, that is, you have a non-standard definition of 'values'. Do you perhaps intend to be using a definition of morals and values which excludes any actions taken for pragmatic reasons? Gee, I hope you don't intend to defend that position by stating that most people share your disdain of the merely practical. If I seem overly confrontational here, I apologize. But, Phil, you really are not even trying to imagine that there might be other rational positions on these questions.
PhilGoetz: I don't think you're reading very carefully. That is not what I was calling useless. Do you understand why I kept talking about free will?
Perplexed: Maybe you are right that I'm not reading carefully enough. You called the word 'morality' useless if it were taken to have a particular meaning. I responded that the meaning in question is not useless. Yes, I see the distinction, but I don't see how that distinction matters. No I don't. Free will means entirely too many different things to too many different people. I usually fail to understand what other people mean by it. So I find it best to simply "taboo" the phrase. Or, if written in a sentence of text, I simply ignore the sentence as probably meaningless.
PhilGoetz: I'm objecting to the view that morality requires free will. I'm not as interested in taking a stand on how people learn morality, or whether there is such a thing as objective morality, or whether it's just a social consensus, except that I would like to use terms so that it's still possible to think about these issues. Kant's view at best confounds the problem of choosing values, and the problem of free will. At worst, it makes the problem of values impossible to think about, whether or not you believe in free will. (Perversely, focusing on whether or not your actions are pleasing to God obliterates your ability to make moral judgements.)
Perplexed: I think you are missing the point regarding Kant's mention of free will here. You need to consider Kant's explanation of why it is acceptable to enslave or kill animals, but unacceptable to enslave or kill human beings. Hint: it has nothing to do with 'consciousness'. His reason for excluding the possibility that entities without free will are moral agents was not simply to avoid having to participate in discussions regarding whether a bowling ball has behaved morally. Limiting morality to entities with free will has consequences in Kant's philosophy.

There was a case in my local area where a teenager beat another teenager to death with a bat. On another blog, some commenters were saying that since his brain wasn't fully developed yet (based on full brain development being attained close to 30), he shouldn't be held to adult standards (namely sentencing standards). This was troubling to me, because while I don't advocate the cruelty of our current prison system, I do worry about the message that lax sentencing sends. The comments seem to naturally allow for adult freedom (the kids were all unsupervised... (read more)

[anonymous]:

When you see free will being made a precondition for moral behavior, it means that the speaker is not concerned with doing the right thing. They are concerned with winning virtue points.

I think that "free will" can be understood as either itself an everyday concept, or else a philosopher's way of talking about and possibly distorting an everyday concept. The term has two components which we can talk about separately.

A "willed" act is a deliberate act, done consciously, intentionally. It is consciously chosen from among other possib... (read more)

PhilGoetz: That's what I referred to as "intentional". A computer program with goals can have internal representations that comprise its intentions about its goals, even if it isn't conscious and has no free will. When I wrote, "Knowing the agent's intentions helps us know if this is an agent that we can expect to do the right thing in the future", that was saying the same thing as when you wrote, "If someone has done you some harm but it turns out they only did it because they were being coerced, then you are more likely to forgive them and not to hold it against them, than if they did it freely, e.g. out of personal malice toward you." That's not the usage of "free will" that philosophers such as Kant are talking about, when they talk about free will. When philosophers debate whether people have free will, they're not wondering whether or not people can be coerced into doing things if you point a gun at them. So, what you're saying is true, but is already incorporated into the post, and is a supplemental issue, not the main point. I thought I already made the main points you made in this comment in the OP, so it concerns me that 9 people upvoted this - I wonder what they think I was talking about? I rewrote the opening section to be clearer that I'm talking about philosophical free will. I see now how it would be misleading if you weren't assuming that context from the name "Kant".
[anonymous]: Checking the Wikipedia article on free will [http://en.wikipedia.org/wiki/Free_will]: That seems to be pretty close to what I wrote. So apparently the compatibilists have an idea of what free will is similar to the one I described. It's interesting that at least twice, now, you said what "free will" isn't, but you haven't said what it is. I think that nowhere do you successfully explain what free will supposedly is. The closest you come is here: That's not an explanation. It says that free will is not something, and it says that what it is, is a "mysterious philosophical phenomenological concept related to consciousness" - which tells the reader pretty much nothing. And now in your comment here, you say but you leave it at that. Again you're saying what free will supposedly is not. You don't go on to explain what the philosophers are talking about. I think that "free will" is an idea with origins in daily life which different philosophers have attempted to clarify in different ways. Some of them did, in my opinion, a good job - the compatibilists - and others did, in my opinion, a bad job - the incompatibilists. Your exposure seems to have been only to the incompatibilists. So, having learned the incompatibilist notion of free will, you apparently find yourself ill-prepared to explain the concept to anyone else, limiting yourself to saying what it is not and to saying that it is "mysterious". I take this as a clue about the incompatibilist concept of free will.

Whether an agent is moral and whether an action is moral are fundamentally different questions, operating on different types. There are three domains in which we can ask moral questions: outcomes, actions, and agents. Whether actions are moral is about doing the right thing, as we originally thought. Whether a person or agent is moral, on the other hand, is a prediction of whether that agent will make moral decisions in the future.

An immoral decision is evidence that the agent who made it is immoral. However, there are some things that might screen off thi... (read more)

PhilGoetz: They're not as different as the majority view makes them out to be. A moral agent is one that uses decision processes that systematically produce moral actions. Period. Whereas the majority view is that a moral agent is not one whose decision processes are structured to produce moral actions, but one who has a virtuous free will. A rational extension of this view would be to say that someone who has a decision process that consistently produces immoral actions can still be moral if their free will is very strong and very virtuous, and manages to counterbalance their decision process. The example above about a mind control ray has to do with changing the locus of intentionality controlling a person. It doesn't have to do with the philosophical problem of free will. Does Dr. Evil have free will? It doesn't matter, for the purposes of determining whether his cognitive processes consistently produce immoral actions.
jimrandomh: It's more complicated than that, because agent-morality is a scale, not a boolean, and how morally a person acts depends on the circumstances they're placed in. So a judgment of how moral someone is must have some predictive aspect. Suppose you have agents X and Y, and scenarios A and B. X will do good in scenario A but will do evil in scenario B, while Y will do the opposite. Now if I tell you that scenario A will happen, then you should conclude that X is a better person than Y; but if I instead tell you that scenario B will happen, then you should conclude that Y is a better person than X. I don't think "locus of intentionality" is the right way to think about this (except perhaps as a simplified model that reduces to conditioning on circumstances). In a society where mind control rays were common, but some people were immune, we would say that people who are immune are more moral than people who aren't. In the society we actually have, we say that those who refuse in the Milgram experiment are more moral, and that people who refuse to do evil under the threat of force are more moral, and I don't think a "locus of intentionality" model handles these cases cleanly.

Ultimately, your claim appears to be, "The punitive part of morality is inappropriate. It is based on free will. Therefore, free will is irrelevant to morality." I admit you don't phrase it that way, but with your only concern being lack of literal coercion and likelihood of reoffense, your sense of morality seems to be inconsistent with people's actual beliefs.

You will find very few people who will say that a soldier acting in response to PTSD deserves the exact same sentence as a sociopath acting out of a sadistic desire to kill, even if each i... (read more)

Most people think that being moral means acting in a way that will earn you credit with God.

That, but I think there's some reciprocal after-effects that also come into play. What I mean is that when you view what being moral implies with respect to one's religion, you get what you suggested -- being moral entails an increase in heaven (or whatever) being likely.

A very interesting effect I've noticed going the other way, is that religion lets you discuss morality in far, far, far more "lofty" terms that what a non-theistic individual might come... (read more)

Having an inoperable brain condition would not affect how we used a person's actions to predict whether they were likely to do similar things in the future

I've always viewed there as being a third theory of morality: People who do bad things, are more likely to do other bad things. If my friend lies to me, they're more likely to lie to me in the future. But they're also more likely to steal from me, assault me, etc..

A brain defect (such as compulsive lying) therefore needs to be accounted for - the person is likely to commit domain-specific actions like ... (read more)

I'd just like to point out a little flaw in your construction of other people's morality, and offer what I think is a better model for understanding this issue.

First, I wouldn't say that people have a morality that agrees with God. They have a God that agrees with their morality. Reading bible passages to people is unlikely to wobble their moral compass; they'll just say those no longer apply or you're taking them too literally or some such. God isn't so much of a source of morality as a post hoc rationalization of a deeper impulse.

Second, this whole syste... (read more)

rabidchicken: I would be interested in seeing a more fleshed out version if at all possible.

Without Kant's "nonsensical" detour through rationality, you don't understand his position at all. There is no particular agreement on what "free will" means, and Kant chose to stick fairly closely to one particular line of thought on the subject. He maintained that you're only really free when you act rationally, which means that you're only really free when you do the right thing. Kant also held that a being with the capacity for rationality should be treated as if free even if you had little reason to think they were being rationa... (read more)

PhilGoetz: I agree that I don't understand Kant. It's impossible to understand something that doesn't make sense. The best you can do is try to construct the most-similar argument that does make sense. The word "certainly" appears to be an attempt to compensate for a lack of a counter-argument. When I've said "A, B, A&B=>C, therefore C", responding to my argument requires you to address A, B, or A&B=>C, and not just assert "not(C)". Kant's focus on assigning credit or blame as being an essential part of morality implies that the end goal of moral behavior is not to get good outcomes, but the credit or blame assigned, as I explained at length in the post. This "morality" may be concerned with doing the right thing - as a precondition - but it isn't about doing the right thing. Kant used his peculiar meaning of free will, but at the end turned around and applied it as if he had been using the definition I use in this post. If Kant truly meant that "free" means "rational", then making a long argument starting from the precept that man is rational so that he could claim at the end, "Now I have proven man is rational!" would not make any sense. And if Kant was inconsistent or incoherent, I can't be blamed for picking one possible interpretation.
Psychohistorian: Win.

As I see it, there are:

  1. Actions which do not lead to the best outcome for everyone
  2. Actions which need to be punished in order to lead to the best outcome for everyone
  3. (other suggestions welcome)

I have used one taboo word here: "best". But we'll assume everyone at least broadly agrees on its definition (least death and pain, most fun etc).

People can then start applying other taboo-able words, which may arbitrarily apply to one of the above concepts, or to a confused mixture of them.

  • Morality
  • The right thing
  • Intending to do the right thing
  • Shou
... (read more)

Right on. Free will is nonsense but morality is important. I see moral questions as questions that do not have a clear-cut answer that can be found by consulting some rules (religious or not). We have to figure out what is the right thing to do. And we will be judged by how well we do it.

Tiiba: "Free will is nonsense" It's not nonsense. http://wiki.lesswrong.com/wiki/Free_will http://wiki.lesswrong.com/wiki/Free_will_(solution)
JanetK: I have been pointed at those pieces before. I read them originally and I have re-read them not long ago. Nothing in them changes my conviction (1) that it is dangerous to communication to use the term 'free will' in any sense other than freedom from causality, (2) I do not accept a non-material brain/mind nor a non-causal thought process. Also I believe that (3) using the phrase 'determinism' in any sense other than the ability to predict is dangerous to communication, and (4) we cannot predict in any effective way the processes of our own brain/minds. Therefore free will vs determinism is not a productive argument. Both concepts are flawed. In the end, we make decisions and we are (usually) responsible for them in a moral-ethical-legal sense. And those decisions are neither the result of free will nor of determinism. You can believe in magical free will or redefine the phrase to avoid the magic - but I decline to do either.
Tiiba: "that it is dangerous to communication to use the term 'free will' in any sense other than freedom from causality" Why is that? There are many things that can keep your will from being done. Eliminating them makes your will more free. Furthermore, freedom from causality is pretty much THE most dangerous definition for free will, because it makes absolutely, positively no sense. Freedom from causality is RANDOMNESS. "Therefore free will vs determinism is not a productive argument." We don't have this argument here. We believe that free will requires determinism. You aren't free if you have no idea what the hell is about to happen.
wedrifid: FYI: You can make quotes look extra cool by placing a '>' at the start of the line. More information on comment formatting can be found in the help link below the comment box.
Tiiba: Tango Yankee. [http://en.wikipedia.org/wiki/NATO_phonetic_alphabet]
Peterdjones: Does that mean we should stop exonerating people who did bad things under duress? (IOW, your stipulation about FW would change the way the word is used in law). Does that mean we should stop saying that classical chaos is deterministic? (IOW, your stipulation about "deterministic" would change the way the word is used by physicists).

I believe the "free will" thing is because without it, you could talk about whether or not a rock is moral. You could just say whether or not the universe is moral.

I consider morality to be an aspect of the universe (a universe with happier people is better, even if nobody's responsible), so I don't see any importance of free will.

1rabidchicken11yI don't understand - you cannot talk about whether a rock is moral? Given that a rock appears to have no way to receive input from the universe, create a plan to satisfy its goals, and act, I would consider a rock morally neutral - in the same way that I consider someone to be morally neutral when they fail to prevent a car from being stolen while they are in a coma in another country.
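The agency criterion in the comment above (receive input, plan toward a goal, act) can be sketched as a toy test. This is purely illustrative - the class names, the `moral_candidate` check, and the thermostat contrast are my own assumptions, not anything the commenter specified:

```python
# Toy sense-plan-act test for moral standing, following the comment above:
# an entity is a candidate for moral evaluation only if it can receive
# input, form a plan toward a goal, and act on it.

class Thermostat:
    def sense(self, world):            # receives input from the universe
        return world["temp"]
    def plan(self, temp, goal=65):     # forms a (trivial) plan
        return "heat" if temp < goal else "idle"
    def act(self, world):              # acts on the plan
        world["heater"] = self.plan(self.sense(world))

class Rock:
    pass  # no sensing, planning, or acting: morally neutral on this test

def moral_candidate(entity):
    """An entity qualifies only if it implements the full loop."""
    return all(hasattr(entity, m) for m in ("sense", "plan", "act"))

print(moral_candidate(Thermostat()))  # True  (has the full loop)
print(moral_candidate(Rock()))        # False (a rock just sits there)
```

Even a thermostat passes this (very weak) test, which is exactly the tension a later commenter raises about whether a thermostat set to the right temperature counts as "moral".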
0Perplexed11yI believe you are missing Kant's point regarding free will. People have free will. Rocks don't. And that is why it makes moral sense for you to want a universe with happy people, and not a universe with happy rocks! People deserve happiness because they are morally responsible for causing happiness. Rocks take no responsibility, hence those of us who do take responsibility are under no obligation to worry about the happiness of rocks. Utilitarians of the LessWrong variety tend to think that possession of consciousness is important in determining whether some entity deserves our moral respect. Kant tended to think that possession of free will is important. As a contractarian [http://plato.stanford.edu/entries/contractarianism/] regarding morals, I lean toward Kant's position, though I would probably express the idea in different language.
3jimrandomh11yGenerally speaking, I'm uneasy about any reduction from a less-confused concept to a more-confused concept. Free will is a more confused concept than moral significance. Also, I can imagine things changing my perspective on free will that would not also change my perspective on moral significance. For example, if we interpret free will as unsolvability by rivals [http://lesswrong.com/lw/4zb/free_will_as_unsolvability_by_rivals/], then the birth of a superintelligence would cause everyone to lose their free will, but have no effect on anyone's moral significance.

A cognitive agent with intentions sounds like it's at least in the same conceptual neighborhood as free will. Perhaps free will has roughly the same role in their models of moral action as intentions do in your model.

If a tornado kills someone we don't say that it acted immorally but if a man does we do (typically). What's the difference between the man and the tornado? While the tornado was just a force of nature, it seems like there's some sense in which the man was an active agent, some way in which the man (unlike the tornado) had control of his act... (read more)

2HumanFlesh11yIf punishing tornados changed their behaviour, then we would try to punish tornados. An event appears to be intentional (chosen) when it's controlled by contingencies of reward and punishment. There are exceptions to this characterisation of will. When there is a power imbalance between those delegating rewards and punishments and those being influenced by rewards and punishments, the decision is sometimes seen as less than free, and deemed exploitation. Parents and governments are generally given more leeway with regards to power imbalances. When particular rewards have negative social consequences, they're sometimes called addictive. When particular punishments have negative social consequences, their use is sometimes called coercive and/or unjust.
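HumanFlesh's opening point - we would punish tornadoes if punishment changed their behavior - amounts to saying punishment is reserved for entities whose future conduct is contingent on it. A minimal toy sketch of that idea (the class names, probabilities, and update rule are my own invented illustration, not the commenter's):

```python
# Toy model: punishment is aimed only at entities whose future behavior
# is contingent on reward and punishment.

class Man:
    def __init__(self):
        self.p_offend = 0.9
    def punish(self):
        self.p_offend *= 0.5   # behavior responds to punishment

class Tornado:
    def __init__(self):
        self.p_offend = 0.9
    def punish(self):
        pass                   # weather ignores our sanctions

def worth_punishing(entity, trials=3):
    """Punish a few times and check whether anything changed."""
    before = entity.p_offend
    for _ in range(trials):
        entity.punish()
    return entity.p_offend < before

print(worth_punishing(Man()))      # True
print(worth_punishing(Tornado()))  # False
```

On this picture, "intentional" just labels the class of behavior that sits on the responsive side of this test.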

That's because morality is a property of a cognitive agent, not a holistic property of the agent and its environment.

I don't understand this sentence. Morality is a property of a system that can be explained in terms of its parts. A cognitive agent is also a system of parts, parts which on their own do not exhibit morality.

If something is judged to be beautiful then the pattern that identifies beauty is in the mind of the agent and exhibited by the object that is deemed beautiful. If the agent ceases to be then the beautiful object does still exhibit ... (read more)

2PhilGoetz11yI meant that we attribute morality to an agent. Suppose agent A1 makes a decision in environment E1 that I approve of morally, based on value set V. You can't come up with another environment E2, such that if A1 were in environment E2, and made the same decision using the same mental steps and having exactly the same mental representations, I will say it was immoral for A1 in environment E2 according to value set V. You can easily come up with an environment E2 where the outcome of A1's actions are bad. If you change the environment enough, you can come up with an E2 where A1's values consistently lead to bad outcomes, and so A1 "should" change its values (for some complicated and confusing value of "should"). But, if we're judging the morality of A1's behavior according to a constant set of values, then properties of the environment which are unknown to A1 will have no impact on our (or at least my) judgement of whether A1's decision was moral. A simpler way of saying all this is: Information unknown to agent A has no impact on our judgement of whether A's actions are moral.
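PhilGoetz's invariance claim - swapping environment E1 for E2 cannot change the verdict if A1's mental steps and representations are identical - can be phrased as a judgment function that consults only the agent's values and beliefs, never hidden facts about the environment. A toy sketch under that reading (the function name `judge` and the food/poison scenario are my own hypothetical illustration):

```python
# Toy model: a moral judgment that depends only on what the agent
# knew and valued, never on environment facts hidden from it.

def judge(values, beliefs, decision):
    """Approve a decision iff, given the agent's beliefs, it serves its values."""
    expected_outcome = beliefs.get(decision)
    return expected_outcome in values["approved_outcomes"]

values = {"approved_outcomes": {"help"}}

# A1's beliefs are identical in environments E1 and E2...
beliefs_A1 = {"give_food": "help"}

# ...even though the actual outcomes differ, unknown to A1:
actual_outcome_E1 = "help"   # the food was wholesome
actual_outcome_E2 = "harm"   # the food was secretly spoiled

# The verdict is the same in both environments, because judge() never
# consults the actual outcome:
print(judge(values, beliefs_A1, "give_food"))  # True in E1 and E2 alike
```

Information unknown to the agent simply never enters the function, which is the "simpler way of saying all this" in the comment above.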
0[anonymous]11yThis is a tricky problem. Is morality, like beauty, something that exists in the mind of the beholder? Like aesthetic judgements, it exists relative to a set of values, so probably yes.

So you (or perhaps some extrapolated version of you) would say that a thermostat in a human's house set to 65 degrees F is moral, because it does the right thing, while a thermostat set to 115 is immoral because it does the wrong thing. Meanwhile one of those free will people would say that a thermostat is neither moral nor immoral, it is just a thermostat.

The main difference seems to be the importance of "moral responsibility," which, yes, is mixed up with god, but more importantly is a key part of human emotions, mostly emotions dealing with p... (read more)

1PhilGoetz11yRight - but, that's where this post starts off. I described the view you just described in your second paragraph, and acknowledged that it's the majority view, then argued against it.
0Manfred11yI don't think you said this. Your two options for people were "Most people conceive of morality in a way that assumes soul-body duality." and "They worry about philosophical free will when they mean to worry about intention." You seem to be neglecting the possibility that "morality" exists not to refer to a clear set of things in the world, but instead to refer to an important thing that the human mind does.

If you want an alternative to the word 'morality' that means what you want 'morality' to mean, I have found good results using the phrase "right-and-wrong-ness".

Do note that this often takes a turn through intuitionism, and it can be hard to drag less-clear thinkers out of that mire.

While morality seems closely related to (a) signaling to other people that you have the same values and are trustworthy and won't defect or (b) being good to earn "points", neither of these definitions feels right to me.

I hesitate to take (a) because morality feels more like a personal, internal institution that operates for the interests of the agent. Even if the outcome is for the interests of society, and that this is some explanation for why it evolved, that doesn't seem to reflect how it works.

I feel that (b) seems to miss the point: we are... (read more)

[-][anonymous]11y 0

I think I'm on the same page with you re kant. Tell me if I've understood the other ideas you're advancing in this post:

  1. The problem of understanding morality just is the problem of understanding which actions are moral.

  2. An action is moral only if (but not if and only if) it was intended to be moral.

Did I miss the point?

But computers, machines, even thermostats, can have intentions ascribed to them

Can you spell out what you mean by this? Are intentions something a thermostat has intrinsically, or something that I can ascribe to it?

1PhilGoetz11yAsking whether a thermostat has intentions intrinsically, or whether we only ascribe intentions to it, is what I meant by asking about the phenomenological status of these intentions. If I ask whether Jim really has intentions, or whether Jim is a zombie whom I am merely ascribing intentions to, I'm really asking (I think) whether Jim has free will. If morality is just about doing the right thing, then we don't need to ask that. The free will question may still be interesting and important; but I'd like to separate it from the question of what actions are moral. I want there to be only one fundamental moral question: What is the right set of values? The "morality requires free will" viewpoint introduces an entirely different question, which I think should be its own thing.

Reading Kant (okay, mostly reading about Kant), it seemed to me that he was not even interested in the question, "What is the right thing to do?" What he was interested in was really, "How can I get into heaven?"

I don't think Kant thought about getting to the afterlife. My impression of Kant is that he was essentially agnostic about both God and the afterlife (although he considered them to be a very interrelated pair of questions) but thought it was healthier for individuals and society to believe in them.

5PhilGoetz11yI'll strike that - I didn't mean that he was obsessed with a particular story about heaven, the way Martin Luther was. I meant, more abstractly, that he saw the central question as when to give people credit for their actions.
1Perplexed11yYou don't think the two are related? I think that a pretty good case can be made that:

* You should give people credit for their actions when they do the right thing.
* If your own intuitions aren't sufficiently convincing at instructing you regarding "What is the right thing to do?", you can get a 'second opinion' by observing what kinds of things people receive credit for.
5PhilGoetz11yThe first question is related to the second question in ethical systems in which you get credit for doing the right things. They should still be two separate questions. In some types of Christianity, they aren't related, because there is no "right thing to do", there is only what God tells you to do. This is described as "the right thing to do", but it's what I called a macro rather than a primitive: There is no new ontological category of "right things"; you just need to learn what things God says to do.

Yet people see the practical question of whether the criminal is likely to commit the same crime again, as being in conflict with the "moral" question of whether the criminal had free will. If you have no free will, they say, you can do the wrong thing, and be moral; or you can do the right thing, and not be moral.

The only way this can make sense, is if morality does not mean doing the right thing.

"Moral" and "legal" mean different things anyway. It makes sense that someone did the legally wrong thing, but were not culpa... (read more)

This is the preferred theory under, um, Goetz's Cognitive Razor: Prefer the explanation for someone's behavior that supposes the least internal complexity of them.

The problem with Goetz's Cognitive Razor is that humans are internally complex.

It seems like the right perspective to think about things goes something like this:

Facts about the world can be good or bad. It is good, for instance, when people are happy and healthy, and bad when they are not.

  1. It is bad that Alice fell and hit her head.
  2. It is bad that Bob, due to dizziness, stumbled and hit his head.
  3. It is bad that Carol, due to a sudden bout of violent behavior, momentarily decided to punch Dan in the head.
  4. It is bad that Erin carried out a plan over a period of weeks to punch Fred in the head.

These are all pretty much equally bad, ... (read more)

2Swimmer96311yInteresting breakdown. My interpretation is that facts about the world are interpreted as good or bad by a brain capable of feeling pain, the usual indicator that a world-state is 'bad', and pleasure, the indicator that it is 'good'. Outside of the subjective, there are facts but not values. In the subjective, there are values of good and bad. If I understand correctly what you're saying, it's that a fact having positive or negative value assigned to it by a brain (i.e. Alice falling and hitting her head) does not necessarily imply that this fact has a moral flavour attached to it by the same brain. It's not wrong that Alice fell, it's just bad...but it is wrong that Carol hit Dan. Am I reading your argument correctly?
0PhilGoetz11yWhat you're saying is true, but doesn't touch on the distinction that the post is about. The post contrasts two positions, both of which would agree with everything you just said.
0Will_Sawin11yIt's a step on the way to dissolve or pseudo-dissolve the question.
[-][anonymous]7y -4

Separating concepts is itself a moral action. Moral actions should relate to moral agents. Most of the moral agents who use these concepts aren't here on lesswrong. They include the kind of people who hear "free will is an illusion" from a subjectively credible source and mope around for the rest of their lives.

"What happens then when agents’ self-efficacy is undermined? It is not that their basic desires and drives are defeated. It is rather, I suggest, that they become skeptical that they can control those desires; and in the face of that skeptici... (read more)

to me, morality means not disastrously/majorly subverting another's utility function for a trivial increase in my own utility.

edit: wish the downvoters would give me some concrete objections.

2Dorikka11yDo you mean that "not disastrously/majorly subverting another's utility function for a trivial increase in my own utility" is ethical [http://lesswrong.com/lw/uv/ends_dont_justify_means_among_humans/], in the sense that this is a safety measure so that you don't accidentally cause net negative utility with regard to your own utility function (as a result of limited computing power)? Or do you mean that you assign negative utility to causing someone else negative utility according to their utility function?
2nazgulnarsil11ycausing negative utility is not the same as disastrously subverting their utility function.
3Jonathan_Graehl11yIt's strange that you haven't explained what you mean by 'disastrously subverting'.
4nazgulnarsil11yslipping the pill that makes you want to kill people into gandhi's drink without his knowledge is the simplest example.
3Jonathan_Graehl11yNow I just think it's odd that you have "refraining from non-consensual modification of others' wants/values" as the sole meaning of "morality".
0wedrifid11yThe "it is strange", "I think it is odd" style of debate struck me as disingenuous.
0Jonathan_Graehl11yOkay, "stupid" if you prefer :)
1wedrifid11yBetter. :)
1Jonathan_Graehl11yI was really just annoyed at the lack of clarity in that statement. I could have just said so, in fewer words (or said nothing). Your critique was justified, and your less presumptuous "struck me as" made it easier for me to think rather than argue.
4wedrifid11yI can see why you would be. That is, after I clicked back through half a dozen comments to explore the context I could see why you would be annoyed. Until I got back here [http://lesswrong.com/lw/4xu/separate_morality_from_free_will/3vfe] the only problem with nazgulnarsil's comments was the inexcusably negligent punctuation. Exploring the intuition behind my objection, purely for the sake of curiosity: your style of questioning is something that often works in debates completely independently of merit. To use your word, it presumes a state of judgment from which you can label nazgulnarsil's position 'strange' without really needing to explain directly - the metaphorical equivalent of a contemptuous sneer. Because it is a tactic that is so effective independently of merit in this context, I instinctively cry 'foul'. The thing is, a bit of contempt is actually warranted in this case. Taken together with his earlier statements, the effective position nazgulnarsil was taking was either inconsistent or utterly bizarre. But as I have learned the hard way more than once, you can't afford to claim the intellectual high ground and be dismissive unless you first make sure that the whole story is clear to the casual observer within one leap. I suspect that if your comment had included a link to the two comments which, taken together, make nazgulnarsil's position strange, it would have met my vocal approval.
0TheOtherDave11yIf we're just talking about rhetoric here, I prefer "odd" to "stupid" but would prefer "wrong" or "unjustified" (depending on which one you actually mean) to either.
1zaph11yThat strikes me as a low bar. Would you disastrously subvert someone else's utility function to majorly increase yours?
-1khafra11y"Subversion" seems unspecific. Does that mean, would I go back in time and use my amazing NLP powers or whatever to convince Hitler to try art school again instead of starting a world war and putting millions into death camps? Or is this "subversion" more active and violent?
-2nazgulnarsil11yit goes both ways. those who try to disastrously subvert others as part of their utility get less moral consideration.
-2nazgulnarsil11ydepends. no hard and fast rule. http://www.youtube.com/watch?v=KiFKm6l5-vE [http://www.youtube.com/watch?v=KiFKm6l5-vE]