Related to Not for the Sake of Selfishness Alone, Crime and Punishment, and Separate morality from free will.

Here is a simple method for resolving some arguments about free will.  Not for resolving the question, mind you.  Just the arguments.

One group of people doesn't want to give people any credit for anything they do.  All good deeds are ultimately done for "selfish" reasons, where even having a goal of helping other people counts as selfish.  The quote from Lukeprog's recent article is a perfect example:

No one deserves thanks from another about something he has done for him or goodness he has done. He is either willing to get a reward from God, therefore he wanted to serve himself. Or he wanted to get a reward from people, therefore he has done that to get profit for himself. Or to be mentioned and praised by people, therefore, it is also for himself. Or due to his mercy and tenderheartedness, so he has simply done that goodness to pacify these feelings and treat himself.

- Mohammed Ibn Al-Jahm Al-Barmaki

Another group of people doesn't want to blame people for anything they do.  Criminals sometimes had criminal parents - crime was in their environment and in their genes.  Or, to take a different variety of this attitude, cultural beliefs that seem horrible to us are always justifiable within their own cultural context.

The funny thing is that these are different groups.  Both assert that people should not be given credit, or else blame, for their actions, beyond the degree of free will that they had.  Yet you rarely find that the same person who refuses to give people credit for their good deeds is also unwilling to blame them for their bad deeds, or vice versa.

When you find yourself in an argument that appears to be about free will, but is really about credit or blame, ask the person to agree that the matter applies equally to good deeds and bad deeds - however they define those terms.  This may make them lose interest in the argument - because it no longer does what they want it to do.


That seems rather similar to the Knobe effect.

Excellent link! Short, clear, interesting, 100% relevant.

Instead, the moral character of an action’s consequences also seems to influence how non-moral aspects of the action – in this case, whether someone did something intentionally or not – are judged.

Stupid Knobe effect. Obviously the subjects' responses were an attempt to pass judgement on the CEO. In one case, he deserves no praise, but in the other he does deserve blame [or so a typical subject would presumably think]. The fact that they were forced to express their judgement of moral character through the word 'intentional', which sometimes is a 'non-moral' quality of an action, doesn't tell us anything interesting.

doesn't tell us anything interesting.

Your explanation is obviously correct; what's interesting about it is that it exists, and that's why it's 100% relevant.

I thought of this too. Also a factor: publicly giving credit to someone makes you feel obligated to them.

Also a factor: publicly giving credit to someone makes you feel obligated to them.

Does it?

I would have said that the risks are that if you praise something, you might get told it isn't good enough, and if you blame someone, you might get entangled in the consequences of punishing them.

Yes, that too, now that you mention it. Especially when it comes to praising CEOs :)

Hmm, are you interpreting the results as "boo CEOs" then?

How would you modify the experiment to return information closer to what was sought?

Hmm, are you interpreting the results as "boo CEOs" then?

I'm only interpreting the result as "boo this fictional CEO".

How would you modify the experiment to return information closer to what was sought?

Well, what Knobe is looking for is a situation where subjects make their 'is' judgements partly on the basis of their 'ought' judgements. Abstractly, we want a 'moral proposition' X and a 'factual proposition' Y such that when a subject learns X, they tend to give higher credence to Y than when they learn ¬X. Knobe takes X = "The side-effects are harmful to the environment" and Y = "The effect on the environment was intended by the CEO".
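
Put a bit more formally (this is just a restatement of the condition above, with P standing for the subject's credence in a proposition):

P(Y | X) > P(Y | ¬X)

i.e. learning the moral proposition X raises the credence the subject assigns to the factual proposition Y.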

(My objection to Knobe's interpretation of his experiment can thus be summarised: "The subjects are using Y to express a moral fact, not a 'factual fact'." After all, if you asked them to explain themselves, in one case they'd say "It wasn't intentional because (i) he didn't care about the effect on the environment, only his bottom line." In the other they'd say "it was intentional because (ii) he knew about the effect and did it anyway." But surely the subjects agree on (i) and (ii) in both cases - the only thing that's changing is the meaning of the word 'intentional', so that the subjects can pass moral judgement on the CEO.)

To answer your question: I'm not sure that genuine examples of this phenomenon exist, except when the 'factual' propositions concern the future. If Y is about a past event, then I think any subject who seems to be exhibiting the Knobe effect will quickly clarify and/or correct themselves if you point it out. (Rather like if you somehow tricked someone into saying an ungrammatical sentence and then told them the error.)

[This comment is no longer endorsed by its author]

I suspect the subjects are judging the morality of the CEO's actions by how likely they think he will take good and/or bad actions in the future.

Yes, a very relevant link.

Phil, could you add a link to the Knobe effect in your post to encourage discussion of that as well?

I think it's interesting, and relevant somehow; but different enough that it could be confusing.

Moral psychology in general is pretty fascinating. This book provides an overview of dozens of experiments like those that discovered the Knobe effect.

Agree - I think I first ran across it in your podcast, actually, and looked it up when this post triggered the memory of it. (Might have been the John Doris interview.)

The interesting question raised here, ISTM, is how these experiments and insights fit together - what they seem to tell us, in net, about who we are.

I'm still wondering what to think of the Knobe effect in the context of the OP's observations on debates about free will - is there just an analogy in surface features (people judge X differently according to their perception of X as "good" or "bad"), or is there some deeper link between the way people respond to the notion of "intentional" and to the notion of "free will"?

Have you tried doing this in an actual argument, or seen it used effectively by someone else?

Not often enough to justify my use of the word "often", so I'll change that. Sample size of 2. "Effective" as a better way of getting out of a fruitless discussion than refusing to discuss it or waiting it out; no changing of minds was observed.

Framing it as a way to resolve an argument was mostly a narrative device for making people aware of this common logical inconsistency, which I usually see in written text where I don't have the chance to argue back.

You said misanthropes and bleeding hearts. Do you only have one sample of each, or none of one?

Yet you rarely find that the same person who refuses to give people credit for their good deeds is also unwilling to blame them for their bad deeds, or vice versa.

Er, there do exist quite a few people who believe themselves to be moral relativists.

Hi, I don't know enough of philosophical jargon yet to know what being a "moral relativist" entails or how it relates to the example given here. Could you expand? thx :)

Moral relativists are people who think morality is a preference. "I prefer the absence of murder to its presence" is like "I prefer the absence of anchovies on my pizza to their presence". If cosmic rays strike your brain so that you think "Murder is good" rather than "Murder is bad", murder thereby becomes good. If I like murder and you don't, we don't disagree, we just have differing subjective preferences.

A subtler form is cultural relativism, where the defining system is not individuals, but society. So human sacrifices to the sun god are bad in San Francisco in 2011, but good in Mexico City in 1411.

thanks - that's a good, clear explanation :)

Can you point me to why that would apply to the original quote above? I've tried fitting it round the idea of blaming vs credit-giving... but I'm not sure what I'm thinking makes any sense.

Many people who think of themselves as moral relativists refuse to give any credit or assign any blame for many actions. (Few are consistent enough to avoid blaming other educated Westerners for rejecting moral relativism, but that's another story.)

Aha - now I get it. Thanks. :)

They aren't refusing to assign credit or blame - they don't believe in credit or blame.

Interesting, it had never occurred to me that there were many people holding such hypocritical attitudes. Personally I harbor both of them, but it just seems so obvious to me that one should either hold both or neither of them...

I never paid particular attention to this form of bias, so next time a discussion veers into that direction I'll ask.

Come to think of it though, does it even make sense that praise and blame should depend on the existence of "free will"? Why not give someone credit for something that in some sense s/he couldn't help doing anyway? That would at least have the effect of appreciating the desired behavior and raising the chances that it will be repeated (or emulated by others) - which is the whole point of praise anyway.

That's a serious question - do praise and blame really have anything to do with whether or not free will exists? My suspicion is that this intuitively imagined contradiction may be an expression of a very primitive model about the world that most people (myself included as of now) naturally hold, and perhaps it says something like: "If person X couldn't have influenced the good/bad outcome of a situation, then that person is neither worthy of praise nor blame".

That model makes a lot of sense if applied to manage interpersonal matters, but it seems to rub painfully against what we know about the deterministic nature of reality. The problem lies obviously with the word "influence" (or control if you will).

Most people's minds apparently don't make a distinction between the concepts of "control" and "free will". But even if I don't have free will, I still have control. I'm an active player in the flow of cause-and-effect, and even though my actions may be predetermined that doesn't mean I lack control.

I have a sense that really wrapping your mind around this issue could significantly improve your model of reality (especially when it comes to people).

Personally I have been much less inclined to judge people since I started to accept the idea that people are deterministic systems and can't help it anyway. The problem is that once you really adopt this view, you tend to think in terms of "oh well I couldn't be or have acted differently anyway" which isn't true either because there still is such a thing as control and self-control. Anyone up for the task of defining the difference?

I think this is a serious bug in the human hardware, personally I've never heard of anyone who managed to hold both "views" simultaneously in a congruent and realistic manner. (It's not like I read all that much about this topic though).

I can do whatever I want, but I can't want whatever I want. If person X couldn't have influenced the good/bad outcome of a situation had they wanted to, then that person is neither worthy of praise nor blame. Blame alters wants in people, but as it doesn't telekinetically control inanimate systems, it's a waste of effort to blame them.

Blame is sometimes a useful thing, sometimes not.

The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of "bad person" and make them "deserve" bad treatment. Consequentialists don't on a primary level want anyone to be treated badly, full stop; thus is it written: "Saddam Hussein doesn't deserve so much as a stubbed toe." But if consequentialists don't believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences. Hurting bank robbers may not be a good in and of itself, but it will prevent banks from being robbed in the future. And, one might infer, although alcoholics may not deserve condemnation, societal condemnation of alcoholics makes alcoholism a less attractive option.

So here, at last, is a rule for which diseases we offer sympathy, and which we offer condemnation: if giving condemnation instead of sympathy decreases the incidence of the disease enough to be worth the hurt feelings, condemn; otherwise, sympathize. Though the rule is based on philosophy that the majority of the human race would disavow, it leads to intuitively correct consequences. Yelling at a cancer patient, shouting "How dare you allow your cells to divide in an uncontrolled manner like this; is that the way your mother raised you??!" will probably make the patient feel pretty awful, but it's not going to cure the cancer. Telling a lazy person "Get up and do some work, you worthless bum," very well might cure the laziness. The cancer is a biological condition immune to social influences; the laziness is a biological condition susceptible to social influences, so we try to socially influence the laziness and not the cancer.

The question "Do the obese deserve our sympathy or our condemnation," then, is asking whether condemnation is such a useful treatment for obesity that its utility outweights the disutility of hurting obese people's feelings. This question may have different answers depending on the particular obese person involved, the particular person doing the condemning, and the availability of other methods for treating the obesity...

The causal forces leading to an event could be analyzed and disentangled, and only a fraction of these forces are actions, which could have been different had people willed differently (though they could not have willed differently).

For some of the people whose willed actions were significantly behind the bad event, it makes sense to say "boo!" at them at a certain volume (i.e. to condemn them), so as to change the configurations of everyone's brains and make the culprits and bystanders less likely to act badly in the future.

For others whose willed actions were significantly behind the bad event, it does not make sense to say "boo!" at them, as that would do more harm than good.

Whether an opportunity for condemnation ought to be taken depends on the circumstances around it, including people's beliefs, even the untrue ones.
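
Put crudely (this is just a restatement of the rule sketched above in expected-value terms, with E[·] denoting an expectation over the likely consequences, given everyone's beliefs, including the untrue ones): condemn a culprit only when

E[future harm prevented by the condemnation] > E[harm caused by the condemnation itself]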

Or to be mentioned and praised by people, therefore, it is also for himself

Isn't this like saying I won't pay for my groceries because all the grocer wanted was to get paid?

Anyway, my counterargument would be "I have no choice in giving credit/blame either". Of course, the reply could well be "and I have no choice in debating the idea", etc. - which, I confess, can lead to some wasted time.

Since you presumably don't actually believe that you have no choice in giving credit or blame, you could answer by giving your consequentialist reasons for doing so (and by implication, what it would take to get you to stop doing so), but I assume you're trying to work out what would be socially effective rather than most accurate.

I can't decide if this trick is the Lighter Side of the Dark Arts, or vice versa. Either way, it's a beauty.


It's like waving your finger in front of someone's face while saying "this is not the argument you're looking for". Obi-Wan did it, so it must be fine.