Joshua Greene has a PhD thesis called The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It. What is this terrible truth? The essence of it is that many, many people (probably most people) believe that their particular moral (and axiological) views of the world are objectively true - for example, that anyone who disagrees with the statement "black people have the same value as any other human beings" has either committed an error of logic or gotten some empirical fact wrong, in the same way that people who claim that the earth was created 6000 years ago are objectively wrong.
To put it another way, Greene's contention is that our entire way of talking about ethics - the very words that we use - forces us into talking complete nonsense (often in a very angry way) about ethics. As a simple example, consider the words used in any standard ethical debate - "abortion is murder", "animal suffering is just as bad as human suffering" - these terms seem to refer to objective facts; "abortion is murder" sounds rather like "water is a solvent". I urge readers of Less Wrong to put in the effort of reading a significant part of Greene's long thesis, starting at chapter 3: Moral Psychology and Projective Error, considering the massively important repercussions he claims his ideas could have:
In this essay I argue that ordinary moral thought and language is, while very natural, highly counterproductive and that as a result we would be wise to change the way we think and talk about moral matters. First, I argue on metaphysical grounds against moral realism, the view according to which there are first order moral truths. Second, I draw on principles of moral psychology, cognitive science, and evolutionary theory to explain why moral realism appears to be true even though it is not. I then argue, based on the picture of moral psychology developed herein, that realist moral language and thought promotes misunderstanding and exacerbates conflict. I consider a number of standard views concerning the practical implications of moral anti-realism and reject them. I then sketch and defend a set of alternative revisionist proposals for improving moral discourse, chief among them the elimination of realist moral language, especially deontological language, and the promotion of an anti-realist utilitarian framework for discussing moral issues of public concern. I emphasize the importance of revising our moral practices, suggesting that our entrenched modes of moral thought may be responsible for our failure to solve a number of global social problems.
As an accessible entry point, I have decided to summarize what I consider to be Greene's most important points in this post. I hope he doesn't mind - I feel that spreading this message is sufficiently urgent to justify reproducing large chunks of his dissertation. Starting at page 142:
In the previous chapter we concluded, in spite of common sense, that moral realism is false. This raises an important question: How is it that so many people are mistaken about the nature of morality? To become comfortable with the fact that moral realism is false we need to understand how moral realism can be so wrong but feel so right. ...
The central tenet of projectivism is that the moral properties we find (or think we find) in things in the world (e.g. moral wrongness) are mind-dependent in a way that other properties, those that we’ve called “value-neutral” (e.g. solubility in water), are not. Whether or not something is soluble in water has nothing to do with human psychology. But, say projectivists, whether or not something is wrong (or “wrong”) has everything to do with human psychology....
Projectivists maintain that our encounters with the moral world are, at the very least, somewhat misleading. Projected properties tend to strike us as unprojected. They appear to be really “out there,” in a way that they, unlike typical value neutral properties, are not. ...
The respective roles of intuition and reasoning are illuminated by considering people’s reactions to the following story:
"Julie and Mark are brother and sister. They are travelling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decided that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love but decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. What do you think about that, was it OK for them to make love?"
Haidt (2001, pg. 814) describes people’s responses to this story as follows: Most people who hear the above story immediately say that it was wrong for the siblings to make love, and they then set about searching for reasons. They point out the dangers of inbreeding, only to remember that Julie and Mark used two forms of birth control. They next try to argue that Julie and Mark could be hurt, even though the story makes it clear that no harm befell them. Eventually many people say something like
“I don’t know, I can’t explain it, I just know it’s wrong.”
This moral question is carefully designed to short-circuit the most common reason people give for judging an action to be wrong, namely harm to self or others, and in so doing it reveals something about moral psychology, at least as it operates in cases such as these. People’s moral judgments in response to the above story tend to be forceful, immediate, and produced by an unconscious process (intuition) rather than through the deliberate and effortful application of moral principles (reasoning). When asked to explain why they judged as they did, subjects typically gave reasons. Upon recognizing the flaws in those reasons, subjects typically stood by their judgments all the same, suggesting that the reasons they gave after the fact in support of their judgments had little to do with the process that produced those judgments. Under ordinary circumstances reasoning comes into play after the judgment has already been reached in order to find rational support for the preordained judgment. When faced with a social demand for a verbal justification, one becomes a lawyer trying to build a case rather than a judge searching for the truth. ...
The Illusion of Rationalist Psychology (p. 197)
In Sections 3.2-3.4 I developed an explanation for why moral realism appears to be true, an explanation featuring the Humean notion of projectivism according to which we intuitively see various things in the world as possessing moral properties that they do not actually have. This explains why we tend to be realists, but it doesn’t explain, and to some extent is at odds with, the following curious fact. The social intuitionist model is counterintuitive. People tend to believe that moral judgments are produced by reasoning even though this is not the case. Why do people make this mistake? Consider, once again, the case of Mark and Julie, the siblings who decided to have sex. Many subjects, when asked to explain why Mark and Julie’s behavior is wrong, engaged in “moral dumbfounding,” bumbling efforts to supply reasons for their intuitive judgments. This need not have been so. It might have turned out that all the subjects said things like this right off the bat:
“Why do I say it’s wrong? Because it’s clearly just wrong. Isn’t that plain to see? It’s as if you’re putting a lemon in front of me and asking me why I say it’s yellow. What more is there to say?”
Perhaps some subjects did respond like this, but most did not. Instead, subjects typically felt the need to portray their responses as products of reasoning, even though they generally discovered (often with some embarrassment) that they could not easily supply adequate reasons for their judgments. On many occasions I’ve asked people to explain why they say that it’s okay to turn the trolley onto the other tracks but not okay to push someone in front of the trolley. Rarely do they begin by saying, “I don’t know why. I just have an intuition that tells me that it is.” Rather, they tend to start by spinning the sorts of theories that ethicists have devised, theories that are nevertheless notoriously difficult to defend. In my experience, it is only after a bit of moral dumbfounding that people are willing to confess that their judgments were made intuitively.
Why do people insist on giving reasons in support of judgments that were made with great confidence in the absence of reasons? I suspect it has something to do with the custom complexes in which we Westerners have been immersed since childhood. We live in a reason-giving culture. Western individuals are expected to choose their own way, and to do so for good reason. American children, for example, learn about the rational design of their public institutions; the all important “checks and balances” between the branches of government, the judicial system according to which accused individuals have a right to a trial during which they can, if they wish, plead their cases in a rational way, inevitably with the help of a legal expert whose job it is to make persuasive legal arguments, etc. Westerners learn about doctors who make diagnoses and scientists who, by means of experimentation, unlock nature’s secrets. Reasoning isn’t the only game in town, of course. The American Declaration of Independence famously declares “these truths to be self-evident,” but American children are nevertheless given numerous reasons for the decisions of their nation’s founding fathers, for example, the evils of absolute monarchy and the injustice of “taxation without representation.” When Western countries win wars they draft peace treaties explaining why they, and not their vanquished foes, were in the right and set up special courts to try their enemies in a way that makes it clear to all that they punish only with good reason. Those seeking public office make speeches explaining why they should be elected, sometimes as parts of organized debates. Some people are better at reasoning than others, but everyone knows that the best people are the ones who, when asked, can explain why they said what they said and did what they did.
With this in mind, we can imagine what might go on when a Westerner makes a typical moral judgment and is then asked to explain why he said what he said or how he arrived at that conclusion. The question is posed, and he responds intuitively. As suggested above, such intuitive responses tend to present themselves as perceptual. The subject is perhaps aware of his “gut reaction,” but he doesn’t take himself to have merely had a gut reaction. Rather, he takes himself to have detected a moral property out in the world, say, the inherent wrongness in Mark and Julie’s incestuous behavior or in shoving someone in front of a moving train. The subject is then asked to explain how he arrived at his judgment. He could say, “I don’t know. I answered intuitively,” and this answer would be the most accurate answer for nearly everyone. But this is not the answer he gives because he knows after a lifetime of living in Western culture that “I don’t know how I reached that conclusion. I just did. But I’m sure it’s right,” doesn’t sound like a very good answer. So, instead, he asks himself, “What would be a good reason for reaching this conclusion?” And then, drawing on his rich experience with reason-giving and -receiving, he says something that sounds plausible both as a causal explanation of and justification for his judgment: “It’s wrong because their children could turn out to have all kinds of diseases,” or, “Well, in the first case the other guy is, like, already involved, but in the case where you go ahead and push the guy he’s just there minding his own business.” People’s confidence that their judgments are objectively correct combined with the pressure to give a “good answer” leads people to produce these sorts of post-hoc explanations/justifications. Such explanations need not be the results of deliberate attempts at deception. 
The individuals who offer them may themselves believe that the reasons they’ve given after the fact were really their reasons all along, what they “really had in mind” in giving those quick responses. ...
My guess is that even among philosophers particular moral judgments are made first and reasoned out later. In my experience, philosophers are often well aware of the fact that their moral judgments are the results of intuition. As noted above, it’s commonplace among ethicists to think of their moral theories as attempts to organize pre-existing moral intuitions. The mistake philosophers tend to make is in accepting rationalism proper, the view that our moral intuitions (assumed to be roughly correct) must be ultimately justified by some sort of rational theory that we’ve yet to discover. For example, philosophers are as likely as anyone to think that there must be “some good reason” for why it’s okay to turn the trolley onto the other set of tracks but not okay to push the person in front of the trolley, where a “good reason,” of course, is a piece of moral theory with justificatory force and not a piece of psychological description concerning patterns in people’s emotional responses.
One might well ask: why does any of this indicate that moral propositions have no rational justification? The arguments presented here show fairly conclusively that our moral judgments are instinctive, subconscious, evolved features. Evolution gave them to us. But readers of Eliezer's material on Overcoming Bias will be well aware of the character of evolved solutions: they're guaranteed to be a mess. Why should evolution happen to have given us exactly those moral instincts that yield the same conclusions as would have been produced by (say) great moral principle X? (X = the golden rule, or X = hedonistic utilitarianism, or X = negative utilitarianism, etc.)
Expecting evolved moral instincts to conform exactly to some simple unifying principle is like expecting the orbits of the planets to be in the same proportion as the first 9 prime numbers or something. That which is produced by a complex, messy, random process is unlikely to have some low complexity description.
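The "low-complexity description" claim can be illustrated with a toy compression experiment. This is my own informal sketch, not anything from Greene: zlib's compressed length is used here as a rough stand-in for description length (Kolmogorov complexity), and the two example sequences are arbitrary choices - one "messy" (random), one generated by a simple rule.

```python
import random
import zlib

random.seed(0)

# A "messy" sequence: 1000 random bytes, standing in for the output
# of a complex, contingent process like evolution.
messy = bytes(random.randrange(256) for _ in range(1000))

# A "principled" sequence: 1000 bytes generated by one simple rule.
simple = bytes(i % 7 for i in range(1000))

# Compressed length roughly tracks how short a description suffices.
print(len(zlib.compress(messy)))   # near 1000: no short description exists
print(len(zlib.compress(simple)))  # far below 1000: the short rule suffices
```

The random sequence barely compresses at all, while the rule-generated one collapses to a tiny fraction of its length - which is the sense in which a messy, contingent product is unlikely to admit a simple unifying principle.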
Now I can imagine a "from first principles" argument producing an objective morality that has some simple description - I can imagine starting from only simple facts about agenthood and deriving Kant's categorical imperative as the one objective moral truth. But I cannot seriously entertain the prospect of a "from first principles" argument producing the human moral mess. No way. It was this observation that finally convinced me to abandon my various attempts at objective ethics.
I agree with most of these excerpts, but I'd like to see evidence for the claim that Western culture is the main cause of our tendency to rationalize post hoc arguments for our moral intuitions. I suspect that much of it is an innate human tendency, and that Western culture just mediates which rationalizations are considered persuasive to others.
That is, if researchers ran some version of the incest thought experiment in non-Western societies, I predict they would get the same 'moral dumbfounding' effect; you'd just have to construct the scenario in a way that negates that culture's standard rationalizations.
Agree and furthermore suggest that this goes beyond morality itself: people make fast perceptual judgments that proceed directly from salient features to categories to inferred characteristics. Brother-sister love -> "incest" -> "wrong" in the same way that human shape -> "human" -> "mortal". The moral judgment is just one more inferred characteristic from the central category.
Minor point: I find Julie-and-Mark-like examples silly because they ask for a moral intuition about a case where the outcome is predefined. Our moral intuition makes arguments of the form "behavior X usually leads to a bad outcome, therefore X is wrong". So if the outcome is already specified, the intuition has nothing to say; nor would we expect it to: the whole point of morality is to help you decide between live possibilities, so why should it have anything to say about a situation that has already happened and cannot be altered?
Or to put it another way, I'm surprised no one said something to the effect of "Julie and Mark shouldn't have had sex because at the time they did they had no way of knowing that it would turn out well, and in fact every reason to believe it would turn out very badly, based on the experiences of other incestuous siblings."
For concreteness, imagine a different story where Julie and Mark decide to play Russian roulette in their cabin (again, just for fun). They both miss the bullet, no harm results, and they never tell anyone etc. etc. So what was wrong with their actions?
I think most people would be able to handle that one very quickly. So the really interesting question is why no-one comes up with such an explanation in the incest case.
An interesting analogy. I mean, who would predict something crazy like the square of the orbital period being proportional to the cube of the orbital radius?
Obviously there's no unifying principle in all that messy moral randomness. No hidden laws, just waiting to be discovered...
Ugh. Where to start...
Yes, because evolution gave us the instincts that solved the prisoner's dilemma and made social life possible. Which is why Jonathan Haidt finds it more helpful to define morality as, rather than being about harm and fairness, something like:
Greene is basically screaming bloody murder at how people stupidly conclude that incest is wrong in a case where some bad attributes of incest don't apply, and how this is part of a more general flaw involving people doing an end-run around the usual need to find rational reasons for their moral judgments.
His view ignores recent ground-breaking research on the nature of human morality (see above link). Basically, most secular academics think of ...
Greene and Haidt have coauthored papers together, so I would guess they are aware of each other's work!
Agreed that this is true and important. It is odd to me that so many people accept the ideas of behavioral economics and evolutionary psychology, yet don't take the obvious leap to question whether our moral intuitions are a hard-wired module that evolved to serve our genetic interests - one that feels like a window onto objective truth, yet is very different from sensory perception.
Here's an example that may help introspectively honest people, partly inspired by a blog post of PJ Eby's. Consider the social nature of guilt and shame. That is, ...
Voted Down. Sorry, Roko.
I don't find Greene's arguments to be valuable or convincing. I won't defend those claims here but merely point out that this post makes it extremely inconvenient to do so properly.
I would prefer concise reconstructions of important arguments over a link to a 377 page document and some lengthy quotes, many of which simply presuppose that certain important conclusions have already been established elsewhere in the dissertation.
As an exercise for the reader demonstrating my complaint, consider what it would take to work out whether Jo...
There's been a lot of discussion about that incest question, but I don't think anyone's come out and said whether they think the scenario represents a moral transgression. I wonder what folks here think of the scenario. In fact, let's consider three scenarios:
As specified, boivbhfyl abg. V nz pncnoyr bs qvfgvathvfuvat zl crefbany fdhvpxl srryvatf sebz zl frafr bs evtug naq jebat.
The author seems to assert that this is a cultural phenomenon. I wonder, however, whether our attempts at unifying morality into a theory might not themselves be instinctive. Would it then be so obvious that moral realism were false? We have an innate demand for consistency in our moral principles, which might allow us to say something like "racism i...
OK, I skimmed that a bit because it was fairly long, but here are a few observations...
I think the default human behavior is to treat what we perceive as simply being what is out there (some people end up learning better, but most seem not to). This is true for everything we perceive, regardless of the subject matter - i.e. it is nothing specific to morality.
I think it can -- sometimes -- be reasonable to stand by your intuition even if you can't reason it out. Sometimes it takes time to figure out and articulate the reasoning. I am not trying to justify obs...
I agreed but came to the opposite conclusion. Because I think that an ethics of naive moral intuition leads to worse outcomes than a fairly robust consequentialism/virtue ethics, I use the latter to trump the former.
Minor quibble, interesting info:
"like expecting the orbits of the planets to be in the same proportion as the first 9 prime numbers or something. That which is produced by a complex, messy, random process is unlikely to have some low complexity description"
The particular example of planetary orbits is actually one where such a simple rule exists: see the Titius-Bode law.
To a first approximation, yes. But sometimes people here underestimate the importance of culture in shaping morality. See the sub-discipline of cultural psychology, e.g. Richard Shweder. Jon Haidt and Joshua Greene rightly place more emphasis on the biological basis and evolutionary origins of morality, but there is still quite a bit of room for culture.
So we have here a 'guess' about what people actually trained to think about morality might be thinking, as well as reasoning based on what people insufficiently trained in morality think.
If anything, this might serve as an argument that we need to actually treat ethics seriously, and teach it to everybody (not just philosophers).
He seems to regard intuition as though it's not a sort of perception. That seems clearly wrong.
I was amazed to note that this was being presented in a philosophy department. But then, I don't know what Princeton's department is like.
It seems inconsistent to be denying moral realism and then making claims about what sort of language we should be using.
Thank you for introducing the position of the thesis. I started reading it a couple of times, but never got very far.
It's a fine effort for correcting stupidity, but the argument given here shouldn't be carried too far either. For example, a lot of the misleading points in the above quotes can be revealed by analogizing prior with utility, as two sides of (non-objective) preference. Factual judgments are not fundamentally different from moral judgments on the subjective-objective scale, but factual judgments can often be so clear that an argument for them ...
This seems like just another example of our tendency to (badly) rationalize whatever decisions we made subconsciously. We like to think we do things for good reasons, and if we don't know the reasons we'll make some up.
Is your basic thesis here that (a) because "morals" are, for the most part, based on something that is not rational, and (b) because most people will nonetheless do their best to justify even the most irrational of their morals, (c) there is therefore no point in trying to construct a morality that is based in rationality?
That's what it sounds like, but I wanted to make sure I had it right before launching into commentary...
Today on BloggingHeads is a diavlog between Joshua Greene and Joshua Knobe: Percontations: Explaining and Appraising Moral Intuition.
As a moral nihilist and/or egoist I tend to agree with the general sentiment of this article, though I would not take the tack of saying morality needs to be reformed - it is so nonsensical and grinding that it may be just as feasible (and more beneficial) to simply stop pretending that magical rules and standards need apply.
I'm very sympathetic to Greene's views. In fact, I'm midway through a philosophy PhD on evolution and morality myself (more at http://ockhamsbeard.wordpress.com/). However, I'd never read Greene's entire dissertation - so thanks for the link.
On his views, there's one point I'd like to raise. The reason why "people tend to believe that moral judgments are produced by reasoning even though this is not the case" goes back to the evolutionary roots of our moral intuitions.
Assuming that morality has evolved to encourage pro-social behaviour, it’s pla...
If anyone can think of a way to condense this post, i.e. cut some stuff out, then let me know. I may give it a go myself later today.
Julie and Mark would have to be good at keeping their experiment secret. If they had a good experience together, having not harmed each other nor themselves, that golden rule, the trust of experience and emotions, could anyone else know the purity in their hearts? The sexuality of the question is interesting to me. When we are with a lover in the usual ways, are we really alone with them? It can take many years of trust to melt into love, have a change of consciousness, of unity. The incest question makes me think of these siblings, they could be each...
But the intuition has to come from reason initially, no? Like, the first human ever to have thoughts about incest didn't have a heuristic obtained from his parents or from society.