Thoughts on moral intuitions

The style of the following article is slightly different from most LW articles, because I originally posted it on my blog. Some folks on #lesswrong liked it, so I thought it might be liked here as well.


Our moral reasoning is ultimately grounded in our moral intuitions: instinctive "black box" judgements of what is right and wrong. For example, most people would think that needlessly hurting somebody else is wrong, just because. The claim doesn't need further elaboration, and in fact the reasons for it can't be explained, though people can and do construct elaborate rationalizations for why everyone should accept the claim. This makes things interesting when people with different moral intuitions try to debate morality with each other.

---

Why do modern-day liberals (for example) generally consider it okay to say "I think everyone should be happy" without offering an explanation, but not okay to say "I think I should be free to keep slaves", regardless of the explanation offered? In an earlier age, the second statement might have been considered acceptable, while the first one would have required an explanation.

In general, people accept their favorite intuitions as given and require people to justify any intuitions which contradict those. If people have strongly left-wing intuitions, they tend to consider right-wing intuitions arbitrary and unacceptable, while considering left-wing intuitions so obvious as to not need any explanation. And vice versa.

Of course, you will notice that specific moral intuitions tend to dominate in some cultures, while other intuitions dominate in other cultures. People tend to pick up the moral intuitions of their environment: some claims go so strongly against the prevailing moral intuitions of my social environment that if I were to even hypothetically raise the possibility of them being correct, I would be loudly condemned and feel bad for even thinking that way. (Related: Paul Graham's What You Can't Say.) "Culture" here is to be understood as being considerably more fine-grained than just "the culture in Finland" or "the culture in India" - there are countless subcultures even within a single country.

---

Social psychologists distinguish between two kinds of moral rules: ones which people consider absolute, and ones which people consider to be social conventions. For example, if a group of people all bullied and picked on one of them, this would usually be considered wrong, even if everyone in the group (including the bullied person) thought it was okay. But if there's a rule that you should wear a specific kind of clothing while at work, then it's considered okay not to wear those clothes if you get special permission from your boss, or if you switch to another job without that rule.

The funny thing is that many people don't realize that the question of which rules count as which is itself a moral intuition, one which varies from person to person and from culture to culture. Jonathan Haidt writes in The Righteous Mind: Why Good People Are Divided by Politics and Religion of his finding that while the upper classes in both Brazil and USA were likely to find violations of harmless taboos to be violations of social convention, lower classes in both countries were more likely to find them violations of absolute moral codes. At the time, moral psychology had mistakenly assumed that "moving on" to a conception of right and wrong grounded only in concrete harms was the way children's morality naturally develops, and that children discover morality by themselves instead of learning it from others.

So moral psychologists had mistakenly been treating some moral intuitions as absolute when they were relative. But we can hardly blame them, for it's common to fail to notice that the distinction between "social convention" and "moral fact" is variable. Sometimes this is probably done on purpose, for rhetorical reasons - a speech is much more convincing if it can appeal to ultimate moral truths rather than to social conventions. But just as often, people simply don't seem to notice the distinction.

(Note to international readers: I have been corrupted by the American blogosphere and literature, and will therefore be using "liberal" and "conservative" mostly to denote their American meanings. I apologize profusely to my European readers for this terrible misuse of language and for not using the correct terminology like God intended it to be used.)

For example, social conservatives sometimes complain that liberals are pushing their morality on them, by requiring things such as not condemning homosexuality. To liberals, this is obviously absurd - nobody is saying that the conservatives should be gay, people are just saying that people shouldn’t be denied equal rights simply because of their sexual orientation. From the liberal point of view, it is the conservatives who are pushing their beliefs on others, not vice versa.

But let's contrast "oppressing gays" to "banning polluting factories". Few liberals would be willing to accept the claim that if somebody wants to build a factory that causes a lot of harm to the environment, he should be allowed to do so, and that to ban him from doing it would be to push liberal ideals on the factory owner. They might, however, protest that to prevent them from banning the factory would be pushing (e.g.) pro-capitalism ideals on them. So, in other words:

Conservatives want to prevent people from being gay. They think that this just means upholding morality. They think that if somebody wants to prevent them from doing so, that somebody is pushing their own ideals on them.

Liberals want to prevent people from polluting their environment. They think that this just means upholding morality. They think that if somebody wants to prevent them from doing so, that somebody is pushing their own ideals on them.

Now my liberal readers (do I even have any socially conservative readers?) will no doubt be rushing to point out the differences between these two examples. Most obviously, the fact that pollution hurts people other than just the factory owner, such as people at their nearby summer cottages who like seeing nature in a pristine and pure state, so it's justified to do something about it. But conservatives might also argue that openly gay behavior encourages being openly gay, and that this hurts those in nearby suburbs who like seeing people act properly, so it's justified to do something about it.

It's easy to say that "anything that doesn't harm others should be allowed", but it's much harder to rigorously define harm, and liberals and conservatives differ in when they think it's okay to cause somebody else harm. And even this is probably conceding too much to the liberal point of view, as it accepts a position where the morality of an act is judged primarily in the form of the harms it causes. Some conservatives would be likely to argue that homosexuality just is wrong, the way that killing somebody just is wrong.

My point isn't that we should accept the conservative argument. Of course we should reject it - my liberal moral intuitions say so. But we can't in all honesty claim an objective moral high ground. If we are to be honest with ourselves, we will accept that yes, we are pushing our moral beliefs on them - just as they are pushing their moral beliefs on us. And we will hope that our moral beliefs win.

Here's another example of "failing to notice the subjectivity of what counts as social convention". Many people are annoyed by aggressive vegetarians, who think anyone who eats meat is a bad person, or by religious people who are actively trying to convert others. People often say that it's fine to be vegetarian or religious if that's what you like, but you shouldn't push your ideology on others and require them to act the same.

Compare this to saying that it's fine to refuse to send Jews to concentration camps, or to let people die in horrible ways when they could have been saved, but you shouldn't push your ideology on others and require them to act the same. I expect that would sound absurd to most of us. But if you accept a certain vegetarian point of view, then killing animals for food is exactly equivalent to the Holocaust. And if you accept a certain religious view saying that unconverted people will go to Hell for an eternity, then not trying to convert them is even worse than letting people die in horrible ways. To say that these groups shouldn't push their morality on others is to already push your own ideology - which says that decisions about what to eat and what to believe are just social conventions, while decisions about whether to kill humans and save lives are moral facts - on them.

So what use is there in debating morality, if we have such divergent moral intuitions? In some cases, people have such widely differing intuitions that there is no point. In other cases, their intuitions are similar enough that they can find common ground, and in that case discussion can be useful. Intuitions can clearly be affected by words, and sometimes people do shift their intuitions as a result of having debated them. But this usually requires appealing to, or at least starting out from, some moral intuition that they already accept. There are inferential distances involved in moral claims, just as there are inferential distances involved in factual claims.

So what about the cases when the distance is too large, when the gap simply cannot be bridged? Well, in those cases, we will simply have to fight to keep pushing our own moral intuitions onto as many people as possible, and hope that they will end up having more influence than the unacceptable intuitions. Many liberals probably don't want to admit to themselves that this is what we should do in order to beat the conservatives - it goes so badly against the liberal rhetoric. It would be much nicer to pretend that we are simply letting everyone live the way they want to, and that we are fighting to defend everyone's right to do so.

But it would be more honest to admit that we actually want to let everyone live the way they want to, as long as they don't do things we consider "really wrong", such as discriminating against gays. And that in this regard we're no different from the conservatives, who would likewise let everyone live the way they wanted to, as long as they don't do things the conservatives consider "really wrong".

Of course, whether or not you'll want to be that honest depends on what your moral intuitions have to say about honesty.

---

Comments

For example, most people would think that needlessly hurting somebody else is wrong, just because. The claim doesn't need further elaboration, and in fact the reasons for it can't be explained, though people can and do construct elaborate rationalizations for why everyone should accept the claim.

I think this is a folk theory about how "moral intuitions" work, and I don't think that it is true, in the sense that it is a naive answer to a naive question that should have been dissolved rather than answered. For example, most people think everything "just because", and further elaboration is just confabulation unless you do something unusual.

Thinking that morality is a specialized domain (a separate magisterium?) leads to the idea of "debating morality" as though the actual real communication events that acquire that label are like other debates except about the specialized domain: engaged in for similar purposes, with similar actual end points, resolved according to similar rhetorical patterns, and so on. Compare and contrast variations on the terms: "ethical debates", "political debates", "scientific debates", "morality conversations", "morality dialogues", "political dialogues", etc. Imagine the halo of all such terms, and the wider halo of all communication events that match anything in the halo of terms, and then imagine running a clustering algorithm on those communication events to see if they are even distinct things, and if so what the real differences are.
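
A minimal sketch of what that clustering exercise might look like, with made-up placeholder transcripts and labels standing in for the recorded communication events an actual study would need:

```python
# Hypothetical sketch: do events labeled "moral debate", "political debate",
# etc. actually form distinct clusters, or one blurry halo? The transcripts
# and labels below are placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

transcripts = [
    "you ought to keep your promises because breaking them harms trust",
    "eating meat is wrong because it causes needless suffering",
    "the tax plan will raise the deficit according to the budget office",
    "this policy polls badly in swing states so the party should drop it",
]
labels = ["moral debate", "moral debate", "political debate", "political debate"]

# Represent each communication event by the words used in it.
vectors = TfidfVectorizer(stop_words="english").fit_transform(transcripts)

# Cluster the events without looking at the labels people gave them.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Agreement near 1.0 would suggest the everyday labels track a real
# difference between the events; agreement near 0 would suggest they don't.
print(adjusted_rand_score(labels, clusters))
```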

I don't want to say "Boo!" here too much. I'm friendly to the essay. And given your starting assumptions, it does pretty much lead to the open-minded interpretation of moral debates you derived. I tend to like people who go a little bit meta on those communication events more than people who just participate in them by blind reflex, but I think that going meta on those communication events a lot (with tape recorders and statistics and hypothesis testing and a research budget and so on) would reveal a lot of really useful theory. You linked to Haidt... some of this research is being done. I suspect more would be worthwhile :-)

Edited to add: And I bet the researchers' "moral debating" performance and moral conclusions would themselves be very interesting objects of study. Imagine being a fly on the wall while Haidt, Drescher, and Lakoff tried to genuinely Aumann-update on political issues of the day.

I think this is a folk theory about how "moral intuitions" work, and I don't think that it is true, in the sense that it is a naive answer to a naive question that should have been dissolved rather than answered

I'm not entirely sure what you mean, or perhaps you use "dissolving" in a different sense from how I understand it. I thought that dissolving a question meant taking a previously mysterious and unanswerable question and providing such an explanation that there's no longer any question to be asked. But if there is a mysterious and unanswerable question here, I'm not sure of what it is.

Potential questions this essay could have been written to answer, that might deserve to be dissolved rather than answered directly:

  • How does moral reasoning work (and what are the implications)?

  • How do moral debates find ground in moral feelings (and what are the implications)?

  • Where does the motivational force attributed to pro-social intrinsic values come from (and what are the implications)?

I'm currently reading a book called Braintrust: What Neuroscience Tells Us about Morality that frames the problem exactly like that. It's by Patricia Churchland. The view that she defends is that moral decisions are based on constraint satisfaction, just like a lot of other decision processes.

For what it's worth, I'd bet that your third question will be answered more or less directly, without dissolution. See Wix's reply for a step in that direction.

You're probably right. In some sense I just re-stated the same question a few times, dissolving more at each step :-)

Still not sure what you mean: questions one and two seem interesting but outside the scope of my essay, and I'm not sure I understand the third one. You said in your original comment that

I think this is a folk theory about how "moral intuitions" work, and I don't think that it is true, in the sense that it is a naive answer to a naive question that should have been dissolved rather than answered.

...but I don't think I really answered any of those three questions in my post.

To be fair, this post does point out a reason why debating morality is different from debating most other subjects (using different words from mine): people have very different priors on morality, and unlike in, say, physics, these priors can't be rebutted by observing the universe. Reaching an agreement in morality is therefore often much harder than in other subjects, if an agreement even can be reached.

Why do modern-day liberals (for example) generally consider it okay to say "I think everyone should be happy" without offering an explanation, but not okay to say "I think I should be free to keep slaves", regardless of the explanation offered?

"I think everyone should be happy" is an expression of a terminal value. Slavery is not a typically positive terminal value, so if you terminally value slavery you would have to say something like "I like the idea of slavery itself"; if you just say "I like slavery" people will think you have some justification in terms of other terminal values (e.g. slavery -> economics -> happiness).

So, to say you like slavery implies you have some justification for it as an instrumental value. Such justifications are generally considered to be incorrect for typical terminal values and so, the "liberals" could legitimately consider you to be factually incorrect.

So, to say you like slavery implies you have some justification for it as an instrumental value.

Well, let's ask some folks who actually did like slavery, and fought for it.

From the Texas Declaration of Secession, adopted February 2, 1861:

[T]he servitude of the African race, as existing in these States, is mutually beneficial to both bond and free, and is abundantly authorized and justified by the experience of mankind, and the revealed will of the Almighty Creator, as recognized by all Christian nations [...]

So at least some people who strongly believed that slavery was moral, claimed to hold this belief on the basis of (what they believed to be) both consequential and divine-command morality.

It's not at all obvious if they really believed it. People say stuff they don't believe all the time.

As a side note, I'd like to say I'd imagine nearly all political beliefs throughout history have had people citing every imaginable form of ethics as justification, and furthermore without even distinguishing between them. From what I understand, the vast majority of people don't even realize there's a distinction (I myself didn't know about non-consequentialist ideas until about 6 months ago, actually).

BTW, I would say that an argument about "the freedom to own slaves" is essentially an argument that slavery being allowed is a terminal value, although I doubt anyone would argue that owning slaves is itself a terminal value.

That seems like a valid distinction, but what makes you think that it is actually the distinction that motivates the difference in reactions?

There's a theory of ethics I seem to follow, but don't know the name of. Can someone refer me to existing descriptions?

The basic idea is to restrict the scope where the theory is valid. Many other theories fail (only in my personal view, obviously) by trying to solve universal problems: does my theory choose the best possible universe? How do I want everyone to behave? If everyone followed my theory, would that be a good or a stable world? Solving under these constraints can lead people to some pretty repugnant conclusions, as well as people rejecting otherwise good theories because they aren't universally valid.

By examining the rules I actually seem to follow, I am led to a more narrow theory. It doesn't tell me how to choose a whole universe from the realm of possibility - so it's not suitable for a superhuman AI to follow. But that makes it easier to decide what I personally should do.

Instead of having to decide whether democracy or autocracy is in some grand sense better, I can just estimate the marginal results of my own vote in the coming elections. Instead of figuring out how to maximize everyone's happiness, and fall into the traps of utilitarianism and its alternatives, I take advantage of the fact I am only one person - and maximize the happiness of myself and others near me, which is much easier.

Similarly, I don't have to worry about what would happen if everyone was as selfish as I was, because I can't affect other people's selfishness significantly enough for that to be a serious problem. Instead, I just need to consider the optimal degree of my own selfishness, given how other people in fact behave.

This doesn't mean I can't or don't take into account other people's welfare. I do, because I care about others. But I can accept that this is just a fact about the universe, produced by evolution and culture and other historical reasons, and that if I didn't feel a concern for others then I wouldn't act to benefit them. I don't need to invent a grand theory of how cooperating agents win, or how my morality is somehow objectively inferior and I should want to take a pill to modify my moral intuitions.

A brief statement of my approach might be: I'm not going to change my rules of ethics to win in dilemmas I don't expect to actually encounter, if these changes would make me perform less well in everyday situations. I don't want to be vulnerable to ethical-rules Pascal's mugging, so to speak.

There seems to be a parallel here, with the concepts of rationality and bounded rationality. Rational decision-making needs to solve problems like Newcomb's Dilemma, Pascal's Mugging, acausal outside-the-lightcone one-shot cooperation, and the trillionth digit of pi being odd with probability .5 when lacking logical omniscience. In contrast, bounded rationality recognises that these things are outside the scope, and concerns itself with being correct within its bounds.

So perhaps you could adopt the name 'bounded morality'?

http://blog.muflax.com/morality/non-local-metaethics/ I really like Muflax's post on this topic. For practical purposes, morality needs to be calculable.

Thanks! Muflax comes to this conclusion in that post:

your moral theories better be local, or you’re screwed.

I agree that local theories are better than nonlocal ones - although "local" is in some degree relative; local theories with a large "locality" may be acceptable. This isn't specific to moral theories, it applies to all decision algorithms.

This doesn't directly address my position that theories that only tell you what to do in some cases, but do cover the cases likely to occur to you personally, are valid and useful.

Jonathan Haidt writes in The Righteous Mind: Why Good People Are Divided by Politics and Religion of his finding that while the upper classes in both Brazil and USA were likely to find things like "not wearing a uniform to school" to be violations of social convention, lower classes in both countries were likely to find them violations of absolute moral codes.

Does he? The data in the source disagree (tables on 619-620). I haven't read all the text of the source, but it gives the uniform as the prototypical example of a custom and seems to say that it did work out that way. 40% of low SES adults in Recife (but not Porto Alegre) did claim it universal, but that's less than on any of the interesting examples. (Children everywhere showed less class-sensitivity than adults.)


Just to be clear, the description of the results of the experiment is correct; it just mixes up the control example with the experimental example.

Thanks, I edited the sentence to be clearer on that: "...that while the upper classes in both Brazil and USA were likely to find violations of harmless taboos to be violations of social convention, lower classes in both countries were more likely to find them violations of absolute moral codes."

That's a fun result.

Years ago, I had a "spiritual person" telling me about how god could help me if I prayed to him. Wishing to make a point by metaphor, I told him "it seems to me that god is just Santa Claus for grown-ups." "Yes," he responded, "Santa Claus gives kids what they want, god gives you what you need."

If only clever repartee established truth, then Stephen Colbert would be the last president we would ever need.

If the smarter you get, the more things you think are social convention and the fewer you think are absolute morality, then what is our self-improving AI going to eventually think about the CEV we coded in back when he was but an egg?

If the smarter you get, the more things you think are social convention and the fewer you think are absolute morality, then what is our self-improving AI going to eventually think about the CEV we coded in back when he was but an egg?

It isn't going to think the CEV is an absolute morality - it'll just keep doing what it is programmed to do because that is what it does. If the programming is correct it'll keep implementing CEV. If it was incorrect then we'll probably all die.

The relevance to 'absolute morality' here is that if the programmers happened to believe there was an absolute morality and tried to program the AI to follow that then they would fail, potentially catastrophically.

I have an impression that most of the explicit thinking about "morality" gets sabotaged by conditioning. The type of thought that allows you to eat the last piece of cake is associated with eating cake, the type of thought that leads to sense of guilt is associated with guilt.

Subsequently, a great deal of self-proclaimed systems of morality are produced in such a manner that they are much too ill-defined to be used to determine the correct actions, and are only usable for rationalization (utilitarianisms, I am looking at you).

Meanwhile, there is an objective scale: how effective are the rules for peer-to-peer cooperation (intellectual and other); and for the most part the moralities we find entirely reprehensible are also the least productive. There is no relativism in the jungle. No survival relativism, no moral relativism. And morality as practiced gets produced by selection on this criterion.

If you want to know if you should transplant organs out of 1 healthy person who was doing a routine check-up, into 10 people who will otherwise die, against the healthy person's will - well, the sort of societies that just cut up the healthy person and transplant end up with hardly anyone ever going to check-ups. The answer is clear if you actually want to know what you should do (when doing something for the sake of everyone). Unfortunately, when people think of morality, what results is a product of a lifelong history of conditioning that includes multiple small misdemeanors with associated rewards, and the guilt that resulted from thinking too clearly, and the pleasure that resulted from thinking sloppy and grand. People don't think along the lines of what is the best action; people think along the lines of what type of thought was most self-serving, and the one where ends justify means is usually the most self-serving (when coupled with rationalization).

Unfortunately, when people think of morality, what results is a product of a lifelong history of conditioning that includes multiple small misdemeanors with associated rewards, and the guilt that resulted from thinking too clearly, and the pleasure that resulted from thinking sloppy and grand. People don't think along the lines of what is the best action; people think along the lines of what type of thought was most self-serving, and the one where ends justify means is usually the most self-serving (when coupled with rationalization).

Can you clarify this/give some concrete examples?

Morals are significantly restrictive and influence personal pleasure (to the point that thinking about your own action produces guilt, a pain-like feeling, and the morals stand in the way of getting what you want).

Subsequently the thought is subject to reward/punishment conditioning.

If you rationalize why you should have more cake than the other person, you get cake, which is a reward; if you think too clearly about your ill-doings, you are hurt by a feeling of guilt; and if you engage in a particular form of thought whereby you do not ensure the correctness of the reasoning and do not note the ways in which your argument may fail (implicit assumptions, etc.), you can easily rationalize away the things you did wrong.

Basically, you are being conditioned to feel good about a bad approach to reasoning - where you make huge jumps, where you don't note the assumptions you make, where you just make invalid assumptions, where you don't search for faults, etc., and feel bad about a good approach to reasoning. Your very thought process is being trained to be sloppy and broken, with only a very superficial resemblance to logic - only enough resemblance that the guilt circuit won't be triggered.

There is some minor conditioning from the situations where you received some external punishment or reward, but those are too uncommon and too inconsistent, and the reward/punishment is too delayed, and at the very best those would condition mere avoidance of being caught.

Basically, you are being conditioned to feel good about a bad approach to reasoning - where you make huge jumps, where you don't note the assumptions you make, where you just make invalid assumptions, where you don't search for faults, etc., and feel bad about a good approach to reasoning.

My initial response to this was "that seems completely untrue," so I decided to hunt for examples. I think you're right, because I was able to come up with an example of myself doing this, namely downloading music and movies for free from the Internet. I do consider this kind-of-vaguely-like-stealing, but the "kind-of-vaguely" part is a good indication that my thinking is deliberately fuzzy in this area.

When I think about it, I don't know why–I don't consume enough entertainment materials that paying for it would be a significant pull on my finances, and I'm hardly financially strapped. I think it's because the usual strong positive reinforcement I would get for knowing I was "doing the right thing" despite wanting Thing X really badly is outweighed by the knowledge that several of my friends would make fun of me for paying for stuff on iTunes. Which...if I think about it...is also a pretty selfish reason!

You may just have convinced me that I should start paying for my music and movies, as a way of training my moral thinking to be less "sloppy"!

You may just have convinced me that I should start paying for my music and movies, as a way of training my moral thinking to be less "sloppy"!

Heh. But why did I do that? Selfish motives also (I make software for living).

I came up with another example. Consider the sunk cost issue. Suppose that you spent years working on a project that is heading nowhere, the effort was wasted, and there's a logical way to see that it is wasted effort. Any time your thought wavers in the direction of understanding that the effort was wasted, you get a stab of negative emotions - particular hormones are released into the bloodstream, particular pathways activate - and that is negative reinforcement for everything you've been doing, including the use of the mental framework that led you to that thought. I think LW calls something similar an 'ugh field', except the issue is that reinforcement is not so specific in its action as to make you avoid one specific thought without also making you avoid the very method of thinking that got you there.

I think it may help in general (to combat the induced sloppiness) to do some kind of work where you are reliably negatively reinforced for being wrong or sloppy. Studying mathematics and doing the exercises correctly can be useful. (Studying without exercises doesn't even work.) Software development, also. This will build a skill of what to do in order not to be sloppy, but it won't necessarily transfer onto moral reasoning; for the skill to transfer, something else may be needed.

Consider the sunk cost issue. Suppose that you spent years working on a project that is heading nowhere, the effort was wasted, and there's a logical way to see that it is wasted effort. Any time your thought wavers in the direction of understanding that the effort was wasted, you get a stab of negative emotions - particular hormones are released into the bloodstream, particular pathways activate - and that is negative reinforcement for everything you've been doing, including the use of the mental framework that led you to that thought.

Solution: have a community where you can gain respect and status by having successfully noticed and avoided sunk cost reasoning. LW isn't the best possible example of such a community, but a lot of the exercises done at, say, the summer minicamps in San Francisco were subsets of "get positive reinforcement for noticing Irrational Thought Pattern X in yourself, when normally various kinds of cognitive dissonance would make it tempting to sort of vaguely not notice it."

Solution: have a community where you can gain respect and status by having successfully noticed and avoided sunk cost reasoning.

This has its own failure mode.

I had read that article before. It's not something that I would consider a problem for myself...I rarely if ever abandon a project in the middle, and when I do, it's a) always been a personal project or goal that affects no one else, and b) always been something that turned out to be either a bad idea in the first place (e.g. my goal at age 14 of weighing 110 pounds...would never happen unless I actually develop an eating disorder), or important to me for the wrong reasons (going to the Olympics for swimming). Etc.

Note that this isn't any kind of argument against your point... If anything, it's my own personal failure mode of assuming everyone's brain is like mine and that their main problems are like mine.

However, I think it does count for something that nyan_sandwich posted this article, noticing a flaw in his reasoning, on LW...and got upvotes and praise.

LW is a terrible example, an attachment to a bunch of people (SI) who keep sinking their effort and other people's money, and rationalizing it. Regarding noticing an irrational pattern: so you notice it, get rid of it, then what? You aren't gaining some incredible powers of finding the correct answer (you'll just come up with something else that's wrong). It's something you always find in cults - thought reform, unlearn-what-you-learnt style. You don't find people sitting at desks doing math exercises all day, being ranked for being correct, being taught how to be correct; that would be a school/university course, it is boring, it's no silver bullet, it takes time.

LW is a terrible example, an attachment to a bunch of people (SI) who keep sinking their effort and other people's money, and rationalizing it. Regarding noticing an irrational pattern: so you notice it, get rid of it, then what? You aren't gaining some incredible powers of finding the correct answer.

Why are you here then? Please leave.

Why are you here then? Please leave.

Are you intentionally trying to promote evaporative cooling?

Are you intentionally trying to promote evaporative cooling?

Evaporative cooling regarding that attitude and this behavioral pattern? ABSOLUTELY!

Boredom. You guys are highly unusual, have to give you that.

Might I suggest using fungibility? There are more effective ways than LW to treat boredom and desire for unusual conversation, if you pursue them separately.

I have been corrupted by the American blogosphere and literature, and will therefore be using "liberal" and "conservative" mostly to denote their American meanings.

You could use “left-wing” and “right-wing”, whose meanings (across the First World at least) are more consistent.

FWIW, I don't actually want to let everyone live the way they want to.
Ideally, I would far prefer that everyone live the way that's best for everyone.

Of course, I don't know that there is any such way-to-live, and I certainly don't know what it is, or how to cause everyone to live that way.

I might end up endorsing letting everyone live the way they want to, if I were convinced that that was the best achievable approximation of everyone living the way that's best for everyone. (IRL I'm not convinced of that.) But it would be an approximation of what I want, not what I want.

So what use is there in debating morality, if we have such divergent moral intuitions?

It's worth drawing a distinction here between debating morality and discussing it.

Roughly, I would say that the goal of debate is to net-increase among listeners their support for the position I champion, and the goal of discussion is to net-increase among listeners their understanding of the positions being discussed. In both cases, I might or might not hold any particular position, and participants in the discussion/debate are also listeners.

So. The value to me of debating moral positions is to convince listeners to align themselves with the moral positions I choose to champion. The value of debating other positions in moral terms is to convince listeners to align themselves with the other positions I choose to champion. The value to me of discussing moral positions is to learn more and to help others learn more about the various moral positions that exist.

Of course, many people respond negatively when they infer that someone is trying to get them to change their positions, and so it's often valuable when debating a topic to pretend to be discussing it instead. And, of course, if I believe that to understand my position is necessarily to support it, then I won't be able to tell the difference between debating and discussing that position.

So all of those things are sometimes called "debating morality", sometimes accurately. And debating morality is sometimes called other things.

Of course, many people respond negatively when they infer that someone is trying to get them to change their positions, and so it's often valuable when debating a topic to pretend to be discussing it instead.

That can also backfire with charges of "disingenuous".

Well, yes. I mean, it is disingenuous.
If I'm going to successfully pretend to be doing something I'm not, it helps to not get caught out.

Our moral reasoning is ultimately grounded in our moral intuitions

I don't accept the premise. Moral intuitions play a part, but the ultimate constraints come more from the nature of rational discourse and the psychology of the discoursing species. For extended arguments along these lines (well mostly the emphasized part) see Jürgen Habermas and Thomas Scanlon.

Isn't "the psychology of the discoursing species" another way of saying "moral intuitions"? Or at least, those are included in the umbrella of that term.

Yes, they're included. Well said. I believe this way of putting it, however, supports my criticism of the phrase "ultimately grounded in our moral intuitions;" the phrase is badly incomplete.

I would compare ethics to swimming in a giant tub of ice cream (all the same flavor) with the rest of humanity. Everyone has a favorite flavor which their intuitions pick for them, but the world can't fit everyone's tastes. Some flavors are acceptable deviations, but others are painfully unbearable. It only makes sense to try and fill the tub with your personal preference.

So what about the cases when the distance is too large, when the gap simply cannot be bridged? Well, in those cases, we will simply have to fight to keep pushing our own moral intuitions onto as many people as possible, and hope that they will end up having more influence than the unacceptable intuitions.

We're not stuck with our moral intuitions, unless we have "faith" that they're "true."

  1. Doesn't it seem odd, Kaj_Sotala—even irrational—that we should push our "moral intuitions" when that's all they are: intuitions—which don't describe any reality, which aren't intuitions about anything?

  2. We can change our "moral intuitions" rationally—although the mission isn't one of finding "truth". Our standards of personal integrity respond to our adaptive needs, and we can help change them in the interest of rational adaptation. They are not, even for us, "ultimate moral values."

and we can help change them in the interest of rational adaptation

And why should you do that?

I probably have a very different sense of what's moral and what isn't from the author (who claims to be American liberal), but I agree with pretty much everything the author says about meta-morality.

The author doesn't claim to be American and in fact is, as far as I know, Finnish.

Potentially "American liberal" is American-flavour liberalism, and not an American who is also a liberal.

Damn ambiguous natural languages, you may be right.

Though it should probably be noted that I used "liberal" mostly as a convenient shorthand to characterize my views regarding gay rights, rather than as a characterization of my political views in general. I expect there to be a number of issues on which my views map badly to the views of the typical American liberal, though I don't actually know American politics well enough to know exactly which views those are.