To the question "Which circuits are moral?", I kind of saw that one coming. If you allow me to mirror it: How do you know which decisions involve moral judgements?

Well, I would ask whether the decision in question is one that people (including me) normally refer to as a moral decision. "Moral" is a category of meaning whose content we determine through social negotiation, produced by some combination of each person's inner shame/disgust/disapproval registers and the views and attitudes expressed more generally throughout their society. (Those two sources of moral judgments have important interrelations, of course!) I tend to think that many decisions have a moral flavor, but certainly not all. Very few people would say that there is an ethical imperative to choose an English muffin instead of a bagel for breakfast, for instance.

"A moral action is one which you choose (== that makes you feel good) without being likely to benefit your genes."

Oh, I think a large subset of moral choices are moral precisely because they do benefit our genes -- we say that someone who is a good parent is moral, not immoral, despite the genetic advantages conferred by being a good parent. I think some common denominators are altruism (favoring tribe over self, with tribe defined at various scales), virtuous motives, prudence, and compassion. Note that these are all features that relate to our role as social animals -- you could say that morality is a conceptual outgrowth of survival strategies that rely on group action (and hence becomes a way to avoid collective action problems and other situations in which individually rational behavior is suboptimal from the group's perspective).

That's a confusion. I was explicitly talking of "moral" circuits.

Well, that presupposes that we have some ability to distinguish between moral circuits and other circuits. To do that, you need some criterion for what morality consists in other than evolutionary imperatives, because all brain connections are at least partially caused by evolution. Ask yourself: what decision procedure would I articulate to justify to Eisegetes the claim that the circuits responsible for regulating blinking, for creating feelings of hunger, or for giving rise to sexual desire are, or are not, "moral circuits"?

In other words, you will always be faced with the problem of pointing to a particular brain circuit X, which you call a "moral circuit," and having someone reply, "the behavior that circuit controls/compels/mediates is not something I would describe as moral." In order to justify your claim that there are moral circuits, or that specific circuits relate to morality, you need an exogenous conception of what morality is. Otherwise your definition of morality will necessarily encompass a lot of brain circuitry that very few people would call "moral."

It's Euthyphro, all over again, but with brains.

I could make your brain's implicit ordering of moral options explicit with a simple algorithm:
1. Ask for the most moral option.
2. Exclude it from the set of options.
3. While options left, goto 1.
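
(Concretely, that procedure is a selection sort driven by a hypothetical "most moral" oracle. A minimal sketch, under the assumption that the oracle can always single out exactly one winner -- which is the assumption at issue below:)

```python
def make_ordering_explicit(options, most_moral):
    """Turn an implicit moral ranking into an explicit one by repeatedly
    asking for the best remaining option and removing it -- in effect a
    selection sort driven by the `most_moral` oracle."""
    remaining = list(options)
    ranking = []
    while remaining:                      # 3. while options left, goto 1
        best = most_moral(remaining)      # 1. ask for the most moral option
        remaining.remove(best)            # 2. exclude it from the set of options
        ranking.append(best)
    return ranking
```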

Well, I was trying to say that I don't think we have preferences that finely grained. To wit:

Rank the following options in order of moral preference:

1. Kill one Ugandan child, at random.
2. Kill one South African child, at random.
3. Kill one Thai child. You have to torture him horribly for three days before he dies, but his death will make the lives of his siblings better.
3.5 Kill two Thai children, in order to get money with which to treat your sick spouse.
4. Rape and murder ten children, but also donate $500 million to a charity which fights AIDS in Africa.
5. Rape 500 children.
6. Sexually molest (short of rape) 2,000 children.
7. Rape 2000 women and men.
8. Rape 4000 convicted criminals.
9. Execute 40,000 convicted criminals per year in a system with a significant, but unknowable, error rate.
10. Start a war that may, or may not, make many millions of people safer, but will certainly cause at least 100,000 excess deaths.

The problem is that the devil is in the details. It would be very hard to determine, as between many of these examples, which is "better" or "worse", or which is "more moral" or "less moral." Even strict utilitarians would get into trouble, because they would face so much uncertainty in trying to articulate the consequences of each scenario. Honestly, I think many people, if forced, could put them in some order, but they would view that order as very arbitrary, and not necessarily something that expressed any "truth" about morality. Pressed, they would be reluctant to defend it.

Hence, I said above that people are probably indifferent between many choices in terms of whether they are "more moral" or "less moral." They won't necessarily have a preference ordering between many options, viewing them as equivalently heinous or virtuous. This makes sense if you view "moral circuitry" as made up of gradated feelings of shame/disgust/approval/pleasure. Our brain states are quantized and finite, so there are certainly a finite number of "levels" of shame or disgust that I can experience. Thus, necessarily, many states of affairs in the world will trigger those responses to an identical degree. This is the biological basis for ethical equivalence -- if two different actions produce the same response from my ethical circuitry, how can I say meaningfully that I view one or the other as more or less "moral?"
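
A toy sketch of that counting point, with invented numbers standing in for the actual number of response levels and scenarios:

```python
import random
from collections import Counter

# Toy model with invented numbers: if the ethical circuitry can only
# produce a small number of distinguishable response levels, then any
# large set of scenarios must contain many pairs it cannot tell apart.
RESPONSE_LEVELS = 10                  # assumed: few levels of shame/disgust/approval
scenarios = list(range(10_000))       # assumed: far more scenarios than levels

# Each scenario triggers some response level (the assignment here is arbitrary).
response = {s: random.randrange(RESPONSE_LEVELS) for s in scenarios}

# By the pigeonhole principle, many scenarios share a level, so the
# "more moral than" relation the circuitry supports is only a weak ordering.
level_counts = Counter(response.values())
tied_pairs = sum(n * (n - 1) // 2 for n in level_counts.values())
print(f"scenario pairs the circuitry cannot rank: {tied_pairs}")
```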

To be sure, we can disagree on how many levels of response there are. I would tend to think the number of ethical responses we can have is quite small -- we can clearly say that murder is usually worse than rape, for instance. But we have great difficulty saying whether raping a 34-year-old is better or worse than raping a 35-year-old. You might think that enough reflection would produce a stable preference order between those states every time. But if we make the difference between their ages something on the order of a second, I don't see how you could seriously maintain that you experience a moral preference.

Unknown, it seems like what you are doing is making a distinction between a particular action being obligatory -- you do not feel like you "ought" to torture someone -- and its outcome being preferable -- you feel like it would be better, all other things being equal, if you did torture the person.

Is that correct? If it isn't, I have trouble seeing why the g64 variant of the problem wouldn't overcome your hesitation to torture. Or are you simply stating a deontological side-constraint -- I will never torture, period, not even to save the lives of my family or the whole human race?

In any event, what a lot of people mean when asked what they "should do" or what they "ought to do" is "what am I obligated to do?" I think this disambiguation helps, because it seems as if you are now making a distinction between TORTURE being morally required (which you do not seem to believe) and its being morally virtuous (which you do seem to believe).

Is that about right?

Still haven't heard from even one proponent of TORTURE who would be willing to pick up the blowtorch themselves. Kind of casts doubt on the degree to which you really believe what you are asserting.

I mean, perhaps it is the case that although picking up the blowtorch is ethically obligatory, you are too squeamish to do what is required. But that should be overridable by a strong enough ethical imperative. (I don't know if I would pick up the blowtorch to save the life of one stranger, for instance, but I would feel compelled to do it to save the population of New York City.) So: that should be solvable, in your system, by using a bigger number of people than 3^^^3. Right? So make it g64 (Graham's number) of people getting dust-specked.
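
(For reference, 3^^^3 is Knuth up-arrow notation, and Graham's number is built by iterating the arrow count:)

```latex
% 3^^^3 is 3 \uparrow\uparrow\uparrow 3 in Knuth's up-arrow notation;
% Graham's number g_{64} is defined by iterating the number of arrows:
g_1 = 3\uparrow\uparrow\uparrow\uparrow 3, \qquad
g_n = 3\uparrow^{\,g_{n-1}} 3 \quad (2 \le n \le 64), \qquad
g_{64} = \text{Graham's number}.
```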

Will anyone on this board forthrightly assert that they would pick up the blowtorch to keep specks out of the eyes of g64 people? Not "I might do it," but "yes, I would do it," in the same sense in which I can say with a lot of confidence that I would torture one individual if I were certain that doing so would save millions of lives.

And if you wouldn't, would you do it in the New York City example?

Eliezer?

Your (a): I was not talking about a universal, but of a personal scalar ordering. Somewhere inside everybody's brain there must be a mechanism that decides which of the considered options wins the competition for "most moral option of the moment".

That's a common utilitarian assumption/axiom, but I'm not sure it's true. I think for most people, analysis stops at "this action is not wrong," and potential actions are not ranked much beyond that. Thus, most people would not say that one is behaving immorally by volunteering at a soup kitchen, even if volunteering for MSF in Africa might be a more effective means of increasing the utility of other people. Your scalar ordering might work a bit better for the related, but distinct, concept of "praiseworthiness" -- but even there, I think people's intuitions are much too rough-hewn to admit of a stable scalar ordering.

To conceptualize that for you in a slightly different sense: we probably have far fewer brain states than the set of all possible actions we could hypothetically take in any given situation (once those possible actions are described in enough detail). Thus, it is simply wrong to say that we have ordered preferences over all of those possible actions -- in fact, it would be impossible to have a unique brain state correspond to all possibilities. And remember -- we are dealing here not with all possible brain states, but with all possible states of the portion of the brain which involves itself in ethical judgments.
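
Put as a bare counting fact, with A and S as schematic sets of possible actions and available brain states:

```latex
% If the evaluating system has fewer states than there are options,
% the map from options to states cannot be one-to-one:
f : A \to S, \quad |S| < |A| \;\Longrightarrow\;
\exists\, a \ne a' \in A \;\text{ with }\; f(a) = f(a').
```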

Your (b): I view morality not as the set of rules postulated by creed X at time T but as the result of a genetically biased social learning process. Morality is expressed through its influence on every (healthy) individual's personal utility function.

Interesting, but I think also incomplete. To see why, ask yourself whether it makes sense for someone to ask you, following G.E. Moore, the following question:

"Yes, I understand that X is a action that I am disposed to prefer/regard favorably/etc for reasons having to do with evolutionary imperatives. Nevertheless, is it right/proper/moral to do X?"

In other words, there may well be evolutionary imperatives that drive us to engage in infidelity, murder, and even rape. Does that make those actions necessarily moral? If not, your account fails to capture a significant amount of the meaning of moral language.

(8) ? [Sorry, I don't understand this one.]

Some component of ethical language is probably intended to serve prescriptive functions in social interactions. Thus, in some cases, when we say that "X is immoral" or "X is wrong" to someone proposing to engage in X, part of what we mean is simply "Do not do X." I put that one last because I think it is less important as a component of our understanding of ethical language -- typically, I think people don't actually mean (8), but rather, (8) is logically implied as a prudential corollary of meanings 1-7.

To your voting scenario: I vote to torture the terrorist who proposes this choice to everyone. In other words, asking each one personally, "Would you rather be dust specked or have someone randomly tortured?" would be much like a terrorist demanding $1 per person (from the whole world), otherwise he will kill someone. In this case, of course, one would kill the terrorist.

So, the fact that an immoral person is forcing a choice upon you, means that there is no longer any moral significance to the choice? That makes no sense at all.

---
Unknown: Your example only has bite if you assume that moral preferences must be transitive across examples. I think you need to justify the claim that moral preferences must necessarily be immune to Dutch Books. I can see why it might be desirable for them not to be Dutch-Bookable; but not everything that is pleasant is true.
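
(For readers unfamiliar with the term, here is a minimal sketch of the money-pump argument behind "Dutch Book," with an invented preference cycle and fee; the question above is whether moral preferences owe anyone immunity to it:)

```python
# Illustrative money pump: an agent with the cyclic preference
# A < B < C < A will pay a small fee for every "upgrade" and can be
# cycled indefinitely. The cycle and the fee amount are invented.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # (worse, better) pairs

def will_trade(current, offered):
    return (current, offered) in prefers

holding, fee, fees_paid = "A", 0.01, 0.0
for offered in ["B", "C", "A"] * 3:              # offer the cycle three times
    if will_trade(holding, offered):
        holding, fees_paid = offered, fees_paid + fee

print(f"ends up holding {holding} again, having paid {fees_paid:.2f} in fees")
```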

Salutator: thanks for clarifying. I would tend to think that physical facts like neural firings can be quite easily multiplied. I think the problem has less to do with the multiplying, than with the assumption that the number of neural firings is constitutive of wrongness.

Frank, I think a utility function like that is a mathematical abstraction, and nothing more. People do not, in fact, have scalar-ordered ranked preferences across every possible hypothetical outcome. They are essentially indifferent between a wide range of choices. And anyway, I'm not sure that there is sufficient agreement among moral agents to permit the useful aggregation of their varied, and sometimes conflicting, notions of what is preferable into a single useful metric. And even if we could do that, I'm not sure that such a function would correspond with all (or even most) of the standard ways that we use moral language.

The statement that X is wrong can be taken to mean that X has bad consequences according to some metric. It can also mean (or be used to perform the functions of) the following variants:

(1) I do not approve of X.
(2) X makes me squeamish.
(3) Most people in [relevant group] would disapprove of X.
(4) X is not an exemplar of an action that corresponds with what I believe to be appropriate rules to live by.
(5) [Same as 4, but change reference point to social group]
(6) X is not an action that would be performed by a virtuous person operating in similar circumstances.
(7) I do not want X to occur.
(8) Do not do X.

That is probably not even an exhaustive list. Most uses of moral language probably blur the lines between a large number of these statements. Even if you want to limit the discussion to consequences, however, you have to pick a metric; if you are referring only to "bad" or "undesirable" consequences, you have to incorporate some other form of moral reasoning in order to articulate why your particular metric is constitutive or representative of what is wrong.

Hence, I think the problem with your argument is that (a) I'm not sure that there is enough agreement about morality to make a universal scalar ordering meaningful, and (b) a scalar ordering would be meaningless for many plausible variants of what morality means.

Unknown: 10 years and I would leave the lever alone, no doubt. 1 day is a very hard call; probably I would pull the lever. Most of us could get over 1 day of torture in a way that is fundamentally different from years of torture, after all.

Perhaps you can defend one punch per human being, but there must be some number of human beings for whom one punch each would outweigh torture.

As I said, I don't have that intuition. A punch is a fairly trivial harm. I doubt I would ever feel worse about a lot of people (even 5^^^^^^5) getting punched than about a single individual being tortured for a lifetime. Sorry -- I am just not very aggregative when it comes to these sorts of attitudes.

Is that "irrational?" Frankly, I'm not sure the word applies in the sense you mean. It is inconsistent with most accounts of strict utilitarianism. But I don't agree that abstract ethical theories have truth values in the sense you probably assume. It is consistent with my attitudes and preferences, and with my society's attitudes and preferences, I think. You assume that we should be able to add those attitudes up and do math with them, but I don't see why that should necessarily be the case.

I think the difference is that you are assuming (at least in a very background sort of way) that there are non-natural, mind-independent moral facts somehow engrafted onto the structure of reality. You feel like those entities should behave like physical entities, however, in being subject to the sort of mathematical relations we have developed based upon our interactions with real-world entities (even if those relations are now used abstractly). Even if you could make a strong argument for the existence of these sorts of moral rules, however, that is a far cry from saying that they should have an internal structure that behaves in a mathematically tidy way.

You haven't ever given reasons to think that ethical truths ought to obey mathematical rules; you've just assumed it. It's easy to miss this assumption unless you've spent some time mulling over moral ontology, but it definitely animates most of the arguments made in this thread.

In short: unless you've grappled seriously with what you mean when you talk of moral rules, you have very little basis for assuming that you should be able to do sums with them. Is one punch each for 6 billion people "worse than" 50 years of torture for one person? It certainly involves the firing of more pain neurons. But the fact that a number of pain neurons fire is just a fact about the world; it isn't the answer to a moral question, UNLESS you make a large number of assumptions. I agree that we can count neuron-firings, and do sums with them, and all other sorts of things. I just disagree that the firing of pain and pleasure neurons is the sum total of what we mean when we say "it was wrong of Fred to murder Sally."
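
To be concrete about the kind of sum being conceded here, a sketch in which every number is an invented placeholder:

```python
# The kind of sum the paragraph concedes is possible. Every number here
# is an invented placeholder; the point is only that the totals can be
# computed and compared as facts about the world.
FIRINGS_PER_PUNCH = 1e8                  # assumed
FIRINGS_PER_SECOND_OF_TORTURE = 1e7      # assumed
FIFTY_YEARS_IN_SECONDS = 50 * 365.25 * 24 * 3600

punch_total = 6e9 * FIRINGS_PER_PUNCH                      # one punch each, 6 billion people
torture_total = FIRINGS_PER_SECOND_OF_TORTURE * FIFTY_YEARS_IN_SECONDS

# A numerical fact, not by itself a moral verdict:
print(punch_total > torture_total)
```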

Frank, re: #2: One can also believe option 4: that pleasure and pain have some moral significance, but do not perfectly determine moral outcomes. That is not necessarily irrational, it is not amoral, and it is not utilitarian. Indeed, I would posit that it represents the primary strand of all moral thinking and intuitions, so it is strange that it wasn't on your list.

Eisegetes, is there no limit to the number of people you would subject to a punch in the face (very painful but temporary, with no risk of death) in order to avoid the torture of one person? What if you personally had to do (at least some of) the punching? I agree that I might not be willing to personally commit the torture despite the terrible (aggregate) harm my refusal would bring, but I'm not proud of that fact - it seems selfish to me. And extrapolating your position seems to justify pretty terrible acts. It seems to me that the punch is equivalent to some very small amount of torture.

1. My intuition on this point is very insensitive to scale. You could put a googol of persons in the galaxy, and faced with a choice between torturing one of them and causing them all to take one punch, I'd probably choose the punches.

2. Depends how much punching I had to do. I'd happily punch a hundred people, and let others do the rest of the work, to keep one stranger from getting tortured for the rest of his life. Make it one of my loved ones at risk of torture, and I would punch people until the day I die (presumably, I would be given icepacks from time to time for my hand).

3. Extrapolating is risky business with ethical intuitions. Change the facts and my intuition might change, too. I think that, in general, ethical intuitions are highly complex products of social forces that do not reduce well to abstract moral theories -- either of the deontological or utilitarian variety. And to the extent that moral statements are cognitive, I think they are referring to these sort of sociological facts -- meaning that any abstract theory will end up incorporating a lot of error along the way.

Would you really feel selfish in the dust speck scenario? I think, at the end of the day, I'd feel pretty good about the whole thing.
