Why do people seem to mean different things by "I want the pie" and "It is right that I should get the pie"? Why are the two propositions argued in different ways?

"I want the pie" is something that nobody else is affected by and thus nobody else has an interest in. "I should get the pie" is something that anybody else interested in the pie has an interest in. In this sense, the moral preferences are those that other moral beings have a stake in, those that affect other moral beings. I think some kind of a distinction like this explains the different ways we talk about and argue these two kinds of preferences. Additionally, evolution has most likely given us a pre-configured and optimized module for dealing with classes of problems involving other beings that were especially important in the environment of evolutionary adaptedness, which subjectively "feels" like an objective morality that is written into the fabric of the universe.

When and why do people change their terminal values? Do the concepts of "moral error" and "moral progress" have referents? Why would anyone want to change what they want?

I think of preferences and values as forming something like a complex system (in the sense of http://en.wikipedia.org/wiki/Complex_system) in which the various preferences are interrelated and in constant interaction. There may be something like a messy, tangled hierarchy: terminal preferences that are initially hardwired at a very low level, higher-level non-terminal preferences built on top of them, and something akin to back-propagation allowing the non-terminal preferences to affect the low-level terminal ones. Some preferences are so general that they are in constant interaction with a very large subset of all the others; these are experienced as things that are "core to our being", and we are much more likely to call them "values" rather than "preferences", although preferences and values are not different in kind.
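To make the feedback idea concrete, here is a toy sketch in Haskell (my own illustration, with arbitrary names and coefficients; it claims nothing about how brains actually work): a terminal layer and a derived layer repeatedly blend into each other, with the derived layer nudging the terminal layer only weakly.

import qualified Data.Map as Map

type Prefs = Map.Map String Double

-- Mean weight of a layer of preferences.
avg :: Prefs -> Double
avg m = sum (Map.elems m) / fromIntegral (Map.size m)

-- One interaction step: derived preferences re-blend from the terminal
-- layer, and terminal preferences drift slightly toward the derived layer
-- (the weak, back-propagation-like feedback described above).
step :: (Prefs, Prefs) -> (Prefs, Prefs)
step (terminal, derived) = (terminal', derived')
  where
    derived'  = Map.map (\w -> 0.7 * w + 0.3 * avg terminal) derived
    terminal' = Map.map (\w -> 0.9 * w + 0.1 * avg derived') terminal

main :: IO ()
main = print (iterate step (t0, d0) !! 10)
  where
    t0 = Map.fromList [("self-interest", 1.0), ("social harmony", 0.8)]
    d0 = Map.fromList [("fairness", 0.5), ("purpose", 0.6)]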

I think of moral error as actions that go against the terminal values (and the closely associated non-terminal values that feed back into them) and the most general values (those involving other moral beings) of a large class of human beings, whether directly (this particular instance of the error affects me) or indirectly (contemplating this type of moral error becoming widespread and affecting me in the future). I think of moral progress as changes to core values that result in more human beings having their fundamental values (like fairness, purpose, social harmony) flourish more frequently and more completely rather than be thwarted.

Why and how does anyone ever "do something they know they shouldn't", or "want something they know is wrong"?

Because the system of interdependent values is neither static nor consistent. We have some fundamental values that conflict with each other at certain times and in certain circumstances, like self-interest and social harmony. Depending on all the other values and their interdependencies, sometimes one will win out and sometimes the other. Guilt is a function of recognizing that something we have done has thwarted one of our own fundamental values (while satisfying the others that won out in this instance) and has also thwarted some fundamental values of other beings (not thwarting the fundamental values of others is itself one of our fundamental values). The messiness of the system (and the fact that it is not consistent) dooms any attempt by philosophers to come up with a moral system that is logical and always "says what we want it to say".

Does the notion of morality-as-preference really add up to moral normality?

I think it does add up to moral normality in the sense that our actions and interactions will generally be in accordance with what we think of as moral normality, even if the (ultimate) justifications and the bedrock that underlies the system as a whole are wildly different. Fundamental to what I think of as "moral normality" is the idea that something other than human beings supplies the moral criterion, whereas under the morality-as-preference view as I described it above, all we can say is that IF you desire to have your most fundamental values flourish (and you are a statistically average human in terms of your fundamental values including things like social harmony), THEN a system that provides for the simultaneous flourishing of other beings' fundamental values is the most effective way of accomplishing that. It is a fact that most people DO have these similar fundamental values, but there is no objective criterion from the side of reality itself that says all beings MUST have the desire to have their most fundamental values flourish (or that the fundamental values we do have are the "officially sanctioned" ones). It's just an empirical fact of the way that human beings are (and probably many other classes of beings that were subject to similar pressures).

I've voiced my annoyance with the commenting system in the past, in particular that it is non-threaded, which often makes it very difficult to figure out what someone is responding to when they don't include context (which they often don't), so I won't give details again.

On the topic of the 2-of-10 rule: if it's meant to prevent one person from dominating a thread, shouldn't the rule be "no more than 2 of the last 10 may be by the same person in the same thread" (so 3 posts by the same person would be fine as long as they are in 3 different threads)?

Optimistically, I would say that if the murderer perfectly knew all the relevant facts, including the victim's experience, ve wouldn't do it

The murderer may have all the facts, understand exactly what ve is doing and what the experience of the other will be, and just decide that ve doesn't care. Which fact is ve not aware of? Ve may understand all the pain and suffering it will cause, ve may understand that ve is wiping out a future for the other person and doing something that ve would prefer not to be on the receiving end of, may realize that it is behavior that if universalized would destroy society, may realize that it lessens the sum total of happiness or whatever else, may even know that "ve should feel compelled not to murder" etc. But at the end of the day, ve still might say, "regardless of all that, I don't care, and this is what I want to do and what I will do".

There is a conflict of desire (and of values) here, not a difference of fact. Having all the facts is one thing. Caring about the facts is something altogether different.

--

On the question of the bedrock of fairness, at the end of the day it seems to me that one of the two scenarios will occur:

(1) all parties happen to agree on what the bedrock is, or they are able to come to an agreement.

(2) all parties cannot agree on what the bedrock is. The matter is resolved by force with some party or coalition of parties saying "this is our bedrock, and we will punish you if you do not obey it".

And the universe itself doesn't care one way or the other.

But if I understand you, you are saying that human morality is human and does not apply to all sentient beings. However, as long as all we are talking about and all we really deal with is humans, then there is no difference in practice between a morality that is specific to humans and a universal morality applicable to all sentient beings, and so the argument about universality seems academic, of no import at least until First Contact is achieved.

What I am really saying is that the notion of "morality" is so hopelessly contaminated with notions of objective standards and criteria above and beyond humanity that we would do well to find other ways to think and talk about it. But to answer you directly about the two ways of thinking about morality, I think there is a key difference between (1) "our particular 'morality' is purely a function of our evolutionary history (as it expresses itself in culture)" and (2) "there is a universal morality applicable to all sentients (and we don't know of other similarly intelligent sentients yet)".

With 1, there is no justification for a particular moral system: "this is just the way we are" is as good as it gets (no matter how you try to build on it, that is the bedrock). With 2, there is something outside of humanity that justifies some moralities and forbids others; there is something like an objective criterion that we can apply, rather than the criterion being relative to human beings and the (not inevitable) events that have brought us to this point. In 1 the rules are in some sense arbitrary; in 2 they are not. I think that is a huge difference. In the course of making decisions in day-to-day existence -- should I steal this book? should I cheat on my partner? -- I agree with you that the difference is academic.

In particular, a lot of moral non-realists are wrong.

Yes, they're wrong, but I think the important point is "what are they wrong about"? Under 1, the claim that "it is merely a matter of [arbitrary] personal opinion" is wrong as an empirical matter because personal opinions in "moral" matters are not arbitrary: they are derived from hardwired tendencies to interpret certain things in a moralistic manner. Under 2, it is not so much an empirical matter of studying human beings and experimenting and determining what the basis for personal opinions about "moral" matters is; it is a matter of determining whether "it's merely a matter of personal opinion" is what the universal moral law says (and it does not, of course).

I concede that I was sloppy in speaking of "traditional notions", although I did not say that there were no philosophical traditions such that...; I was talking about the traditions that have been most influential over historical time in western culture (given my meager knowledge of ethics, drawn from a university course and a little other reading). I had in mind thousands of years of Judeo-Christian morality rooted in what the Deity Said or Did; deontological understandings of morality such as Kant's (in which species-independent reason compels us to recognize that ...); and utilitarianism (in the sense that the justification for believing that the moral worth of an action is strictly determined by its outcome is not based on our evolutionary quirks: it is supposed to be a rationally compelling system on its own, though perhaps a modern utilitarian might appeal to our evolutionary history as justification).

On the topic of the natural law tradition, is it your understanding that it is compatible with the idea that moral judgments are just a subset of preferences that we are hardwired to have tendencies regarding, no different in kind from any other preference (like the preference for sweet things)? That is the point I'm trying to make, and it's certainly not something I heard presented in my ethics class at university. The fact that we have a system that is optimized and pre-configured for making judgments about certain important matters is a far cry from saying that there is an objective moral law. It also doesn't support the notion that there are moral facts that are different in kind from any other type of fact.

From skimming the natural law article you mentioned, it seems that Aquinas is central to understanding the tradition. The article quotes Aquinas as saying 'the natural law is the way that the human being "participates" in the eternal law' [of God]. It seems to me that, again, we are talking about a system that posits an objective criterion for morality outside of humanity, and I think saying that "the way human beings happened to evolve to think about certain actions constitutes an objective natural law for human morality" is a rather tenuous position. Do you hold that position?

Laura ABJ: To expand on the text you quoted, I think that killing babies is ugly, and therefore would not do it without sufficient reason, which I don't think the scenario provides. The ugliness of killing babies doesn't need a moral explanation, and the moral explanation just builds on (and adds nothing but a more convenient way of speaking about) the foundation of aversion, no matter how it's dressed up and made to look like something else.

The idea is not compelling to me and so would not haunt me forever, because like I said, I'm not yet convinced that some X number of refreshing breezes on a hot day is strictly equivalent in some non-arbitrary sense to murdering a baby, and X+1 breezes is "better" in some non-arbitrary sense.

However, the idea of being haunted forever would bother me now if I thought it likely that my future self would think I made the wrong decision, but that implies that I have more knowledge and perspective now than I actually have (in order to know enough to think it likely that I'll be haunted). All I can do is make what I think is the best decision given what I know and understand now, so I don't see that I could think it likely that I would be haunted by what I did. Of course, I could make a terrible mistake, not having understood something I will later think I should have understood, and I might regret that forever, but I wouldn't realize that at the time and I wouldn't think it likely.

Hal: as an amoralist, I wouldn't do it. If there is not enough time to explain to me why it is necessary and to convince me that it is necessary, no deal. Even if I thought it probably would substantially increase the future happiness of humanity, I still wouldn't do it without a complete explanation. Not because I think there is a moral fabric to the universe that says killing babies is wrong, but because I am hardwired to have an extremely strong aversion to things like killing babies. Even if I actually were convinced that it would increase happiness, I still might not do it, because I'm still undecided on the idea that some number of people experiencing a refreshing breeze on a hot day is worth more than some person being tortured -- ditto for killing babies.

It seems to me that if you want to find people who are willing to torture and kill babies because "it will increase happiness", you need to find some extremely moral utilitarians. I think you'd have much better luck in that community than among amoralists ;-).

Traditional notions of morality are confused, and observation of the way people act does show that they are poor explanations, so I think we are in perfect agreement there. (I do mean "notions" among thinkers, not among average people who haven't given much thought to such things.) Your second paragraph isn't in conflict with my statement that morality is traditionally understood to be in some sense objectively true and objectively binding on us, and that it would be just as true and just as binding if we had evolved very differently.

It's a different topic altogether to consider to whom we have moral obligations (or who should be treated in ways constrained by our morality). And it's another topic again to consider what types of beings are able to participate in (or are obligated to participate in) the moral system. I wasn't touching on either of these last two topics.

All I'm saying is that I believe that what morality actually is for each of us in our daily lives is a result of what worked for our ancestors, and that is all it is. I.e., there is no objective morality and there is no ONE TRUE WAY. You can never say "reason demands that you must do ..." or "you are morally obligated by reality itself to ..." without first making some assumptions that are themselves not justifiable (the axioms that we have as a result of evolution). Anything you build on that foundational bedrock is contingent and not necessary.

Constant: I basically agree with the gist of your rephrasing it in terms of being relative to the species rather than independent of the species, but I would emphasize that what you end up with is not a "moral system" in anything like the traditional sense, since it is fundamental to traditional notions of morality that THE ONE TRUE WAY does not depend on human beings and the quirks of our evolutionary history and that it is privileged from the point of view of reality (because its edicts were written in stone by God or because the one true species-independent reason proves it must be so).

btw, you mean partial application rather than currying.

Currying is converting a function like the following, which takes a single n-tuple arg (n > 1) ["::" means "has type"]

-- f takes a 2-tuple consisting of a value of type 'x' and a value of type 'y' and returns a value of type 'z'.
f :: (x, y) -> z

into a function like the following, which effectively takes the arguments separately (by returning a function that takes a single argument)

-- f takes a single argument of type 'x', and returns a function that accepts a single argument of type 'y' and returns a value of type 'z'.
f :: x -> y -> z

What you meant is going from

f :: x -> y -> z

to

g :: y -> z
g = f foo

where the 'foo' argument of type 'x' is "hardwired" into function g.
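To make the distinction concrete, here is a minimal, self-contained Haskell example (the function names are mine, chosen for illustration). The Prelude's curry converts the tupled form to the curried form, while partial application just supplies the first argument.

-- Tupled form: takes a single 2-tuple argument.
addPair :: (Int, Int) -> Int
addPair (x, y) = x + y

-- Currying: (x, y) -> z becomes x -> y -> z. The Prelude's 'curry'
-- performs exactly this conversion.
addCurried :: Int -> Int -> Int
addCurried = curry addPair

-- Partial application: hardwire the first argument of a curried function.
addFive :: Int -> Int
addFive = addCurried 5

main :: IO ()
main = print (addPair (2, 3), addCurried 2 3, addFive 10) -- prints (5,5,15)

Note that addFive is exactly the g = f foo pattern above, with addCurried playing the role of f and 5 playing the role of foo.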

I agree with mtraven's last post that morality is an innate functionality of the human brain that can't be "disproved", and yet I have said again and again that I don't believe in morality, so let me explain.

Morality is just a certain innate functionality in our brains as it expresses itself based on our life experiences. This is entirely consistent with the assertion that what most people mean by morality -- an objective standard of conduct that is written into the fabric of reality itself -- does not exist: there is no such thing!

A lot of confusion in this thread is due to people taking "there is no morality" to mean different things. Some take it to mean there is nothing in the brain that corresponds to morality (and nothing like a moral system that almost all of us intuitively share), which I believe is obviously false: there is such a system. Others take it to mean there is no objective morality that exists independently of thinking beings with morality systems built into their brains, which I believe is obviously true: there is no objective morality. And of course, others have taken "there is no morality" to mean other things still, perhaps following on some of Eliezer's rather bizarre statements (which I hope he will clarify) in the post that conflated morality with motivation and implied that morality is what gets us out of bed in the morning or causes us to prefer tasty food to boring food.

Morality exists as something hardwired into us due to our evolutionary history, and there are sound reasons why we are better off having it. But that doesn't imply that there is some morality that is sanctioned from the side of reality itself or that our particular moral beliefs are in any way privileged.

As a matter of practice, we all privilege the system that is hardwired into us, but that is just a brute fact about how human beings happen to be. It could easily have turned out radically different. We have no objective basis for ranking and distinguishing between alternate possible moralities. Of course, we have strong feelings nevertheless.

mtraven: many of the posters in this thread -- myself included -- have said that they don't believe in morality (meaning morality and not "values" or "motivation"), and yet I very highly doubt that many of us are clinically psychopaths.

Not believing in morality does not mean doing what those who believe in morality consider immoral. Psychopathy is not "not believing in morality": it entails certain kinds of behaviors, which naive analyses attribute to a "lack of morality", but which I would argue result from aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality.
