What is Metaethics?

by lukeprog · 7 min read · 25th Apr 2011 · 563 comments


Ethics & Morality

When I say I think I can solve (some of) metaethics, what exactly is it that I think I can solve?

First, we must distinguish the study of ethics or morality from the anthropology of moral belief and practice. The first one asks: "What is right?" The second one asks: "What do people think is right?" Of course, one can inform the other, but it's important not to confuse the two. One can correctly say that different cultures have different 'morals' in that they have different moral beliefs and practices, but this may not answer the question of whether or not they are behaving in morally right ways.

My focus is metaethics, so I'll discuss the anthropology of moral belief and practice only when it is relevant for making points about metaethics.

So what is metaethics? Many people break the field of ethics into three sub-fields: applied ethics, normative ethics, and metaethics.

Applied ethics: Is abortion morally right? How should we treat animals? What political and economic systems are most moral? What are the moral responsibilities of businesses? How should doctors respond to complex and uncertain situations? When is lying acceptable? What kinds of sex are right or wrong? Is euthanasia acceptable?

Normative ethics: What moral principles should we use in order to decide how to treat animals, when lying is acceptable, and so on? Is morality decided by what produces the greatest good for the greatest number? Is it decided by a list of unbreakable rules? Is it decided by a list of character virtues? Is it decided by a hypothetical social contract drafted under ideal circumstances?

Metaethics: What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?

Others prefer to combine applied ethics and normative ethics so that the breakdown becomes: normative ethics vs. metaethics, or 'first order' moral questions (normative ethics) vs. 'second order' questions (metaethics).

Mainstream views in metaethics

To illustrate how people can give different answers to the questions of metaethics, let me summarize some of the mainstream philosophical positions in metaethics.

Cognitivism vs. non-cognitivism: This is a debate about what is happening when people engage in moral discourse. When someone says "Murder is wrong," are they trying to state a fact about murder, that it has the property of being wrong? Or are they merely expressing a negative emotion toward murder, as if they had gasped aloud and said "Murder!" with a disapproving tone?

Another way of saying this is that cognitivists think moral discourse is 'truth-apt' - that is, moral statements are the kinds of things that can be true or false. Some cognitivists think that all moral claims are in fact false (error theory), just as the atheist thinks that claims about gods are usually meant to be fact-stating but in fact are all false because gods don't exist.1 Other cognitivists think that at least some moral claims are true. Naturalism holds that moral judgments are true or false because of natural facts,2 while non-naturalism holds that moral judgments are true or false because of non-natural facts.3 Weak cognitivism holds that moral judgments can be true or false not because they agree with certain (natural or non-natural) opinion-independent facts, but because our considered opinions determine the moral facts.4

Non-cognitivists, in contrast, tend to think that moral discourse is not truth-apt. Ayer (1936) held that moral sentences express our emotions ("Murder? Yuck!") about certain actions. This is called emotivism or expressivism. Another theory is prescriptivism, the idea that moral sentences express commands ("Don't murder!").5 Or perhaps moral judgments express our acceptance of certain norms (norm expressivism).6 Or maybe our moral judgments express our dispositions to form sentiments of approval or disapproval (quasi-realism).7

Moral psychology: One major debate in moral psychology concerns whether moral judgments require some (defeasible) motivation to adhere to the moral judgment (motivational internalism), or whether one can make a moral judgment without being motivated to adhere to it (motivational externalism). Another debate concerns whether motivation depends on both beliefs and desires (the Humean theory of motivation), or whether some beliefs are by themselves intrinsically motivating (non-Humean theories of motivation).

More recently, researchers have run a number of experiments to test the mechanisms by which people make moral judgments. I will list a few of the most surprising and famous results:

  • Whether we judge an action as 'intentional' or not often depends on the judged goodness or badness of the action, not the internal states of the agent.8
  • Our moral judgments are significantly affected by whether we are in the presence of freshly baked bread or a concentration of fart spray too faint to detect consciously.9
  • Our moral judgments are greatly affected by applying transcranial magnetic stimulation to the brain region that processes theory of mind.10
  • People tend to insist that certain things are right or wrong even when a hypothetical situation is constructed such that they admit they can give no reason for their judgment.11
  • We use our recently-evolved neocortex to make utilitarian judgments, and deontological judgments tend to come from evolutionarily older parts of our brains.12
  • People give harsher moral judgments when they feel clean.13

Moral epistemology: Different views on cognitivism vs. non-cognitivism and moral psychology suggest different views of moral epistemology. How can we know moral facts? Non-cognitivists and error theorists think there are no moral facts to be known. Those who believe moral facts answer to non-natural facts tend to think that moral knowledge comes from intuition, which somehow has access to non-natural facts. Moral naturalists tend to think that moral facts can be accessed simply by doing science.

Tying it all together

I will not be trying very hard to fit my pluralistic moral reductionism into these categories. I'll be arguing about the substance, not the symbols. But it still helps to have a concept of the subject matter by way of such examples.

Maybe mainstream metaethics will make more sense in flowchart form. Here's a flowchart I adapted from Miller (2003). If you don't understand the bottom-most branching, read chapter 9 of Miller's book or else just don't worry about it.

Next post: Conceptual Analysis and Moral Theory

Previous post: Heading Toward: No-Nonsense Metaethics


1 This is not quite correct. The error theorist can hold that a statement like "Murder is not wrong" is true, for he thinks that murder is neither wrong nor right. Rather, the error theorist claims that all moral statements which presuppose the existence of a moral property are false, because no such moral properties exist. See Joyce (2001). Mackie (1977) is the classic statement of error theory.

2 Sturgeon (1988); Boyd (1988); Brink (1989); Brandt (1979); Railton (1986); Jackson (1998). I have written introductions to the three major versions of moral naturalism: Cornell realism, Railton's moral reductionism (1, 2), and Jackson's moral functionalism.

3 Moore (1903); McDowell (1998); Wiggins (1987).

4 For an overview of such theories, see Miller (2003), chapter 7.

5 See Carnap (1937), p. 23-25; Hare (1952).

6 Gibbard (1990).

7 Blackburn (1984).

8 The Knobe Effect. See Knobe (2003).

9 Schnall et al. (2008); Baron & Thomley (1994).

10 Young et al. (2010). I interviewed the author of this study here.

11 This is moral dumbfounding. See Haidt (2001).

12 Greene (2007).

13 Zhong et al. (2010).


Baron & Thomley (1994). A Whiff of Reality: Positive Affect as a Potential Mediator of the Effects of Pleasant Fragrances on Task Performance and Helping. Environment and Behavior, 26(6): 766-784.

Blackburn (1984). Spreading the Word. Oxford University Press.

Boyd (1988). How to be a Moral Realist. In Sayre-McCord (ed.), Essays on Moral Realism (pp. 181-228). Cornell University Press.

Brandt (1979). A Theory of the Good and the Right. Oxford University Press.

Brink (1989). Moral Realism and the Foundations of Ethics. Cambridge University Press.

Carnap (1937). Philosophy and Logical Syntax. Kegan Paul, Trench, Trubner & Co.

Gibbard (1990). Wise Choices, Apt Feelings. Clarendon Press.

Greene (2007). The secret joke of Kant's soul. In Sinnott-Armstrong (ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development. MIT Press.

Haidt (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108: 814-834.

Hare (1952). The Language of Morals. Oxford University Press.

Jackson (1998). From Metaphysics to Ethics. Oxford University Press.

Joyce (2001). The Myth of Morality. Cambridge University Press.

Knobe (2003). Intentional Action and Side Effects in Ordinary Language. Analysis, 63: 190-193.

Mackie (1977). Ethics: Inventing Right and Wrong. Penguin.

McDowell (1998). Mind, Value, and Reality. Harvard University Press.

Miller (2003). An Introduction to Contemporary Metaethics. Polity.

Moore (1903). Principia Ethica. Cambridge University Press.

Railton (1986). Moral realism. Philosophical Review, 95: 163-207.

Schnall, Haidt, Clore, & Jordan (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34(8): 1096-1109.

Sturgeon (1988). Moral explanations. In Sayre-McCord (ed.), Essays on Moral Realism (pp. 229-255). Cornell University Press.

Wiggins (1987). A sensible subjectivism. In Needs, Values, Truth (pp. 185-214). Blackwell.

Young, Camprodon, Hauser, Pascual-Leone, & Saxe (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. Proceedings of the National Academy of Sciences, 107: 6753-6758.

Zhong, Strejcek, & Sivanathan (2010). A clean self can render harsh moral judgment. Journal of Experimental Social Psychology, 46(5): 859-862.


Comments

Hm. What is this post for? It doesn't explain the ideas it refers to in any detail sufficient to feel what they mean, and from what it does tell, the ideas seem pretty crazy/simplistic, paying attention to strange categories, like that philpapers survey. (The part before "Mainstream views in metaethics" section does seem to address the topic of the post, but the rest is pretty bizarre. If that was the point, it should've been made, I think, but it probably wasn't.)

My posts are now going to feel naked to me whenever they lack a comment from you complaining that the post isn't book-length, covering every detail of a given topic. :)

Like I said, I don't have much interest in fitting my views into the established categories, but I wanted to give people an overview of how metaethics is usually done so they at least have some illustrations of what the subject matter is.

And if you find mainstream metaethics bizarre, well... welcome to a diseased discipline.

Amanojack · 10y · 5 points

Since you understand how diseased the discipline of ethics is, I'm hoping the next post in the series will focus heavily on clearing up the semantic issues that have made it so diseased. I don't think any real sense can be made of metaethics until the very nature of what someone is doing when they utter an ethical statement is covered.

We use language to do a lot of things: express emotions, make other people do stuff, signal, intimidate, get our thoughts into other people's minds, parrot what someone else said - and often more than one of these at a time. Since we presumably are trying to get at the speaker's intention, we really can't know the "meaning" without asking the speaker, yet various metaethical theorists call themselves emotivists, error theorists, prescriptivists, and so on. It seems to me the choice of a meta-ethical theory boils down to a choice of what the theorist wants to presume people are trying to do when they use the word ought. Surely no one can deny that sometimes some people do indeed intend "You ought not steal" as a command, or as a way of expressing disgust at the notion of theft, or simply as a means of intimidation.

My meta-meta-ethical theory is that it all depends on what the person uttering the statement intends to accomplish by saying it. A debate between these meta-ethical theories sounds very likely to revolve around whose definition of ought is "correct" [http://lesswrong.com/lw/np/disputing_definitions/]. In short, I think the main reason ethics is so diseased as a discipline is that the theorists are trying to argue whose definition is better, rather than acknowledging that it is pretty hard for anyone to know what each person intends by their moralistic language.
Clippy · 10y · 5 points

My definition of "ought" is correct.
lukeprog · 10y · 3 points

Yes, I agree with all this.
[anonymous] · 10y · 0 points

Maybe the preoccupation with "statements" is part of the disease. After all, there would probably be ethics even without language or with a very different language. And after all, when investigating x, you should investigate x, not statements about x.
CuSithBell · 10y · 3 points

But first you need to identify x. Which is a question about the meaning of a word.
Amanojack · 10y · 0 points

Though Bongo is surely right there would be moral sentiments even without language, now we are dealing with something identified: specific emotions like empathy, sense of justice, disgust, indignation, pity. Yeah, those would exist without language. And yes, language has made things much more complicated, and the preoccupation with analyzing sentences makes it even worse. If people can realize all that without looking at the very nature of communication, that would be great, but in my experience most people feel hesitant about scrapping so many centuries of philosophy and need to see how the language makes such a mess of things before they can truly feel comfortable with it. If Bongo is ready to scrap language analysis now and drop all the silly -isms, I'm preaching to the choir.
Amanojack · 10y · 2 points

Ethics is unique, at least to me, in that I still have no idea what the heck people are even referring to most of the time when they use moralistic language. I can't investigate X until I know what X is even supposed to be about. Most of the time there is a fundamental failure to communicate, even regarding the definition of the field itself. And whenever there isn't such a failure, the problem disappears and all discussants agree as if nothing had been in dispute.
lukeprog · 10y · 2 points

I should add that nobody who has read and understood the sequences should be surprised by what I'll describe as 'pluralistic moral reductionism.' I'm writing this sequence because I think this basic view on standard metaethical questions hasn't yet been articulated clearly enough for my satisfaction. And then, I want to make a bit of progress on the hard questions of 'metaethics' (it depends where you draw the boundary around 'metaethics') - but only after I've swept away the easy questions of metaethics.

This post covered at least as much material as my old college moral philosophy classes did in a month. It also left me feeling more confident that I understood all the terms involved than that month of classes did. Thank you for being able to explain difficult things clearly and concisely.

Scott Alexander · 10y · 9 points

I request an explanation of why my comment telling Luke he did a good job is more highly upvoted than the post Luke did a good job on. If you agree with me that Luke did a good job strongly enough to upvote the statement, why not upvote Luke?

Couldn't that just be due to a higher number of total votes (both up and down) for the OP? I would assume fewer people read each comment, and downvoters may have decided to only weigh in on the OP. A hypothetical controversial post could have a karma of 8, with 10 downvotes negating 10 upvotes, and a supportive comment could have 9 upvotes due to half of the upvotes of the first post giving it their vote. The comment has higher karma, but lower volatility, so to speak.

wedrifid · 10y · 0 points

Good explanation.
prase · 10y · 4 points

I have upvoted your comment because it gives feedback to the author, which should be encouraged (negative feedback leads to improvement, but surely we don't want to read only disapproval, do we?). I don't always agree with the content of the comments I upvote.
TheOtherDave · 10y · 3 points

Oddly, the comment is now less upvoted than the post, but your request for an explanation is being downvoted. I'm kinda curious as to the underlying thought processes now, myself.
NancyLebovitz · 10y · 6 points

This is making me wonder if karma can cause people to model LW as having a group mind, and if people generally think of social groups which are too large to model each individual as being group minds.
TheOtherDave · 10y · 1 point

I'm not sure if it's related to what you're wondering, but if it helps clarify anything I'll add that I don't exactly know what a group mind is, or what exactly it means to model a group as one, but that when I ask questions of a forum (or, as in this case, mention to a forum that I'm curious about something) I expect that a large number of individuals will read the question, decide individually whether they have a useful answer and whether they feel like providing it, and act accordingly. In this case, more specifically, I figured that the people whose voting patterns matched the group-level behavior -- e.g., the ones who upvoted Yvain but not Luke at first, or who downvoted Yvain's request for explanation -- might address my curiosity with personal anecdotes... and potentially that various other people would weigh in with theories.
NancyLebovitz · 10y · 1 point

What I was thinking of with the "group mind" is that it can be tempting, if one is flamed by a few people in a group, to feel as though the whole group is on the attack.
wedrifid · 10y · 1 point

For my part I model karma interactions and group thinking processes here via subgroups (which are not necessarily mutually exclusive). There are also a few who get their own model - which is either a compliment, insult or in some cases both.
[anonymous] · 10y · 0 points

Tolerate tolerance? For example, I downvoted the post, but not your comment.
RobinZ · 10y · 0 points

I expect to upvote this after I can see how it fits into the sequence better.
Emile · 10y · 2 points

WrongBot said something similar, but I found it a bit hard to follow, especially since I'm unfamiliar with some of the terminology like "natural facts", and also because keeping track of a lot of newly-introduced terminology describing the various positions is not easy.

"Oi, I just saw a smeert!"

"What's a smeert?"

"So you believe in smeerts then?"

I still have a hard time seeing how any of this is going to go somewhere useful.

lukeprog · 10y · 4 points

Luckily, for the moment, some people are already finding it useful.
thomblake · 10y · 2 points

Here is my understanding: Ethics is the study of what one has most reason to do or want. On that definition, it is directly relevant to instrumental rationality. And if we want to discover the facts about ethics, we should determine what sort of things those facts would be, so that we might recognize them when we've found them - this, on one view, is the role of metaethics. This post is an intro to current thought on metaethics, which should at least make more clear the scope of the problem to any who would like to pursue it.

What does "natural fact" mean?

lukeprog · 10y · 4 points

It means different things to different people. Moore (1903) wrote: Alternatively, Baldwin (1993) suggests: Warnock's (1960) interpretation of Moore was: Miller (2003) concludes:
Will_Newsome · 10y · 8 points

If you plan on using the word 'naturalistic' to describe your meta-ethics at some point, I hope you give a better definition than these philosophers have given. "Naturalistic" often seems to be a way of saying "there is no magic involved!", but it's not like metaphysical phenomena are necessarily magical. Using logical properties of symmetric decision algorithms to solve timeless coordination problems, for instance, doesn't fit into Miller's definition of natural properties, but it's probably somewhat tied up into some facets of meta-ethics (or morality at the very least, but that line is easily blurred and probably basically shouldn't exist in a correct technical solution).

I'm really just trying to keep a relevant distinction between "naturalistic" and "metaphysical", which are both interesting and valid, instead of having two categories "naturalistic" and "magical" where you get points for pointing out how non-magical and naturalistic a proposed solution is. This stems from a general fear of causal / timeful / reductionist explanations that could miss important points about teleology / timelessness / pattern attractors / emergence, e.g. the distinction between timeless and causal decision theory or between timeless and causal validity semantics (if there is one), which have great bearing on reflective/temporal consistency and seem very central to meta-ethics.

I don't think you're heading there with your solution to meta-ethics, but as an aside I'm still confused about what it is you're trying to solve if you're not addressing any of these questions that seem very central. Your past selves' utility functions are just evidence. Meta-ethics should tell you how to treat that evidence, just as it should tell you how future selves should treat your present utility function as evidence.

Figuring out what my past selves' or others' utility functions are in some sense is of course a necessary step, but even after you have that data you still need to figure out what the
torekp · 10y · 3 points

A better answer than any that Luke cited would start with the network of causal laws paradigmatically considered "natural," such as those of physics and chemistry, then work toward properties, relations, objects and facts. There might (as a matter of logical possibility) have been other clusters of causal laws, such as supernatural or non-natural laws, but these would be widely separated from the natural laws with little interaction (pineal gland only?) or very non-harmonious interaction (gods defying physics). We had a discussion about this earlier. I will try to dig up a link [http://lesswrong.com/lw/4xt/the_supernatural_category/].

Based upon my experiences, physical truths appear to be concrete and independent of beliefs and opinions. I see no cases where "right" has a meaning outside of an agent's preferences. I don't know how one would go about discovering the "rightness" of something, as one would a physical truth.

It is a poor analogy.

Edit: Seriously? I'm not trying to be obstinate here. Would people prefer I go away?

New edit: Thanks wedrifid. I was very confused.

wedrifid · 10y · 2 points

You're not being obstinate. You're more or less right, at least in the parent. There are a few nuances left to pick up but you are not likely to find them by arguing with Eugine.
[anonymous] · 10y · 8 points

You're promoting illusion of transparency. Just explain what you mean, already.

I'm sorry. It's clear that you're motivated to "win" an argument, not get at reality.

For the record, words do not have intrinsic meanings. If you are willing to use simpler words that we are likely to agree on to explain what you mean by "moral", "right" and "good" then I will be happy to read it. Otherwise, I just cannot take you seriously enough to continue this.

EDIT: If you really would like to discuss this I suggest we move to the LessWrong IRC channel instead of making a long person to person thread here.

I am increasingly getting the perception that morality/ethics is useless hogwash. I already believed that to be the case before Less Wrong and I am not sure why I ever bothered to take it seriously again. I guess I was impressed that people who are concerned with 'refining the art of rationality' talk about it and concluded that after all there must be something to it. But I have yet to come across a single argument that would warrant the use of any terminology related to moral philosophy.

The article Say Not "Complexity" should have been about mo...

[anonymous] · 10y · 3 points

I would argue that the problem is not with morality, but with how it is being approached here. This is a starting point for understanding morality. is utilitarianism, which seems to be the house approach to morality - the very approach which you find unpersuasive.

Not quite. It's possible to wish a person dead, while being reluctant to kill him yourself, and even while considering anyone who does kill him a murderer who needs to be caught and brought to justice. Morality derives from preferences in a way, but it is indirect.

An analogous phenomenon is the market price. The market price of a good derives from the preferences of everyone participating in the market, but the derivation is indirect. The price of a good isn't merely what you would prefer to pay for it, because that's always zero. Nor is it merely what the seller would prefer to be paid for it, because there is no upper limit on what he would charge if he could. Rather, the market price is set by supply and demand, and supply and demand depend in large part on preferences. So price derives from preferences, but the derivation is indirect, and it is mediated by interaction between people.

Morality, I think, is similar. It derives from preferences indirectly, by way of interaction. This leaves open the possibility that morality is as variable as prices, but I think that because of the preferences that it rests on, it is much, much less variable, though not invariable. Natural selection holds these preferences largely in check. For example, if some genetic line of people were to develop a preference for being slaughtered, they would quickly die out.
XiXiDu · 10y · −1 points

This just shows that human wants are inconsistent, that humans are holding conflicting ideas simultaneously, why invoke 'morality' in this context? People or road blockades, what's the difference? I just don't see why one would talk about morality here. The preferences of other people are simply more complex road blockades on the way towards your goal. Some of those blockades are artistically appealing so you try to be careful in removing them...why invoke 'morality' in this context?
[anonymous] · 10y · 2 points

But these two desires are not inconsistent, because for someone to die by, say, natural causes, is not the same thing as for him to die by your own hand.

You could say the same thing about socks. E.g., "I just don't see why one would talk about socks here. Socks are simply complex arrangements of molecules. Why invoke "sock" in this context?" What are you going to do instead of invoking "sock"? Are you going to describe the socks molecule by molecule as a way of avoiding using the word "sock"? That would be cumbersome, to say the least. Nor would it be any more true. Socks are real. They aren't imaginary. That they're made out of molecules does not stop them from being real.

All this can be said about morality. What are you going to do instead of invoking "morality"? Are you going to describe people's reactions as a way of avoiding using the word "morality"? That would be cumbersome, to say the least. Nor would it be any more true. Morality is real. It isn't imaginary. That it's made out of people's reactions doesn't stop it from being real. Denying the reality of morality simply because it is made out of people's reactions, is like denying the reality of socks simply because they're made out of molecules.
XiXiDu · 10y · 3 points

Consider the trolley problem. Naively you kill the fat guy if you care about other people and also if you only care about yourself, because you want others to kill the fat guy as well because you are more likely to be one of the many people tied to the rails than the fat guy. Of course there is the question about how killing one fat guy to save more people and similar decisions could erode society. Yet it is solely a question about wants, about the preferences of the agents involved. I don't see how it could be helpful to add terminology derived from moral philosophy here or elsewhere.
XiXiDu · 10y · 1 point

I am going to use moral terminology in the appropriate cultural context. But why would one use it on a site that supposedly tries to dissolve problems using reductionism as a general heuristic? I am also using the term "free will [http://spacecollective.org/XiXiDu/5759/Free-will-as-nonlinear-transformational-effectiveness]" because people model their decisions according to that vague and ultimately futile concept. But if possible (if I am not too lazy) I avoid using any of those bogus memes.

Of course, it is real. Cthulhu is also real, it is a fictional cosmic entity. But if someone acts according to their fear of Cthulhu I am not going to resolve their fear by talking about it in terms of the Lovecraft Mythos but in terms of mental illness.

How so? Can you give an example where the use of terminology derived from moral philosophy is useful instead of obfuscating?
XiXiDu · 10y · −1 points

Consider the Is–ought problem. The basis for every ought statement is what I believe to be correct with respect to my goals. If you want to reach a certain goal and I want to help you and believe to know a better solution than you do then I tell you what you ought to do because 1.) you want to reach a goal 2.) I want you to reach your goal 3.) my brain does exhibit a certain epistemic state making me believe to be able to satisfy #1 & #2.
[anonymous] · 10y · −2 points

It is no more a philosophical puzzle that needs dissolving than prices are a philosophical puzzle that need dissolving.

I think that the concept of "free will" may indeed be more wholly a philosopher's invention, just as the concept of "qualia" is in my view wholly a philosopher's invention. But the everyday concepts from which it derives are not a philosopher's invention. I think that the everyday concept that philosophers turned into the concept of "free will" is the concept of the uncoerced and intentional act - a concept employed when we decide what to do about people who've annoyed us. We ask: did he mean to do it? Was he forced to do it? We have good reason for asking these questions. Philosophers invent bogus memes that we should try to free ourselves of. I think that "qualia" are one of those memes. But philosophers didn't invent morality. They simply talked a lot of nonsense about it. Morality is real in the sense that prices are real and in a sense that Cthulhu is not real.

Some people talk about money in the way that you want to talk about morality, so that's a nice analogy to our discussion and I'll spend a couple of paragraphs on it. They say that the value of money is merely a collective delusion - that I value a dollar only because other people value a dollar, and that they value a dollar only because, ultimately, I value a dollar. So they say that it's all a great big collective delusion. They say that if people woke up one day and realized that a dollar was just a piece of paper, then we would stop using dollars. But while there is a grain of truth to that (especially about fiat money), there's also much that's misleading in it. Money is a medium of exchange that solves real problems. The value of money may be in a sense circular (i.e., it's valued by people because it's valued by people), but actually a lot of things are circular. A lot of natural adaptations are circular, for example symbiosis.

Flowers are the way they are because bees are th
1Morendil10yOtherwise known as The True Knowledge [http://cscs.umich.edu/~crshalizi/reviews/cassini-division/true-knowledge.html].
0hairyfigment10yYou just did use it. Now, in this case we could probably rephrase your statement without too much trouble. But it does not seem at all obvious that doing this for all of our beliefs has positive expected value if we just want to maximize epistemic or instrumental rationality.
0endoself10yI agree with most of this. The only reason for using the word morality is when talking to someone who does not realize that "Whatever you want." is the only answer that really can be given to the question of "What should I do next?". (Does that sentence make sense?) The main thing I have to add to this is what Eliezer describes here [http://lesswrong.com/lw/si/math_is_subjunctively_objective/]. The causal 'reason' that I want people to be happy is because of the desires in my brain, but the motivational 'reason' is because happiness matches {happiness + survival + justice + individuality + ...}, which sounds stupid, but that is how I make decisions; I look for what best matches against that pattern. [http://lesswrong.com/lw/sm/the_meaning_of_right/] These two reasons are important to distinguish - "If neutrinos make me believe '2 + 3 = 6', then 2 + 3 = 5". Here, people use the word 'morality' to describe an idealized version of their decision processes rather than to describe the desires embodied in their brain in order to emphasize that point, and also because of the large number of people that find this pseudo-equivalence nonobvious.
0XiXiDu10yIf you are confused about facts in the world then you are talking about epistemic rationality; why would one invoke 'morality' in this context?
-2endoself10yI'm not sure I understand this. Are you objecting to my use of the word 'idealized', on the grounds that preferences and facts are different things and uncertainty is about facts? I would disagree with that. Someone might have two conflicting but very strong preferences. For example, someone might be opposed to homosexuality based on a feeling of disgust but also have a strong feeling that people should have some sort of right to self-determination. Upon sufficient thought, they may decide that the latter outweighs the former and may stop feeling disgust at homosexuals as a result of that introspection. I believe that this situation is one that occurs regularly among humans.
-1Peterdjones10yBut that is not the answer if someone wants to murder someone. What you have here is actually a reductio ad absurdum of the simplistic theory that morals=desires.
4NMJablonski10yIt only isn't the answer if you have a problem with that particular person being murdered, or perhaps an objection to killing as a principle. I also would object to wanton, chaotic, and criminal killings, but that is because I have a complex network of preferences that inform that objection, not because murder has some intrinsic property of absolute "wrongness". It is all preferences, and to think otherwise is the most frequent and absurd delusion still prevalent in rationalist communities. Even when a moralistic rationalist admits that moral truths and absolutes do not exist, they continue operating as if they do. They will say: "Well, there may not be absolute morality, but we can still tell which actions are best for (survival of human race / equality among humans / etc)." The survival of the human race is a preference! One which not all possible agents share, as we are all keenly aware of in our discussions of the threat posed by superintelligent AI's that don't share our values. There is no obligation for any mind to adopt any values. You can complain about that reality. You can insist that your preferences are the one, true, good and noble preferences, but no rational agent is obligated, in any empirical sense, to agree with you.
3Amanojack10yIt just depends on if "should" is interpreted as "what would best fulfill my wants now" or "what would best fulfill your wants now" (or as something else entirely). We can't make sense of ethical language until we realize different people mean different things by it.
2Gray10yAnd that's what morality always was in the first place. It's a way of getting other people to do otherwise than what they wanted to do. No one would be convinced by "I don't want you to kill people", but if you can convince someone that "It is wrong to kill people", then you've created conflict in that person's desires. I wonder, in the end, if people here truly want to "be rational" about morality. Myself, I'm not rational about morality, I go along with it. I don't critique it in my personal life. For instance, I refuse to murder someone, no matter how rational it might be to murder someone. Stick to epistemic rationality, and instrumental rationality, but avoid at all costs normative rationality, is my opinion.
3[anonymous]10yThis is a widespread but mistaken theory of morality. After all, we don't - and can't - convincingly say that just any old thing is "wrong". Here, I'll alternate between saying that actually wrong things are wrong, and saying that random things that you don't want are wrong. Actually wrong: "it's wrong to kill people." Yup, it is. You just don't want it: "it's wrong for you to arrest me just because I stabbed this innocent bystander to death." Yeah, right. Actually wrong: "it's wrong to mug people." No kidding. You just don't want it: "it's wrong for you to lock your door when you leave the house, because it's wrong for you to do anything to prevent me from coming into your house and taking everything you own to sell on the black market". Not convincing. If there were nothing more to things being wrong than that you use the word "wrong" to get people to do things, then there would be no difference between these four attempts to get people to do something. But there is: in the first and third case, the claim that the action is wrong is true (and therefore makes a convincing argument). In the second and fourth case, the claim is false (and therefore makes for an unconvincing argument). Sure, you can use the word "wrong" to get people to do things that you want them to do, but you can use a lot of words for that. For example, if you're somebody's mother and you want them to avoid driving when they're very sleepy, you can tell them that it's "dangerous" to drive in that condition. But as with the word "wrong", you can't use the word "dangerous" for just any situation, because it's not true in just any situation. When a proposed action is really dangerous - or really wrong - then you can use that fact to convince them not to pursue that action. But it's still a fact, independent of whether you use it to get other people to do things you want.
0Amanojack10yObjective ethics on LW? I'm a little shocked. This whole post is basically argument from popularity (perhaps more accurate to call it argument from convincingness). Judgments of valuation may be universal or quasi-universal, but they are always subjective. Words like "right" and "wrong" (and "innocent" and "own") and other objective moralistic terms obscure this, so let me do some un-obscuring. You have this backwards: The claim makes a convincing argument (to you and many others), therefore you call the claim "right"; or the claim makes an unconvincing argument against the action, therefore you call the claim "wrong." Notice you had to tuck in the word "innocent," which already implies your conclusion [http://en.wikipedia.org/wiki/Begging_the_question] that it is "actually wrong" to harm the bystander. Here you used the word "own," which again already implies your conclusion that it is wrong to steal it. Both examples are purely circular. Most people are disgusted by killing and theft, and they may be counterproductive from most people's points of view, but that is just about all we can say about the matter - and all we need to say. We are disgusted, so we ban such actions. Moral right and wrong are not objective facts. The fact that you and I subjectively experience a moral reaction to killing and theft may be an objective fact, but the wrongness itself is not objective, even though it may be universal or near-universal (that is, even though almost everyone else may feel the same way). Universal subjective valuation is not objective valuation (this latter term is, I contend, completely meaningless - unless someone can supply a useful definition). Although he was speaking in the context of economics, Ludwig von Mises gave the most succinct explanation of why all valuation is subjective when he said, "We originally want or desire an object not because it is agreeable or good, but we call it agreeable or good because we want or desire it."
-2[anonymous]10yYou could say that about any word in the English language. Let's try this with the word "rain". On many occasions, a person may say "it's raining and therefore you should take an umbrella". On some occasions this claim will be false and people will know that it's false (e.g. because they looked out a window and saw that it wasn't raining), and so the argument will not be convincing. What you're doing here can be applied to this rain scenario. You could say: That is, the claim that it's raining makes a convincing argument on some occasions, and on those occasions you call the claim "right". On other occasions, the claim makes an unconvincing argument, and on those occasions you call the claim "wrong". So there, we've applied your theory about the concept of morality, to the concept of rain. Your theory could equally well be applied to any concept at all. That is, your theory is that when we are convinced by arguments that employ claims about morality, then we call the claims "right". But you could equally well come up with the theory that when we are convinced by arguments that employ claims about rain, then we call the claims "right". So what have we demonstrated? With your help, we have demonstrated that in this respect, morality is like rain. And like everything else. Morality is like atoms. Morality is like gravity - in this respect. You have highlighted a property of morality which is shared by absolutely everything else in the universe that we have a word for. And this property is, that you can come up with this reverse theory of it, according to which we call claims employing the term "right" when we are convinced by arguments using those claims. For me to be guilty of begging the question I would have to be trying to prove that a murder was committed in the hypothetical scenario. But it's a hypothetical scenario in which it is specified that the person committed murder. 
Here's the hypothetical scenario, more explicitly: someone has just committed a murder.
1Amanojack10yYou misread me, though perhaps that was my fault. Does the bold help? I was talking about you (Constant), not "you" in the general sense. I wasn't presenting a theory of morality; I was shedding light on yours by suggesting that you are only calling these things right or wrong because you find the arguments convincing. No, you'd have to be trying to justify your statement that "it is wrong to kill people," which it seems you were (likewise for the theft example). Maybe your unusual phrasing confused me as to what you were trying to show with that. Anyway, the daughter posts seem to show we agree on more than it appears here, so bygones. As for the rest about "[my] reverse theory of morality," that's all from the above misunderstanding. (Sorry to waste time with my unclear wording.)
-2[anonymous]10yOkay, but even on this reading you could "shed" similar "light" on absolutely any term that I ever use. You're not proving anything special about morality by that. To do that would require finding differences between morality and, say, rain, or apples. But if we were arguing about apples you could make precisely the same move that you made in this discussion about morality. Here's a parallel back-and-forth employing apples. Somebody says: I reply: Here, let me construct an example with apples. Somebody goes to Tiffany's, points to a large diamond on display, and says to an employee, "that is an apple, therefore you should be willing to sell it to me for five dollars, which is a great price for an apple." This claim is false, and therefore makes for an unconvincing argument. Somebody replies: * I interpret "right" and "wrong" here as meaning "true" and "false", because claims are true or false, and these are referring to claims here. To which they follow up: ** I am continuing the previous interpretation of "right" and "wrong" as meaning, in context here, "true" or "false". If this is not what you meant then I can easily substitute in what you actually meant, make the corresponding changes, and make the same point as I am making here. What all this boils down to is that my interlocutor is saying that I am only calling claims about apples true or false because I find the arguments that employ these claims convincing or unconvincing. For example, if I happen to be in Tiffany's and somebody points to one of the big shiny glassy-looking things with an enormous price tag and says to an employee, "that is an apple, and therefore you should be happy to accept $5 for it", then I will find that person's argument unconvincing. My interlocutor's point is that I am only calling that person's claim (that that object is an apple) false because I find his argument (that the employee should sell it to him for $5) unconvincing. 
Whereas my own account is as follows: I first o
1Amanojack10yIt seems to me that right and wrong being objective, just like truth and falsehood, is what you've been trying to prove all this time. To equate "right and wrong" with "true and false" by assumption would be to, well you know, beg the question. It's not surprising that it always comes back to circularity, because a circular argument is the same in effect as an unjustified assertion, and in fact that's become the theme of not just our exchange here, but this entire thread: "objective ethics are true by assertion." I think we agreed elsewhere that ethical sentiments are at least quasi-universal; is there something else we needed to agree on? Because the rest just looks like wordplay to me.
-2[anonymous]10yI'm not equating moral right and wrong with true and false. I was disambiguating some ambiguous words that you employed. The word "right" is ambiguous, because in one context it can mean "morally righteous", and in another context it can mean "true". I disambiguated the words in a certain direction because of the immediate textual context. Apparently that was not what you meant. Okay - so ideally I should go back and disambiguate the words in the opposite direction. However, I can tell you right now it will come to the same result. I don't really want to belabor this point so unless you insist, I'm not actually going to write yet another comment in which I disambiguate your terms "right" and 'wrong" in the moral direction.
0CuSithBell10yBut, ah, you can observe the properties of the object in question, and see that it has very few in common with the set of things that has generated the term "apple" in your mind, and many in common with "diamond". Is this the same sense in which you say we can simply "recognize" things as fundamentally good or evil? That would make these terms refer to "what my parents thought was good or evil, perturbed by a generation of meaning-learning". The problem there is - apples are generally recognizable. People disagree on what is right or wrong. Are even apples objective?
2[anonymous]10yPeople can disagree about gray areas between any two neighboring terms. Take the word "apple". Apple trees are, according to Wikipedia, the species "Malus domestica". But as evolutionary biologists postulated (correctly, as it turns out), species are gradually formed over hundreds or thousands or millions of years, and the question of what is "the first apple tree" is a question for which there is no crystal clear answer, nor would there be even if we had a complete record of every ancestor of the apple tree going back to the one-celled organisms. Rather, the proto-species that gave rise to the apple tree gradually evolves into the apple tree, and about very early apple trees two fully informed rational people might very well disagree about which ones are apple trees and which ones are proto-apple trees. This is nothing other than the sorites problem, the problem of the heap, the problem of the vagueness of concepts. It is universal and is not specifically true about moral questions. Morality is, I have argued, an aspect of custom. And it's true that people can disagree, on occasion, about whether some particular act violates custom. So custom is, like apples, vague to some degree. Both apples and custom can be used as examples of the sorites problem, if you're sick of talking about sand heaps. But custom is not radically indeterminate. Customs exist, just as apples exist.
2Amanojack10yWell I agree with this basically, and it reminds me of John Hasnas writing about customary legal systems. I find that when showing this to people I disagree with about ethics we usually end up in agreement:
2[anonymous]10yThe quote from John Hasnas seems to be very close to my own view.
0CuSithBell10yAh, okay! We don't disagree then. Thanks for clearing that up! ETA: Actually, with that clarification, I'd expect many others to agree as well - at least, it seems like what you mean by "custom" and what other posters have called "stuff people want you to do" coincide.
0[anonymous]10yAn important point is that nobody gets to unilaterally decide what is or is not custom. That's in contrast to, say, personal preference, which each person does get to decide for themselves.
0CuSithBell10yRight. Though I'd argue that custom implies that morality is objective, and therefore that custom can be incorrect, so that someone can coherently say that their own society's customs are immoral (though probably from within a subculture that supports those alternate customs).
0Amanojack10yThat's one of the things morality has been, and it could indeed be the main thing, but my point above is it all depends on what the person means. Even though getting other people to do something might be the main and most important role of moral language historically, it only invites confusion to overgeneralize here - though I know how tempting it is to simplify all this ethical nonsense floating around in one fell swoop. Some people do simply use "ought" to mean, "It is in your best interest to," without any desire to get the person to do something. Some people mean "God would disapprove," and maybe they really don't care if that makes you refrain from doing it or not, but they're just letting you know. These little counterexamples ruin the generalization, then we're back to square one. I think the only way to really simplify ethics is to acknowledge that people mean all sorts of things by it, and let each person - if anyone cares - explain what they intended in each case. No, scratch that. The reason ethics is so confused is precisely because people have tried to simplify a whole bunch of disparate-but-somewhat-interrelated notions into a single type of phrasing. A full explanation of everything that is called "ethics" would require examination of religion, politics, sociology, psychology, and much more. For most things that we think we want ethics for, such as AI, instead of trying to figure out that complex of sundry notions shoehorned into the category of ethics, I think we'd be better off just assiduously hugging the query [http://lesswrong.com/lw/ly/hug_the_query/] for each question we want to answer about how to get the results we want in the "moral" sphere (things that hit on your moral emotions, like empathy, indignation, etc.). Mostly I'm interested in this series of posts for the promise it presents for doing away with most of the confusion generated by wordplay such as "objective ethics," which I consider to be just an artifact of language.
0Clippy10yWhat if "should" is interpreted as "Instantiators of decision theories similar to this one would achieve a higher value on their utility function if similar decision theories would yield this action as output"?
0CuSithBell10yOne thing it seems to be used for around here is "what should you never do even if you should". E.g. it's usually a really bad idea (wrt your own wants) to murder someone, even in a large proportion of cases where you think it's a good idea.
0endoself10yIf you don't want someone to murder, you can try to stop them, but they aren't going to agree to not murder unless they want to.
-1Peterdjones10yWant to before they have had their preferences rearranged by moral exhortation, or after?
0endoself10yI was referring only to fully logical arguments. Obviously it is possible to prevent someone from murdering by expressing extreme disapproval or locking them up.
0FAWS10yI agree with you that morality can mostly be framed in terms of volition and an adequate decision theory, but I think you are oversimplifying. For example consider people talking about what other people should want purely for their own good. That might be explainable in terms of projecting their own wants in some way (or perhaps selfish self-delusion), but it doesn't seem like something you could easily predict in advance from reasoning about wants if you were unfamiliar with how people act among each other.
-1Morendil10yTalking about wants isn't necessarily any simpler than talking about shoulds. We seem to be just as confused about either. For instance, how many people say they want to be thin, yet overeat and avoid exercise?
2Alicorn10yI think "I want to be thin" has an implied "ceteris paribus". Ceteris ain't paribus. You could as well say, "How many people say they want to have money, yet spend it on housing, feeding, and clothing themselves and avoid stealing?"
4Clippy10yI want to have money, I don't spend it on clothing, and I do avoid stealing. Edit: This information may or may not be relevant to anyone's point.
0Morendil10yThere seems to be a difference here - how much money you earn isn't perceived as entirely a matter of choice, or at any rate there will be a significant and unavoidable lead time between deciding to earn more and actually earning more. Whereas body shape is within our immediate sphere of control: if we eat less and work out more, we'll weigh less and bulk up muscle mass, with results expected within days to weeks. When I say "I can move my arm if I want", this is readily demonstrated by moving my arm. Is this the same sense of "want" that people have in mind when they say "I want to eat less" or "I want to quit smoking"? The distinction that seems to appear here is between volition - making use of the connection between our brains and our various actuators - and preference - the model we use to evaluate whether an imagined state of the world is more desirable than another. We conflate both in the term "want". We are often quite confused as to what volitions will bring about states of the world that agree with our preferences. (How many times have you heard "That's not what I wanted to say/write"?)
5Alicorn10yI categorically reject your disanalogy from both directions. I have been eating about half as much as usual for the past week or so, because I'm on antibiotics that screw with my appetite. I look the same. Once, I did physically intense jujitsu twice a week for months on end, at least quadrupling the amount of physical activity I got in each week. I looked the same. If "eating less and working out more" put my shape under my "immediate sphere of control" with results "within days to weeks", this would not be the result. You are wrong. Your statements may apply to people with certain metabolic privileges, but not beyond. By contrast, if I suddenly decide that I want more money, I have a number of avenues by which I could arrange that, at least on a small scale. It would be mistaken of me to conclude from this abundance of available financial opportunity that everyone chooses to have the amount of money they have, and that people with less money are choosing to take fewer of the equally abundant opportunities they share with the rich.
0Morendil10yOK, allowing that the examples may have been poorly chosen - the main point I'm making is that people often a) say they want something, b) act in ways that do not bring about what they say they want. Your response above seems to be that when people say "I want to be thin", they are speaking strictly in terms of preference: they are expressing that they would prefer their world to be just as it is now, with the one amendment that they are a certain body type rather than their current. Similarly when saying they want money. There are other cases where volition and preferences appear at odds more clearly. People say "I want to quit smoking", but they don't, when it's their own voluntary actions which bring about an undesired state. The distinction seems useful, even if we may disagree on the specifics of how hard it is to align volition and preference in particular cases. I'm not the first [http://lesswrong.com/lw/1bj/the_shadow_question/] to observe that "What do you want" is a deeper question than it looks like, and that's what I meant to say in the original comment. When you examine it closely "do people actually want to smoke" isn't a much simpler question than "should there be a law against people smoking" or "is it right or wrong to smoke". It is possible that these questions are in fact entangled in such a way that to fully answer one is also to answer the others.
3Alicorn10yI think people sometimes use wanting language strictly in terms of preferences. I think people sometimes have outright contradictory wants. I think people are subject to compulsive or semi-compulsive behaviors that make calling "revealed preference!" on their actions a risky business. The post you linked to (I can't quite tell by your phrasing if you are aware that I wrote it) is about setting priorities between various desiderata, not about declaring some of those desiderata unreal because they take a backseat.
0Morendil10yYup. Not sure if you mean to imply I've been saying that. That wasn't my intention.
0wedrifid10yThis all seems true with the exception of 'by contrast'. You seem to have clearly illustrated a similarity between weight loss and financial gain. There are things that are under people's control but which things are under a given person's control vary by the individual and the circumstances. In both cases people drastically overestimate the extent to which the outcome is a matter of 'choice'.
0Alicorn10yThe "by contrast" paragraph is meant to illustrate how and why I reject the disanalogy "from both directions".
0Amanojack10yThe real distinction is between what you want to do now and what you want your future self to do later, though there's some word confusion obscuring that point. English is pretty bad at dealing with these types of distinctions, which is probably why this is a recurring discussion item.
1Amanojack10yPeople aren't confused about what they want in any given moment. They want to eat donuts, but they don't want to have eaten donuts. They don't want to exercise, but they do want to have exercised.
3[anonymous]10yThis is a pretty good reason to view humans not as a single moral agent, but as a collection of past, present, and future moral agents.
-2XiXiDu10yOughts are instrumental and wants are terminal. See my comments here [http://lesswrong.com/lw/3ew/newtonmas_meetup_12252010/3858] and here [http://lesswrong.com/lw/5eh/what_is_metaethics/418u].
5timtyler10yDisagree - I don't think that is supported by the dictionary. For instance, I want more money - which is widely regarded as being instrumental. Maybe you need to spell out what you actually meant here.
1XiXiDu10yOughts and wants are not mutually exclusive in their first-order desirability. "You ought to do what you want" is a basic axiom of volition. That implies that you also want what you ought. Yet a distinction, albeit minor, between ought and want is that the former is often a second-order desire, as it is instrumental to the latter primary goal.
-2timtyler10yWants are fairly straightforward, but oughts are often tangled up with society, manipulation and signalling. You appear to be presuming some other definition of ought - without making it terribly clear what it is that you are talking about.
0XiXiDu10yWhen it comes to goals, an intelligent agent is in a sense similar to a stone rolling down a hill: both are moving towards a sort of equilibrium. The difference is that intelligence is following more complex trajectories, as its ability to read and respond to environmental cues is vastly greater than that of a stone. And that is the reason why we perceive oughts to be mainly a fact about society: you ought not to be indifferent about the goals of other agents if they are instrumental to what you want. "Ought" statements are subjectively objective as they refer to the interrelationship between your goals and the necessary actions to achieve them. "Ought" statements point out the necessary consistency between means and ends. If you need to pursue action X to achieve want Y, you ought to want to do X.

I'm taking a college metaethics class right now, and you have just neatly summarized everything it covers. Thanks!

It's a problem to assert that you've determined which of A and B is accurate, but that there isn't a way to determine which of A and B is accurate.

Edited to clarify: When I wrote this, the parent post started with the line "You say that like it's a problem."

-3Peterdjones10yI haven't asserted that any definition of "Morality" can jump through the hoops set up by NMJ and co., but there is an (averagely for Ordinary Language) inaccurate definition which is widely used.
3CuSithBell10yThe question in this thread was not "define Morality" but "explain how you determine which of "Killing innocent people is wrong barring extenuating circumstances" and "Killing innocent people is right barring extenuating circumstances" is morally right." (For people with other definitions of morality and / or other criteria for "rightness" besides morality, there may be other methods.)
-2Peterdjones10yThe question was rather unhelpfully framed in Jublowskian terms of "observable consequences". I think killing people is wrong because I don't want to be killed, and I don't want to Act on a Maxim I Would Not Wish to be Universal Law.
3NMJablonski10yMy name is getting all sorts of U's and W's these days. If there was a person who decided they did want to be killed, would killing become "right"?
-3Peterdjones10yDoes he want everyone to die? Does he want to kill them against their wishes? Are multiple agents going to converge on that opinion?
3CuSithBell10yWhat are the answers under each of those possible conditions (or, at least, the interesting ones)?
-3Peterdjones10yWhy do you need me to tell you? Under normal circumstances the normal "murder is wrong" answer will obtain -- that's the point.
3CuSithBell10yBecause I'm trying to have a discussion with you about your beliefs? Looking at this I find it hard to avoid concluding that you're not interested in a productive discussion - you asked a question about how to answer a question, got an answer, and refused to answer it anyway. Let me know if you wish to discuss with me as allies instead of enemies, but until and unless you do I'm going to have to bow out of talking with you on this topic.
-1Peterdjones10yI believe murder is wrong. I believe you can figure that out if you don't know it. The point of having a non-eliminative theory of ethics is that you want to find some way of supporting the common ethical intuitions. The point of asking questions is to demonstrate that it is possible to reason about morality: if someone answers the questions, they are doing the reasoning.
5JoshuaZ10yThis seems problematic. If that's the case, then your ethical system exists solely to support the bottom line. That's just rationalizing, not actual thinking. Moreover, it doesn't tell you anything helpful when people have conflicting intuitions or when you don't have any strong intuition, and those are the generally interesting cases.
-2Peterdjones10yA system that could support any conclusion would be useless, and a system that couldn't support the strongest and most common intuitions would be pretty incredible. A system that doesn't suffer from quodlibet isn't going to support both of a pair of contradictory intuitions. And that's pretty well the only way of resolving such issues. The rightness and wrongness of feelings can't help.
1JoshuaZ10ySo to make sure I understand, you are trying to make a system that agrees with and supports all your intuitions, and you hope that the system will then give unambiguous answers where you don't have intuitions? I don't think that you realize how frequently our intuitions clash, not just the intuitions of different people, but even one's own intuitions (for most people at least). Consider, for example, train car problems. Most people, whether or not they will pull the lever or push the fat person, feel some intuition for either solution. And train problems are by far not the only example of a moral dilemma that causes that sort of issue. Many mundane, real life situations, such as abortion, euthanasia, animal testing, the limits of consent, and many other issues cause serious clashes of intuitions.
0Peterdjones10yI want a system that supports core intuitions. A consistent system can help to disambiguate intuitions.
3JoshuaZ10yAnd how do you decide which intuitions are "core intuitions"?
2CuSithBell10yIn this post [http://lesswrong.com/lw/5eh/what_is_metaethics/41jb]: "How do you determine which one is accurate?" In your response [http://lesswrong.com/lw/5eh/what_is_metaethics/41q2] further down the thread: "I am not dodging [that question]. I am arguing that [it is] inappropriate to the domain [...]" And then my post [http://lesswrong.com/lw/5eh/what_is_metaethics/41rf]: "But you already have determined that one of them is accurate, right?" That question was not one phrased in the way you object to, and yet you still haven't answered it. Though, at this point it seems one can infer (from the parent post) that the answer is something like "I reason about which principle is more beneficial to me."

This gets silly.

"Do you believe in woojits?" Well, no, I don't.

"Ah, well, if you disbelieve in woojits, then you must know what woojits are! So, what are woojits?" I have no idea.

"But how is that possible? If you don't have a definition for woojits, on what basis do you reject belief in them?" Having a well-defined notion of something is a prerequisite for belief in it; I don't have a well-defined notion of woojits; therefore I don't believe in woojits.

"No, no. You're confused. All woojit-disbelievers have to adopt a br...

You're aware that words have more than one definition, and in debates it is customary to define key terms before beginning? Perhaps I could interest you in this.

-1Peterdjones10yThe debate, which seems to be over, was largely about whether the word has any meaning at all.

You think that claiming to have no understanding at all of ordinary words is getting at reality?

It's almost never sufficient, but it is often necessary to discard wrong words.

-2Peterdjones10y...and it's necessary to have a reasoned motivation for that. If you could really disprove things just by unmotivated refusal to use language, you could disprove everything. Meta-principle: treat one-size-fits-all arguments with suspicion.
2Cyan10yAround here we call those "fully general counter-arguments [http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/]". ETA: you've misunderstood the grandparent, the point of which is not about a refusal to use language but rather about using it more precisely so as to avoid miscommunication and errors.
-2Peterdjones10yI have not noticed NMJabalonski offering a more precise replacement vocabulary.
8[anonymous]10yProbably because he doesn't know what to replace it with. You introduced the words into the conversation. We're trying to figure out what you mean by them.
1NMJablonski10yThis summarizes the situation nicely I think. Thanks.
5NMJablonski10yI have not been offering one. I have been requesting one. I don't see any substantive, real world connection to words like "good" or "moral" in this context. I am assuming you do mean something real by them, and I am asking you to convey that meaning by using simpler words that we both already understand in concrete terms.

I must confess I'm having trouble with that flowchart, specifically the first question about whether a moral judgment expresses a belief, and emotivism being on the "no" side. Doesn't, "Ew, murder" express the belief that murder is icky?

To put it another way, I'm having trouble reconciling the map of what people argue about the nature of morality, with what I know of how at least my brain processes moral belief and judgment.

That is, ISTM that moral judgments at the level where emotion and motivation are expressed do not carry any factu...

6wedrifid10yNo. The belief and that feeling and expression will be correlated but one is not the other. It isn't especially difficult or unlikely for them to differ. It would be possible to declare a model in which the "Ew, murder" reaction is defined as an expression of belief. But it isn't a natural one and would not fit with the meaning of natural language.
4pjeby10yThat depends on how you define "belief". My definition is that a "belief" is a representation in your brain that you use to make predictions or judgments about reality. The emotion experienced in response to thinking of the prohibited or "icky" behavior is the direct functional expression of that belief. I have noticed that sometimes people on LW use the term "alief" to refer to such beliefs, but I don't consider that a natural usage. In natural usage, people refer to intellectual vs. emotional beliefs, rather than artificially limiting the term "belief" to only include verbal symbolism and abstract propositions.
0wedrifid10yThe definition as you actually write it here isn't bad. The conclusion just doesn't directly follow the way you say it does unless you modify that definition with some extra bits to make the world a simpler place.
1lukeprog10ywedrifid is correct. Another way to grok the distinction: Imagine that you were testifying at a murder trial, and somebody asked you if you had killed your mother with a lawnmower. You reply "Lawnmower!" with a disgusted tone. Now, the prosecutor asks, "Do you mean to claim that lawnmower is X, or that the thought of killing somebody with a lawnmower is disgusting?" And you could rightly reply, "It may be the case that I believe that lawnmower is X, or that the thought of killing somebody with a lawnmower is disgusting, but I have claimed no such things merely by saying 'Lawnmower!'"
7pjeby10yYou're speaking of claims in language; I'm speaking of brain function. Functionally, I have observed that the emotions behind such statements are an integral portion of the "belief", and that verbal descriptions of belief such as "murder is bad" or "you shouldn't murder" are attempts to explain or justify the feeling. (In practice, the things I work with are less morally relevant than murder, but the process is the same.) (See also your note that people continue to justify their judgments on the basis of confabulated consequences even when the situation has been specifically constructed to remove them as a consideration.)
3ata10yI don't think that's a belief. What factual questions would distinguish a world where murder is icky from one where murder is not icky?
2pjeby10yBeliefs can be wrong, but that doesn't make them non-beliefs. Any belief of the form "X is Y" (especially where Y is a judgment of goodness or badness) is likely either an instance of the mind projection fallacy, or a simple by-definition tautology. Again, however, this doesn't make it not-a-belief, it's just a mistaken or poorly-understood belief. (For example, expansion to "I find murder to be icky" trivially fixes the error.)

However, these claims are false, so you have to make a different argument.

I've seen this sort of substitution-argument a few times recently, so I'll take this opportunity to point out that arguments have contexts, and if it seems that an argument does not contain all the information necessary to support its conclusions (because directly substituting in other words produces falsehood), this is because words have meanings, steps are elided, and there are things true and false in the world. This does not invalidate those arguments! These elisions are in fact...

3TimFreeman10yThese substitution arguments are quite a shortcut. The perpetrator doesn't actually have to construct something that supports a specific point; instead, they can take an argument they disagree with, swap some words around, leave out any words that are inconvenient, post it, and if the result doesn't make sense, the perpetrator wins! Making a valid argument about why the substitution argument doesn't make sense requires more effort than creating the substitution argument, so if we regard discussions here as a war of attrition, the perpetrator wins even if you create a well-reasoned reply to him. Substitution arguments are garbage. I wish I knew a clean way to get rid of them. Thanks for identifying them as a thing to be confronted.
1CuSithBell10yCool, glad I'm not just imagining things! I think that sometimes this sort of argument can be valuable ("That person also has a subjective experience of divine inspiration, but came to a different conclusion", frex), but I've become more suspicious of them recently - especially when I'm tempted to use one myself.
2[anonymous]10yThing is, this is a general response to virtually any criticism whatsoever. And it's often true! But it's not always a terribly useful response. Sometimes it's better to make explicit that bit of context, or that elided step. Moreover it's also a good thing to remember about the other guy's argument next time you think his conclusions obviously do not follow from his (explicitly stated) premises - that is, next time you see what looks to you to be an invalid argument, it may not be even if strictly on a formal level it is, precisely because you are not necessarily seeing everything the other guy is seeing. So, it's not just about substitutions. It's a general point.
0CuSithBell10yTrue! This observation does not absolve us of our eternal vigilance. Emphatically agreed.

I agree with Sewing-Machine

Being bloodthirsty would lead to results I do not prefer.

ETA: Therefore I would not choose to become bloodthirsty. This is based on existing preference.

I do not believe there is a set of correct preferences. There is no objective right or wrong.

But science isn't about words like "exist", "true", or "false". Science is about words like "Frozen water is less dense than liquid water". I can point at frozen water, liquid water, and a particular instance of the former floating on the latter. Scientific claims were well-defined even before there was enough knowledge to evaluate them. I can't point at anything for claims about morality, so the analogy between ethics and science is not valid.

Come on people. Argument by analogy doesn't prove anything even when ...

-1Peterdjones10yYou can't point at anything for claims about pure maths either. That something is not empirical does not automatically invalidate it. Morality is not just social signalling, because it makes sense to say some social signals ("I am higher status than you because I have more slaves") are morally wrong.
-2wedrifid10yThat conclusion does not follow. Saying you have slaves is a signal about morality and, depending on the audience, often a bad signal.
0Peterdjones10yNote that there is a difference between "morality is about signalling" and "signalling is about morality". If I say "I am high status because I live a moral life" I am blatantly using morality to signal, but it doesn't remotely follow from that there is nothing to morality except signalling. It could be argued that, morally speaking, I should pursue morality for its own sake and not to gain status.
1wedrifid10yThat sounds like an effective signal to send - and a common one.
-1Eugine_Nier10yOnly because the force of the word "exists" is implicit in the indicative mood of the word "is". But they can help explain what people mean, and they can show arguments prove too much. I could draw an equally complicated flow chart about what "truth" and "exists"/"is" might mean. The amount of consensus is roughly the same as the amount of consensus there was before the development of science about which statements are true and which aren't. People had strong opinions about truth before the concept of empirical validation was developed.
4Amanojack10yYour criticisms of "truth" are not so far off, but you're essentially saying that parts of science are wrong so you can be wrong, too. No actually, you think it is OK to flounder around in the field when you're just starting out. Sure, but not when you don't even know what it is you're supposed to be studying - if anything! This is not analogous to physics, where the general goal was clear from the very beginning: figure out what physical mechanisms underly macro-scale phenomena, such as the hardness of metal, conductivity, magnetic attraction, gravity, etc. You're just running around to whatever you can grab onto to avoid the main point that there is nothing close to a semblance of delineation of what this "field" is actually about, and it is getting tiresome.
-1Peterdjones10yI think the claim that ethicists don't know at all what they are studying is unfounded.
-1Eugine_Nier10yI believe this is hindsight bias [http://wiki.lesswrong.com/wiki/Hindsight_bias] .
0Amanojack10yUgg in 65,000 BC: Why water fire no mix? Why rock so hard? Why tree have shadow? Eugine in 2011: What is the True Theory of Something-or-Other?

I have access to a number of dictionaries which, while written entirely in English, contain many definitions. Please, emulate them.

-1Peterdjones10ymorality: concern with the distinction between good and evil or right and wrong; right or good conduct. good: morally admirable. Ethics (also known as moral philosophy) is a branch of philosophy which seeks to address questions about morality; that is, about concepts such as good and bad, right and wrong, justice, and virtue.
5Amanojack10yLet me try to guess the next few moves in hopes of speeding this up: A: Admirable according to whom? (And why'd you use "morally" in the definition of "morality"?) B: Most people. / Everyone. / Everyone who matters. A: So basically, if a lot of people or everyone admires something, it is morally good? It's a popularity contest? B: No, it's just objectively admirable. A: I don't understand what it would mean to be "objectively admirable"? B: These are two common words. How can you not understand them? A: Each might make sense separately, but together no. Perhaps you mean "universally admirable"? B: Yeah, that sounds good. A: So basically, if everyone admires something, you will want to call it "morally good." They will probably appreciate and agree to those approving words, seeing as they all admire it as well. Or...?
3NMJablonski10ySo... "Something is moral if it is good." and "Something is good if it is moral." ?
3Alicorn10yI think "admirable" might break the circle and ground the definitions, albeit tenuously.
4NMJablonski10yIt could, that's true. Only, I think, if we clear up who's doing the admiring. There would be disagreement among a lot of people as to what's admirable.

Where do the views expressed in the book The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It fit in? I'm assuming this is some form of non-cognitivism?

0lukeprog10yIt has been a few years since I read Greene's dissertation. I need to re-read it sometime. Perhaps someone else can answer...

It turned out you were wrong about this! However you'd like it phrased - you did an experiment that failed to confirm your hypothesis, you need to Notice Your Surprise, etc. - you should update on this information.


I'm going to give this one last shot. Can you explain, succinctly, what you're talking about when you say "morality"?

0Peterdjones10yconcern with the distinction between good and evil or right and wrong; right or good conduct
8NMJablonski10yWhat is it about conduct that makes it right and good as opposed to wrong and evil? What is it that determines these attributes, if not human preference?
1[anonymous]10yTaboo your words [http://lesswrong.com/lw/nu/taboo_your_words/]

I think, just like politics, this site should avoid the topic of ethics as much as possible. Most of the "science" of ethics is just post-Christian nonsense. Seriously, read Nietzsche. I don't trust any of this talk about ethics by someone who hasn't read, and understood, Nietzsche.

I reject your appeal to authority or sophistication. I also suggest you are confused about what discussion of metaethics entails.

The 'meta' implies that the discussions of ethics can be separated entirely from normative moralizing and be engaged with as a purely ep...

0torekp10yIf the meta in metaethics meant that, I'd say it's impossible, for roughly [http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0114.2006.00246.x/abstract] these [http://quod.lib.umich.edu/cgi/t/text/text-idx?c=phimp;rgn=main;idno=3521354.0008.006] reasons.
2Sniffnoy10yThese links don't work.
0torekp10yThanks; fixed. Back to school for me on mouseover text.

You correctly point out problems with classical utilitarianism; nonetheless, downvoted for equating utilitarianism in general with classical utilitarianism in particular, as well as being irrelevant to the comment it was replying to. And a few other things.

... but they're held morally accountable by agents whose preferences have been violated. The way you just described it means that morality is just those rules that the people around you currently care enough about to punish you if you break them.

In which case morality is entirely subjective and contingent on what those around you happen to value, no?

-1Peterdjones10yIt can make sense to say that the person being punished was actually in the right. Were the British right to imprison Gandhi?
4JoshuaZ10yPeter, at this point, you seem very confused. You've asserted that morality is just like chess apparently comparing it to a game where one has agreed upon rules. You've then tried to assert that somehow morality is different and is a somehow more privileged game that people "should" play but the only evidence you've given is that in societies with a given moral system people who don't abide by that moral system suffer. Yet your comment about Gandhi then endorses naive moral realism. It is possible that there's a coherent position here and we're just failing to understand you. But right now that looks unlikely.
-2Peterdjones10yAs I have pointed out about three times, the comparison with chess was to make a point about obligation, not to make a point about arbitrariness. I never gave that; that was someone else's characterisation. What I said was that it is an analytical truth that morality is where the evaluative buck stops. I don't know what you mean by the "naive" in naive realism. It is a central characteristic of any kind of realism that you can have truth beyond conventional belief. The idea that there is more to morality than what a particular society wants to punish is a coherent one. It is better as morality, because subjectivism is too subject to get-out clauses. It is better as an explanation, because it can explain how de facto morality in societies and individuals can be overturned for something better.
0wedrifid10yYes, roughly speaking when the person being punished fits into the category 'us' rather than 'them'. Especially 'me'.

Non-cognitivists, in contrast, think that moral discourse is not truth-apt.

Technically, that's not quite right (except for the early emotivists, etc.). Contemporary expressivists and quasi-realists insist that they can capture the truth-aptness of moral discourse (given the minimalist's understanding that to assert 'P is true' is equivalent to asserting just 'P'). So they will generally explain what's distinctive about their metaethics in some other way, e.g. by appeal to the idea that it's our moral attitudes rather than their contents that have a certain central explanatory role...

0lukeprog10yFair enough. I adjusted the wording in the original post. Thanks.

We can also ask whether some de facto behavior is really vorpal. That raises the question of what "really vorpal" means. Luckily, I can tell you what it really means: nothing at all.

If you claim the word "moral" means something that I - and most people who use that word - don't know that it means, then 1) you have to tell us what it means as the start of any discussion instead of asking us what it means, and 2) you should really use a new word for your new idea.

The study of those behaviours is descriptive ethics. The prescription of those behaviours is normative ethics.

Thanks for the correction.

0Peterdjones10yNegative solutions are possible, as I said. I didn't claim that. I did say that a precise and correct definition requires coming up with a correct theory. But coming up with a correct theory only requires the imprecise pretheoretical definition, and everyone already has that. (I wasn't asking for it because I don't know it, I was asking for it to remind people that they already have it). If I had promised a correct theory, I would have implicitly promised a post-theoretic definition to go with it. But I didn't make the first promise, so I am not committed to the second. The whole thing is aimed as a correction to the idea that you need to have, or can have, completely clear and accurate definitions from the get-go. People should read carefully, and note that I never claimed to have a New Idea.
1DanArmak10yI take it you mean negative solutions to the question: does "morality" have a meaning we don't precisely know yet? What I'm saying is that it's your burden to show that we should be considering this question at all. It's not clear to me what this question means or how and why it arises in your mind. It's as if you said you were going to spend a year researching exactly what cars mean. And I asked: what does it mean for cars to "mean" something that we don't know? It's clearly not the same as saying the word "cars" refers to something, because it can't refer to something we don't know about; a word is defined only by the way we use it. And cars at least exist as physical objects, unlike morality. So before we talk about possible answers (or the lack of them), I'm asking you to explain to me the question being discussed. What does the question mean? What kind of objects can be the answer - can morality "mean" that ice cream is sweet, or is that the wrong type of answer? What is the test used to judge if an answer is true or false? Is there a possibility two people will never agree even though one of their answers is objectively true (like in literature, and unlike in mathematics)? If we only have an inaccurate definition for morality right now, and someone proposes an accurate one, how can we tell if it's correct?
0Peterdjones10yNo, by negative answers, I mean things like error theories in metaethics. I think your other questions don't have obvious answers. If you think that the lack of obvious answers should lead to something like "ditch the whole thing", we could have a debate about that. Otherwise, you're not saying anything that hasn't been said already.

What is the difference between:

"Killing innocent people is wrong barring extenuating circumstances"


"Killing innocent people is right barring extenuating circumstances"

How do you determine which one is accurate? What observable consequences does each one predict? What do they lead you to anticipate?

0Eugine_Nier10yMoral facts don't lead me to anticipate observable consequences, but they do affect the actions I choose to take.
3[anonymous]10yPreferences also do that.
0Eugine_Nier10yYes, well opinions also anticipate observations. But in a sense, by talking about "observable consequences" you're taking advantage of the fact that the meta-theory of science is currently much more developed than the meta-theory of ethics.
2CuSithBell10yThe question was - how do you determine what the moral facts are?
1Eugine_Nier10yCurrently, intuition. Along with the existing moral theories, such as they are. Similar to the way people determined facts about physics, especially facts beyond the direct observation of their senses, before the scientific method was developed.
5CuSithBell10yRight, and 'facts' about God. Except that intuitions about physics derive from observations of physics, whereas intuitions about morality derive from observations of... intuitions. You can't really argue that objective morality not being well-defined means that it is more likely to be a coherent notion.
3Eugine_Nier10yMy point is that you can't conclude the notion of morality is incoherent simply because we don't yet have a sufficiently concrete definition.
6CuSithBell10yTechnically, yes. But I'm pretty much obliged, based on the current evidence, to conclude that it's likely to be incoherent. More to the point: why do you think it's likely to be coherent?
5Eugine_Nier10yMostly by outside view [http://wiki.lesswrong.com/wiki/Outside_view] analogy with the history of the development of science. I've read a number of ancient Greek and Roman philosophers (along with a few post-modernists) arguing against the possibility of a coherent theory of physics using arguments very similar to the ones people are using against morality. I've also read a (much larger) number of philosophers trying to shoehorn what we today call science into using the only meta-theory then available in a semi-coherent state: the meta-theory of mathematics. Thus we see philosophers, Descartes being the most famous, trying and failing to study science by starting with a set of intuitively obvious axioms and attempting to derive physical statements from them. I think people may be making the same mistake by trying to force morality to use the same meta-theory as science, i.e., asking what experiences moral facts anticipate. As for how likely this is: I'm not sure, I just think it's more likely than a lot of people on this thread assume.
3CuSithBell10yTo be clear - you are talking about morality as something externally existing, some 'facts' that exist in the world and dictate what you should do, as opposed to a human system of don't be a jerk. Is that an accurate portrayal? If that is the case, there are two big questions that immediately come to mind (beyond "what are these facts" and "where did they come from") - first, it seems that Moral Facts would have to interact with the world in some way in order for the study of big-M Morality to be useful at all (otherwise we could never learn what they are), or they would have to be somehow deducible from first principles. Are you supposing that they somehow directly induce intuitions in people (though, not all people? so, people with certain biological characteristics?)? (By (possibly humorous, though not mocking!) analogy, suppose the Moral Facts were being broadcast by radio towers on the moon, in which case they would be inaccessible until the invention of radio. The first radio is turned on and all signals are drowned out by "DON'T BE A JERK. THIS MESSAGE WILL REPEAT. DON'T BE A JERK. THIS MESSAGE WILL...".) The other question is, once we have ascertained that there are Moral Facts, what property makes them what we should do? For instance, suppose that all protons were inscribed in tiny calligraphy in, say, French, "La dernière personne qui est vivant, gagne." ("The last person who is alive, wins" - apologies for Google Translate) Beyond being really freaky, what would give that commandment force to convince you to follow it? What could it even mean for something to be inherently what you should do? It seems, ultimately, you have to ask "why" you should do "what you should do". Common answers include that you should do "what God commands" because "that's inherently What You Should Do, it is By Definition Good and Right". Or, "don't be a jerk" because "I'll stop hanging out with you". Or, "what makes you happy and fulfilled, including the part of you that de...
0Eugine_Nier10yNow we're getting somewhere. What do you mean by the word "jerk" and why is it any more meaningful than words like "moral"/"right"/"wrong"?
2CuSithBell10yThe distinction I am trying to make is between Moral Facts Engraved Into The Foundation Of The Universe and A Bunch Of Words And Behaviors And Attitudes That People Have (as a result of evolution & thinking about stuff etc.). I'm not sure if I'm being clear, is this description easier to interpret?
2Eugine_Nier10yNear as I can tell, what you mean by "don't be a jerk" is one possible example of what I mean by morality. Hope that helps.
2CuSithBell10yGreat! Then I think we agree on that.
3JGWeissman10yIf that is true, what virtue do moral facts have which is analogous to physical facts anticipating experience, and mathematical facts being formally provable?
0Eugine_Nier10yIf I knew the answer we wouldn't be having this discussion.
2Amanojack10yDefine your terms, then you get a fair hearing. If you are just saying the terms could maybe someday be defined, this really isn't the kind of thing that needs a response. To put it in perspective, you are speculating that someday you will be able to define what the field you are talking about even is. And your best defense is that some people have made questionable arguments against this non-theory? Why should anyone care?
0Eugine_Nier10yAfter thinking about it a little I think I can phrase it this way. I want to answer the question: "What should I do?" It's kind of a pressing question since I need to do something (doing nothing counts as a choice and usually not a very good one). If the people arguing that morality is just preference answer: "Do what you prefer", my next question is "What should I prefer?"
1[anonymous]10yThree definitions of "should": the "should" of obligation, the "should" of propriety, and the "should" of expediency.

As for obligation - I doubt you are under any obligation other than to avoid the usual uncontroversially nasty behavior, along with any specific obligations you may have to specific people you know. You would know what those are much better than I would. I don't really see how an ordinary person could be all that puzzled about what his obligations are.

As for propriety - over and above your obligation to avoid uncontroversially nasty behavior, I doubt you have much trouble discovering what's socially acceptable (stuff like, not farting in an elevator), and anyway, it's not the end of the world if you offend somebody. Again, I don't really see how an ordinary person is going to have a problem.

As for expediency - I doubt you intended the question that way.

If this doesn't answer your question in full you probably need to explain the question. The utilitarians have this strange notion that morality is about maximizing global utility, so of course, morality in the way that they conceive it is a kind of life-encompassing total program of action, since every choice you make could either increase or decrease total utility. Maybe that's what you want answered, i.e., what's the best possible thing you could be doing. But the "should" of obligation is not like this. We have certain obligations but these are fairly limited, and don't provide us with a life-encompassing program of action. And the "should" of propriety is not like this either. People just don't pay you any attention as long as you don't get in their face too much, so again, the direction you get from this quarter is limited.
-1Peterdjones10yYou have collapsed several meanings of obligation together there. You may have explicit legal obligations to the state, and IOU-style obligations to individuals who have done you a favour, and so on. But moral obligations go beyond all those. If you are living in a brutal dictatorship, there are conceivable circumstances where you morally should not obey the law. Etc., etc.
0Amanojack10yIn order to accomplish what? Should you prefer chocolate ice cream or vanilla? As far as ice cream flavors go, "What should I prefer" seems meaningless...unless you are looking for an answer like, "It's better to cultivate a preference for vanilla because it is slightly healthier" (you will thereby achieve better health than if you let yourself keep on preferring chocolate). This gets into the time structure of experience. In other words, I would be interpreting your, "What should I prefer?" as, "What things should I learn to like (in order to get more enjoyment out of life)?" To bring it to a more traditionally moral issue, "Should I learn to like a vegetarian diet (in order to feel less guilt about killing animals)?" Is that more or less the kind of question you want to answer?
0wedrifid10yIncluding the word 'just' misses the point. Being about preference in no way makes it less important.
0[anonymous]10yThis might have clarified for me what this dispute is about. At least I have a hypothesis, tell me if I'm on the wrong track. Antirealists aren't arguing that you should go on a hedonic rampage -- we are allowed to keep on consulting our consciences to determine the answer to "what should I prefer." In a community of decent and mentally healthy people we should flourish. But the main upshot of the antirealist position is that you cannot convince people with radically different backgrounds that their preferences are immoral and should be changed, even in principle. At least, antirealism gives some support to this cynical point of view, and it's this point of view that you are most interested in attacking. Am I right?
-2Eugine_Nier10yThat's a large part of it. The other problem is that anti-realists don't actually answer the question "what should I do?", they merely pass the buck to the part of my brain responsible for my preferences but don't give it any guidance on how to answer that question.
-1TimFreeman10yTalk about morality and good and bad clearly has a role in social signaling. It is also true that people clearly have preferences that they act upon, imperfectly. I assume you agree with these two assertions; if not we need to have a "what color is the sky?" type of conversation. If you do agree with them, what would you want from a meta-ethical theory that you don't already have?
1Eugine_Nier10ySomething more objective/universal. Edit: a more serious issue is that just as equating facts with opinions tells you nothing about which opinions you should hold, equating morality with preference tells you nothing about what you should prefer.
7TimFreeman10ySo we seem to agree that you (and Peterdjones) are looking for an objective basis for saying what you should prefer, much as rationality is a basis for saying what beliefs you should hold. I can see a motive for changing one's beliefs, since false beliefs will often fail to support the activity of enacting one's preferences. I can't see a motive for changing one's preferences - obviously one would prefer not to do that. If you found an objective basis for saying what you should prefer, and it said you should prefer something different from what you actually do prefer, what would you do? If you live in a social milieu where people demand that you justify your preferences, I can see something resembling morality coming out of those justifications. Is that your situation? I'd rather select a different social milieu, myself.
3handoflixue10yI recently got a raise. This freed up my finances to start doing SCUBA diving. SCUBA diving benefits heavily from me being in shape. I now have a strong preference for losing weight, and reinforced my preference for exercise, because the gains from both activities went up significantly. This also resulted in having a much lower preference for certain types of food, as they're contrary to these new preferences. I'd think that's a pretty concrete example of changing my preferences, unless we're using different definitions of "preference."
1TimFreeman10yI suppose we are using different definitions of "preference". I'm using it as a friendly term for a person's utility function, if they seem to be optimizing for something, or we say they have no preference if their behavior can't be understood that way. For example, what you're calling food preferences are what I'd call a strategy or a plan, rather than a preference, since the end is to support the SCUBA diving. If the consequences of eating different types of food magically changed, your diet would probably change so it still supported the SCUBA diving.
2handoflixue10yAhh, I re-read the thread with this understanding, and was struck by this: It seems to me that the simplest way to handle this is to assume that people have multiple utility functions. Certain utility functions therefore obviously benefit from damaging or eliminating others. If I reduce my akrasia, my rationality, truth, and happiness values are probably all going to go up. My urge to procrastinate would likewise like to eliminate my guilt and responsibility. Presumably anyone who wants a metaethical theory has a preference that would be maximized by discovering and obeying that theory. This would still be weighted against their existing other preferences, same as my preference for rationality has yet to eliminate akrasia or procrastination from my life :) Does that make sense as a "motivation for wanting to change your preferences"?
3TimFreeman10yI agree that akrasia is a bad thing that we should get rid of. I like to think of it as a failure to have purposeful action, rather than a preference. My dancing around here has a purpose. You see, I have this FAI specification that purports to infer everyone's preference and take as its utility function giving everyone some weighted average of what they prefer. If it infers that my akrasia is part of my preferences, I'm screwed, so we need a distinction there. Check http://www.fungible.com. It has a lot of bugs that are not described there, so don't go implementing it. Please. In general, if the FAI is going to give "your preference" to you, your preference had better be something stable about you that you'll still want when you get it. If there's no fix for akrasia, then it's hard to say in what sense I want to do something worthwhile but am stopped by akrasia; it makes as much sense to assume I'm spewing BS about stuff that sounds nice to do, but I really don't want to do it. I certainly would want an akrasia fix if it were available. Maybe that's the important preference.
2TheOtherDave10yVery much agreed.
0TimFreeman10yAt the end of the day, you're going to prefer one action over another. It might make sense to model someone as having multiple utility functions, but you also have to say that they all get added up (or combined some other way) so you can figure out the immediate outcome with the best preferred expected long-term utility and predict the person is going to take an action that gets them there.
1handoflixue10yI don't think very many people actually act in a way that suggests consistent optimization around a single factor; they optimize for multiple conflicting factors. I'd agree that you can evaluate the eventual compromise point, and I suppose you could say they optimize for that complex compromise. For me, it happens to be easier to model it as conflicting desires and a conflict resolution function layered on top, but I think we both agree on the actual result, which is that people aren't optimizing for a single clear goal like "happiness" or "lifetime income". Prediction seems to run into the issue that utility evaluations change over time. I used to place a high utility value on sweets; now I do not. I used to live in a location where going out to an event had a much higher cost, and thus was less often the ideal action. So on. It strikes me as being rather like weather: you can predict general patterns, and even manage a decent 5-day forecast, but you're going to have a lot of trouble making specific long-term predictions.
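The model the two commenters converge on here -- several drives, each with its own utility function, plus a resolution layer that weights them into a single choice -- can be sketched in a few lines of Python. The drives, actions, weights, and numbers below are invented for illustration, not taken from the thread:

```python
def choose(actions, drives, weights):
    """Pick the action maximizing the weighted sum of per-drive utilities.

    drives:  {drive_name: {action: utility}}
    weights: {drive_name: float} -- the "conflict resolution" layer
    """
    def combined(action):
        return sum(weights[name] * util[action] for name, util in drives.items())
    return max(actions, key=combined)

# Two conflicting drives: "health" favors the gym, "comfort" favors the couch.
drives = {
    "health":  {"gym": 10, "couch": 0},
    "comfort": {"gym": 2,  "couch": 7},
}

# Equal weights: gym wins (10 + 2 = 12 vs 0 + 7 = 7).
print(choose(["gym", "couch"], drives, {"health": 1.0, "comfort": 1.0}))

# Downweight "health": couch wins (2 + 2 = 4 vs 0 + 7 = 7).
print(choose(["gym", "couch"], drives, {"health": 0.2, "comfort": 1.0}))
```

Shifting the weights moves the compromise point, which is one way to read handoflixue's "conflict resolution function layered on top": the separate drives stay fixed while the resolution layer changes the observed behavior.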
1Peterdjones10yThere isn't an instrumental motive for changing one's preferences. That doesn't add up to "never change your preferences" unless you assume that instrumentality -- "does it help me achieve anything" -- is the ultimate way of evaluating things. But it isn't: morality is. It is morally wrong to design better gas chambers.
1TimFreeman10yThe interesting question is still the one you didn't answer yet: I only see two possible answers, and only one of those seems likely to come from you (Peter) or Eugene.

The unlikely answer is "I wouldn't do anything different". Then I'd reply "So, morality makes no practical difference to your behavior?", and then your position that morality is an important concept collapses in a fairly uninteresting way. Your position so far seems to have enough consistency that I would not expect the conversation to go that way.

The likely answer is "If I'm willpower-depleted, I'd do the immoral thing I prefer, but on a good day I'd have enough willpower and I'd do the moral thing. I prefer to have enough willpower to do the moral thing in general." In that case, I would have to admit that I'm in the same situation, except with a vocabulary change. I define "preference" to include everything that drives a person's behavior, if we assume that they aren't suffering from false beliefs, poor planning, or purposeless behavior (like a seizure, for example). So if your behavior is controlled by a combination of preference and morality, then what I'm calling "preference" is the same as what you're calling "preference and morality". I am in the same situation in that when I'm willpower-depleted I do a poor job of acting upon consistent preferences (using my definition of the word), I do better when I have more willpower, and I want to have more willpower in general.

If I guessed your answer wrong, please correct me. Otherwise I'd want to fix the vocabulary problem somehow. I like using the word "preference" to include all the things that drive a person, so I'd prefer to say that your preference has two parts, perhaps an "amoral preference" which would mean what you were calling "preference" before, and "moral preference" would include what you were calling "morality" before, but perhaps we'd choose different words if you objected to those. 
The next question would be: ...and I have no
0Eugine_Nier10yFollow morality. One way to illustrate this distinction is using Eliezer's "murder pill". If you were offered a pill that would reverse and/or eliminate a preference, would you take it (possibly the offer includes paying you)? If the preference is something like preferring vanilla to chocolate ice cream, the answer is probably yes. If the preference is for people not to be murdered, the answer is probably no. One of the reasons this distinction is important is that because of the way human brains are designed, thinking about your preferences can cause them to change. Furthermore, this phenomenon is more likely to occur with high-level moral preferences than with low-level amoral preferences.
1TimFreeman10yIf that's a definition of morality, then morality is a subset of psychology, which probably isn't what you wanted. Now if the thoughts people had about moral preferences that make them change were actually empirically meaningful and consistent with observation, rather than verbal manipulation consisting of undefinable terms that can't be nailed down even with multiple days of Q&A, that would be worthwhile and not just a statement about psychology. But if we had such statements to make about morality, we would have been making them all this time and there would be clarity about what we're talking about, which hasn't happened.
-1Eugine_Nier10yThat's not a definition of morality but an explanation of one reason why the "murder pill" distinction is important.
-1Peterdjones10yBut preference itself is influenced by reasoning and experience. The preference theory focuses on proximate causes, but there are more distal ones too. I am not and never was using "preference" to mean something disjoint from morality. If some preferences are moral preferences, then the whole issue of morality is not disposed of by only talking about preferences. That is not an argument for nihilism or relativism. You could have an epistemology where everything is talked about as belief, and the difference between true belief and false belief is ignored. If by a straightforward answer you mean an answer framed in terms of some instrumental value that it fulfils, I can't do that. I can only continue to challenge the frame itself. Morality is already, in itself, the most important value. It isn't "made" important by some greater good.
1TimFreeman10yThere's a choice you're making here, differently from me, and I'd like to get clear on what that choice is and understand why we're making it differently.

I have a bunch of things I prefer. I'd rather eat strawberry ice cream than vanilla, and I'd rather not design higher-throughput gas chambers. For me those two preferences are similar in kind -- they're stuff I prefer and that's all there is to be said about it. You might share my taste in ice cream and you said you share my taste in designing gas chambers. But for you, those two preferences are different in kind. The ice cream preference is not about morality, but designing gas chambers is immoral and that distinction is important for you.

I hope we all agree that the preference not to design high-throughput gas chambers is commonly and strongly held, and that it's even a consensus in the sense that I prefer that you prefer not to design high-throughput gas chambers. That's not what I'm talking about. What I'm talking about is the question of why the distinction is important to you.

For example, I could define the preferences of mine that can be easily described without using the letter "s" to be "blort" preferences, and the others to be non-blort, and rant about how we all need to distinguish blort preferences from non-blort preferences, and you'd be left wondering "Why does he care?" And the answer would be that there is no good reason for me to care about the distinction between blort and non-blort preferences. The distinction is completely useless.

A given concept takes mental effort to use and discuss, so the decision to use or not use a concept is a pragmatic one: we use a concept if the mental effort of forming it and communicating about it is paid for by the improved clarity when we use it. The concept of blort preferences does not improve the clarity of our thoughts, so nobody uses it. The decision to use the concept of "morality" is like any other decision to define and use a concept. 
We should u
2Peterdjones10yYou've written quite a lot of words but you're still stuck on the idea that all importance is instrumental importance, importance for something that doesn't need to be important in itself. You should care about morality because it is a value, and values are definitionally what is important and what should be cared about. If you suddenly started liking vanilla, nothing important would change. You wouldn't stop being you, and your new self wouldn't be someone your old self would hate. That wouldn't be the case if you suddenly started liking murder or gas chambers. You don't now like people who like those things, and you wouldn't now want to become one. If we understand what is going on, we should make the choice correctly -- that is, according to rational norms. If morality means something other than the merely pragmatic, we should not label the pragmatic as the moral. And it must mean something different, because it is an open, investigatable question whether some instrumentally useful thing is also ethically good, whereas questions like "is the pragmatic useful" are trivial and tautologous.
3TimFreeman10yYou're not getting the distinction between morality-the-concept-worth-having and morality-the-value-worth-enacting. I'm looking for a useful definition of morality here, and if I frame what you say as a definition you seem to be defining a preference to be a moral preference if it's strongly held, which doesn't seem very interesting. If we're going to have the distinction, I like Eugene's proposal that a moral preference is one that's worth talking about better, but we need to make the distinction in such a way that something doesn't get promoted to being a moral preference just because people are easily deceived about it. There should be true things to say about it.
0Peterdjones10yBut what I actually gave as a definition is this: the concept of morality is the concept of ultimate value and importance. A concept which even the nihilists need so that they can express their disbelief in it. A concept which even social and cognitive scientists need so they can describe the behaviour surrounding it.
0TimFreeman10yYou are apparently claiming there is some important difference between a strongly held preference and something of ultimate value and importance. Seems like splitting hairs to me. Can you describe how those two things are different?
0Peterdjones10yJust because you do have a strongly held preference, it doesn't mean you should. The difference between true beliefs and fervently held ones is similar.
0TimFreeman10yOne can do experiments to determine whether beliefs are true, for the beliefs that matter. What can one do with a preference to figure out if it should be strongly held? If that question has no answer, the claim that the two are similar seems indefensible.
-2Peterdjones10yWhat makes them matter? Reason about it?
0TimFreeman10yEmpirical content. That is, a belief matters if it makes or implies statements about things one might observe. Can you give an example? I tried to make one at http://lesswrong.com/lw/5eh/what_is_metaethics/43fh, but it twisted around into revising a belief instead of revising a preference.
-2Peterdjones10ySo it doesn't matter if it only affects what you will do?
1TimFreeman10yIf I'm thinking for the purpose of figuring out my future actions, that's a plan, not a belief, since planning is relevant when I haven't yet decided what to do. I suppose beliefs about other people's actions are empirical. I've lost the relevance of this thread. Please state a purpose if you wish to continue, and if I like it, I'll reply.
1[anonymous]10yOkay, that seems clear enough that I'd rather pursue that than try to get an answer to any of my previous questions, even if all we may have accomplished here is to trade Eugene's evasiveness for Peter's. If you know that morality is the ultimate way of evaluating things, and you're able to use that to evaluate a specific thing, I hope you are aware of how you performed that evaluation process. How did you get to the conclusion that it is morally wrong to design better gas chambers? Execution techniques have improved over the ages. A guillotine is more compassionate than an axe, for example, since with an axe the executioner might need a few strokes, and the experience for the victim is pretty bad between the first stroke and the last. Now we use injections that are meant to be painless, and perhaps they actually are. In an environment where executions are going to happen anyway, it seems compassionate to make them happen better. Are you saying gas chambers, specifically, are different somehow, or are you saying that designing the guillotine was morally wrong too and it would have been morally preferable to use an axe during the time guillotines were used?
1[anonymous]10yI'm pretty sure he means to refer to high-throughput gas chambers optimized for purposes of genocide, rather than individual gas chambers designed for occasional use. He may or may not oppose the latter, but improving the former is likely to increase the number of murders committed.
0TimFreeman10yAgreed, so I deleted my post to avoid wasting Peter's time responding.
1Eugine_Nier10yLet's try a different approach. I have spent some time thinking about how to apply the ideas of Eliezer's metaethics sequence [http://wiki.lesswrong.com/wiki/Metaethics_sequence] to concrete ethical dilemmas. One problem that quickly comes up is that as PhilGoetz points out here [http://lesswrong.com/lw/55n/human_errors_human_values/], the distinction between preferences and biases is very arbitrary. So the question becomes how do you separate which of your intuitions are preferences and which are biases?
0TimFreeman10yWell, valid preferences look like they're derived from a utility function that says how much I prefer different possible future world-states, and uncertainty about the future should interact with the utility function in the proper way. Biases are everything else. I don't see how that question is relevant. I don't see any good reason for you to dodge my question about what you'd do if your preferences contradicted your morality. It's not like it's an unusual situation -- consider the internal conflicts of a homosexual Evangelist preacher, for example.
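TimFreeman's criterion -- a valid preference behaves like a utility function over future world-states, with uncertainty interacting with it "in the proper way" -- is the standard expected-utility rule. A minimal sketch, with outcomes, utilities, and probabilities invented purely for illustration:

```python
def expected_utility(lottery, utility):
    """lottery: {outcome: probability}; utility: {outcome: value}.
    The 'proper way' uncertainty interacts with a utility function:
    weight each outcome's utility by its probability and sum."""
    return sum(p * utility[o] for o, p in lottery.items())

utility = {"save_1": 1.0, "save_1000": 1000.0, "nothing": 0.0}

safe   = {"save_1": 1.0}                         # save one person for sure
gamble = {"save_1000": 0.002, "nothing": 0.998}  # tiny chance of saving 1000

# A preference that scales coherently with the stakes prefers the gamble:
# EU(gamble) = 0.002 * 1000 = 2.0 > EU(safe) = 1.0
assert expected_utility(gamble, utility) > expected_utility(safe, utility)
```

On this framing, a bias is then any systematic pattern in choices that no such probability-weighted utility function reproduces.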
0Peterdjones10yWhat makes your utility function valid? If that is just preferences, then presumably it is going to work circularly and just confirm your current preferences. If it works to iron out inconsistencies, or replace short-term preferences with long-term ones, that would seem to be the sort of thing that could be fairly described as reasoning.
2TimFreeman10yI don't judge it as valid or invalid. The utility function is a description of me, so the description either compresses observations of my behavior better than an alternative description, or it doesn't.

It's true that some preferences lead to making more babies or living longer than other preferences, and one may use evolutionary psychology to guess what my preferences are likely to be, but that is just a less reliable way of guessing my preferences than from direct observation, not a way to judge them as valid or invalid. A utility function that assigns utility to long-term outcomes rather than short-term outcomes might lead to better survival or baby-making, but it isn't more or less valid than one that cares about the short term. (Actually, if you only care about things that are too far away for you to effectively plan, you're in trouble, so long-term preferences can promote survival less than shorter-term ones, depending on the circumstances.)

This issue is confused by the fact that a good explanation of my behavior requires simultaneously guessing my preferences and my beliefs. The preference might say I want to go to the grocery store, and I might have a false belief about where it is, so I might go the wrong way, and the fact that I went the wrong way isn't evidence that I don't want to go to the grocery store. That's a confusing issue and I'm hoping we can assume for the purposes of discussion about morality that the people we're talking about have true beliefs.
2Peterdjones10yIf it were, it would include your biases, but you were saying that your UF determines your valid preferences as opposed to your biases. The question is whether everything in your head is a preference-like thing or a belief-like thing, or whether there are also processes such as reasoning and reflection that can change beliefs and preferences.
1TimFreeman10yI'm not saying it's a complete description of me. To describe how I think you'd also need a description of my possibly-false beliefs, and you'd also need to reason about uncertain knowledge of my preferences and possibly-false beliefs. In my model, reasoning and reflection can change beliefs and change the heuristics I use for planning. If a preference changes, then it wasn't a preference. It might have been a non-purposeful activity (the exact schedule of my eyeblinks, for example), or it might have been a conflation of a belief and a preference. "I want to go north" might really be "I believe the grocery store is north of here and I want to go to the grocery store". "I want to go to the grocery store" might be a further conflation of preference and belief, such as "I want to get some food" and "I believe I will be able to get food at the grocery store". Eventually you can unpack all the beliefs and get the true preference, which might be "I want to eat something interesting today".
0Peterdjones10yThat still doesn't explain what the difference between your preferences and your biases is. That's rather startling. Is it a fact about all preferences that they hold from birth to death? What about brain plasticity?
1TimFreeman10yIt's a term we're defining because it's useful, and we can define it in a way that it holds from birth forever afterward. Tim had the short-term preference dated around age 3 months to suck mommy's breast, and Tim apparently has a preference to get clarity about what these guys mean when they talk about morality dated around age 44 years. Brain plasticity is an implementation detail. We prefer simpler descriptions of a person's preferences, and preferences that don't change over time tend to be simpler, but if that's contradicted by observation you settle for different preferences at different times. I suppose I should have said "If a preference changes as a consequence of reasoning or reflection, it wasn't a preference". If the context of the statement is lost, that distinction matters.
1Peterdjones10ySo you are defining "preference" in a way that is clearly arbitrary and possibly unempirical...and complaining about the way moral philosophers use words?
0CuSithBell10yI agree! Consider, for instance, taste in particular foods. I'd say that enjoying, for example, coffee, indicates a preference. But such tastes can change, or even be actively cultivated (in which case you're hemi-directly altering your preferences). Of course, if you like coffee, you drink coffee to experience drinking coffee, which you do because it's pleasurable - but I think the proper level of unpacking is "experience drinking coffee", not "experience pleasurable sensations", because the experience being pleasurable is what makes it a preference in this case. That's how it seems to me, at least. Am I missing something?
0wedrifid10y"The proper way" being built in as a part of the utility function and not (necessarily) being a simple sum of the multiplication of world-state values by their probability.
0Eugine_Nier10yUm, no. Unless you are some kind of mutant who doesn't suffer from scope insensitivity [http://wiki.lesswrong.com/wiki/Scope_insensitivity] or any of the related biases your uncertainty about the future doesn't interact with your preferences in the proper way until you attempt to coherently extrapolate them. It is here that the distinction between a bias and a valid preference becomes both important and very arbitrary. Here is the example PhilGoetz gives in the article I linked above: I believe I answered your other question elsewhere in the thread [http://lesswrong.com/lw/5eh/what_is_metaethics/43ak].
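Eugine_Nier's point that raw intuitions fail to interact with probability and quantity "in the proper way" can be made concrete with scope insensitivity itself. In the classic surveys (e.g. willingness to pay to save migratory birds), stated valuations grew far slower than linearly in the number saved; modeling the insensitive valuation as roughly logarithmic is an illustrative assumption here, not survey data:

```python
import math

def linear_value(n):
    """A scope-sensitive valuation: twice the birds, twice the value."""
    return 1.0 * n

def insensitive_value(n):
    """A scope-insensitive intuition: value grows only with log(n).
    (Illustrative functional form, not fitted to any survey.)"""
    return 10.0 * math.log10(n)

# 200,000 birds vs 2,000 birds: the linear valuation scales by 100x,
# while the insensitive one scales by only about 1.6x.
print(linear_value(200_000) / linear_value(2_000))  # 100.0
print(insensitive_value(200_000) / insensitive_value(2_000))
```

Deciding whether the intuition reported by `insensitive_value` is a bias to be corrected toward `linear_value`, or a genuine preference to be kept, is exactly the arbitrary-seeming line-drawing problem the comment raises.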

That is sort of half true, but it feels like you're just saying that to say it, as there have been criticisms of this same line of reasoning that you haven't answered.

How about the fact that beliefs about physics actually pay rent? Do moral ones?

Why not try this: imagine an inquisitive nine-year-old asked you what you meant by "morality"; such a nine-year-old might not know what "define" means, but I expect you wouldn't refuse to explain morality on those grounds.

-1Peterdjones10yI would only have to point to the distinction between Good Things and Naughty Things which all children have drummed into them from a much earlier age. That is what makes the claim not to have an ordinary-language understanding of morality so unlikely.
7Cyan10yImagine your nine-year-old interlocutor pointing out that not all children have the same Good Things and Naughty Things drummed into them.
-2Peterdjones10ySo? You seem to think I am arguing for one particular theory.
4Cyan10yBecause of the above, I think you are making a claim that a singular Correct Theory of Morality exists. How would you explain that to a nine-year-old? That's the discussion we could be having.

We all speak English here to some degree.

The issue is that some words are floating, disconnected from anything in reality, and meaningless. Consider the question: do humans have souls?

What would it mean, in terms of actual experience, for humans to have souls? What is a soul? Can you understand how if someone refused to explain what a soul is, claiming it to be a basic thing which no other words can describe, it would be pretty confusing?

What would it mean, in terms of actual experience, for something to be "morally right"? What characteristics make it that way, and how do you know?

0Peterdjones10yTo disbelieve in souls, you have to know what "soul" means. You seem to have mistaken an issue of truth for one of meaning. I think you are going to have to put up with that unfortunate confusion, since you can't reduce everything to nothing. Something is morally right if it fulfils the Correct Theory of Morality. I'm not claiming to have that. However, I can recognise theories of morality, and I can do that with my ordinary-language notion of morality. (The theoretic is always based on the pre-theoretic. We do not reach the theoretic in one bound.) I'm not creating stumbling blocks for myself by placing arbitrary requirements on definitions, like insisting that they are both concrete and reductive.
1NMJablonski10yWhy do you believe there exists a Correct Theory of Morality?
1Eugine_Nier10yWhy do you believe there exists a Correct Theory of Physics? As Constant points out here [http://lesswrong.com/lw/5eh/what_is_metaethics/41ex], all the arguments based on reductionism that you're using could just as easily be used to argue that there is no correct theory of physics. One difference between physics and morality is that there is currently a lot more consensus about what the correct theory of physics looks like than what the correct theory of morality looks like. However, that is a statement about the current time; if you were to go back a couple centuries you'd find that there was as little consensus about the correct theory of physics as there is today about the correct theory of morality.
5Amanojack10yIt's not an argument by reductionism...it's simply trying to figure out how to interpret the words people are using - because it's really not obvious. It only looks like reductionism because someone asks, "What is morality?" and the answer comes: "Right and wrong," then "What should be done," then "What is admirable"... It is all moralistic language that, if it means anything at all, means the same thing.
-1Eugine_Nier10yWell the original argument [http://lesswrong.com/lw/5eh/what_is_metaethics/418n], way back in the thread, was NMJablonski arguing against the existence of a "Correct Theory of Morality" by demanding that Peter provide "a clear reductionist description of what [he's] talking about" while "tabooing words like 'ethics', 'morality', 'should', etc." My point is that NMJablonski's request is about as reasonable as demanding that someone arguing for the existence of a "Correct Theory of Physics" provide a clear reductionist description of what one means while tabooing words like 'physics', 'reality', 'exists', 'experience', etc.
4Amanojack10yFair enough, though I suspect that by asking for a "reductionist" description NMJablonski may have just been hoping for some kind of unambiguous wording.
-2Eugine_Nier10yMy point, and possibly Peter's, is that given our current state of knowledge about meta-ethics I can give no better definition of the words "should"/"right"/"wrong" than the meaning they have in everyday use. Note, following my analogy with physics, that historically we developed a systematic way for judging the validity of statements about physics, i.e., the scientific method, several centuries before developing a semi-coherent meta-theory of physics, i.e., empiricism and Bayesianism. With morality we're not even at the "scientific method" stage.
[anonymous]: This is consistent with Jablonski's point that "it's all preferences."
Eugine_Nier: In keeping with my physics analogy, saying "it's all preferences" about morality is analogous to saying "it's all opinion" about physics.

NMJablonski: Clearly there's a group of people who dislike what I've said in this thread, as I've been downvoted quite a bit.

I'm not perfectly clear on why. My only position at any point has been this:

I see a universe which contains intelligent agents trying to fulfill their preferences. Then I see conversations about morality and ethics talking about actions being "right" or "wrong". From the context and explanations, "right" seems to mean very different things. Like:

"Those actions which I prefer" or "Those actions which most agents in a particular place prefer" or "Those actions which fulfill arbitrary metric X"

Likewise, "wrong" inherits its meaning from whatever definition is given for "right". It makes sense to me to talk about preferences. They're important. If that's what people are talking about when they discuss morality, then that makes perfect sense. What I do not understand is when people use the words "right" or "wrong" independently of any agent's preferences. I don't see what they are referring to, or what those words even mean in that context.

Does anyone care to explain what I'm missing, or if there's something specific I did to elicit downvotes?

wedrifid: You signaled disagreement with someone about morality. What did you expect? :)
NMJablonski: Your explanation is simple and fits the facts! I like it :)
Perplexed: I don't know anything about downvotes, but I do think that there is a way of understanding 'right' and 'wrong' independently of preferences. But it takes a conceptual shift. Don't think of morality as a doctrine guiding you as to how to behave. Instead, imagine it as a doctrine teaching you how to judge the behavior of others (and to a lesser extent, yourself). Morality teaches you when to punish and reward (and when to expect punishment and reward). It is a second-order concept, and hence not directly tied to preferences.
XiXiDu: Sociology? Psychology? Game theory? Mathematics? What does moral philosophy add to the sciences that is useful, that helps us to dissolve confusion and understand the nature of reality?
Perplexed: Moral philosophy, like all philosophy, does nothing directly to illuminate the nature of reality. What it does is to illuminate the nature of confusion. How does someone who thinks that 'morality' is meaningless discuss the subject with someone who attaches meaning to the word? Answer: They talk to each other carefully and respectfully. What do you call the subject matter of that discussion? Answer: Metaethics. What do you call success in this endeavor? Answer: "Dissolving the confusion".
XiXiDu: Moral philosophy does not illuminate the nature of confusion, it is the confusion. I am asking, what is missing and what confusion is left if you disregard moral philosophy and talk about right and wrong in terms of preferences?
Perplexed: I'm tempted to reply that what is missing is the ability to communicate with anyone who believes in virtue ethics or deontological ethics, and therefore doesn't see how preferences are even involved. But maybe I am not understanding your point. Perhaps an example would help. Suppose I say, "It is morally wrong for Alice to lie to Bob." How would you analyze that moral intuition in terms of preferences? Whose preferences are we talking about here? Alice's, Bob's, mine, everybody else's? For comparison purposes, also analyze the claim "It is morally wrong for Bob to strangle Alice."
XiXiDu: Due to your genetically hard-coded intuitions about appropriate behavior within groups of primates, your upbringing, cultural influences, rational knowledge about the virtues of truth-telling and preferences involving the well-being of other people, you feel obliged to influence the interaction between Alice and Bob in a way that persuades Alice to do what you want, without her feeling inappropriately influenced by you, by signaling your objection to certain behaviors as an appeal to the order of a higher authority. If you say, "I don't want you to strangle Alice," Bob might reply, "I don't care what you want!" If you say, "Strangling Alice might have detrimental effects on your other preferences," Bob might reply, "I assign infinite utility to the death of Alice!" (which might very well be the case for humans in a temporary rage). But if you say, "It is morally wrong to strangle Alice," Bob might get confused and reply, "You are right, I don't want to be immoral!" Which is really a form of coercive persuasion, since when you say, "It is morally wrong to strangle Alice," you actually signal, "If you strangle Alice you will feel guilty." It is a manipulative method that might make Bob say, "You are right, I don't want to be immoral!", when what he actually means is, "I don't want to feel guilty!" Primates don't like to be readily controlled by other primates. To get them to do what you want you have to make them believe that, for some non-obvious reason, they actually want to do it themselves.
Perplexed: This sounds like you are trying to explain away the phenomenon, rather than explain it. At the very least, I would think, such a theory of morality needs to make some predictions or explain some distinctions. For example, what is it about the situation that causes me to try to influence Alice and Bob using moral arguments in these cases, whereas I use other methods of influence in other cases?
XiXiDu: Complex influences, like your culture and upbringing. That's also why some people don't say that it is morally wrong to burn a paperback book while others are outraged by the thought. And those differences and similarities can be studied, among other fields, in terms of cultural anthropology and evolutionary psychology. It needs a multidisciplinary approach to tackle such questions. But moral philosophy shouldn't be part of the solution because it is largely mistaken about cause and effect. Morality is an effect of our societal and cultural evolution, shaped by our genetic predisposition as primates living in groups. In this sense moral philosophy is a meme that is part of a larger effect and therefore can't be part of a reductionist explanation of itself. The underlying causes of cultural norms and our use of language can be explained by social and behavioural sciences, applied mathematics like game theory, computer science and linguistics.
Amanojack: Guilt works here, for example. (But XiXiDu covered that.) Social pressure also. Veiled threat and warning, too. Signaling your virtue to others as well. Moral arguments are so handy that they accomplish all of these in one blow. ETA: I'm not suggesting that you in particular are trying to guilt trip people, pressure them, threaten them, or signal. I'm saying that those are all possible explanations as to why someone might prefer to couch their arguments in moral terms: it is more persuasive (as Dark Arts) in certain cases. Though I reject moralist language if we are trying to have a clear discussion and get at the truth, I am not against using Dark Arts to convince Bob not to strangle Alice.
Jonathan_Graehl: Perplexed wrote earlier: Sometimes you'll want to explain why your punishment of others is justified. If you don't want to engage Perplexed's "moral realism", then either you don't think there's anything universal enough (for humans, or in general) in it to be of explanatory use in the judgments people actually make, or you don't think it's a productive system for manufacturing (disingenuous yet generally persuasive) explanations that will sometimes excuse you.
Amanojack: Assuming I haven't totally lost track of context here, I think I am saying that moral language works for persuasion (partially as Dark Arts), but is not really suitable for intellectual discourse.
Jonathan_Graehl: Okay. Whatever he hopes is real (but you think is only confused) will allow you to form persuasive arguments to similar people. So it's still worth talking about.
Amanojack: Virtue ethicists and deontologists merely express a preference for certain codes of conduct because they believe adhering to these codes will maximize their utility, usually via the mechanism of lowering their time preference [http://en.wikipedia.org/wiki/Time_preference]. ETA: And also, as XiXiDu points out, to signal virtue.
Amanojack: Upvoted because I strongly agree with the spirit of this post, but I don't think moral philosophy succeeds in dissolving the confusion. So far it has failed miserably, and I suspect that it is entirely unnecessary. That is, I think this is one field that can be dissolved away.
XiXiDu: Like if an atheist is talking to a religious person then the subject matter is metatheology?
NMJablonski: Which metrics do I use to judge others? There has been some confusion over the word "preference" in the thread, so perhaps I should use "subjective value". Would you agree that the only tools I have for judging others are subjective values? (This includes me placing value on other people reaching a state of subjective high value.) Or do you think there's a set of metrics for judging people which has some spooky, metaphysical property that makes it "better"?
XiXiDu: And why would that even matter as long as I am able to realize what I want without being instantly struck by thunder if I desire or do something that violates the laws of morality? If I live a happy and satisfied life of fulfilled preferences but constantly do what is objectively wrong, why exactly would that matter, to whom would it matter, and why would I care if I am happy and my preferences are satisfied? Is it some sort of game that I am losing, where those who are the most right win? What if I don't want to play that game, what if I don't care who wins?
Perplexed: Because it harms other people directly or indirectly. Most immoral actions have that property. To the person you harm. To the victim's friends and relatives. To everyone in the society which is kept smoothly running by the moral code which you flout. Because you will probably be punished, and that tends to not satisfy your preferences. If the moral code is correctly designed, yes. Then you are, by definition, irrational, and a sane society will eventually lock you up as being a danger to yourself and everyone else.
XiXiDu: Begging the question. Either that is part of my preferences or it isn't. Either society is instrumental to my goals or it isn't. Game theory? Instrumental rationality? Cultural anthropology? If I am able to realize my goals, satisfy my preferences, don't want to play some sort of morality game with agreed-upon goals and am not struck by thunder once I violate those rules, why would I care? What is your definition of irrationality? I wrote that if I am happy, able to reach all of my goals and satisfy all of my preferences while constantly violating the laws of morality, how am I irrational?
Jonathan_Graehl: Also, what did you mean by ... in response to "Because you will probably be punished, and that tends to not satisfy your preferences."? I think you mean that you should correctly predict the odds and disutility (over your life) of potential punishments, and then act rationally selfishly. I think this may be too computationally expensive in practice, and you may not have considered the severity of the (unlikely) event that you end up severely punished by a reputation of being an effectively amoral person. Yes, we see lots of examples of successful and happy unscrupulous people in the news. But consider selection effects (that contradiction of conventional moral wisdom excites people and sells advertisements).
XiXiDu: I meant that we already have a field of applied mathematics and science that talks about those things, so why do we need moral philosophy? I am not saying that it is a clear-cut issue that we, as computationally bounded agents, should abandon moral language, or that we even would want to do that. I am not advocating to reduce the complexity of natural language. But this community seems to be committed to reductionism, minimizing vagueness and the description of human nature in terms of causal chains. I don't think that moral philosophy fits this community. This community doesn't talk about theology either, it talks about probability and Occam's razor. Why would it talk about moral philosophy when all of it can be described in terms of cultural anthropology, sociology, evolutionary psychology and game theory?
timtyler: It is a useful umbrella term - rather like "advertising".
Peterdjones: Can all of it be described in those terms? Isn't that a philosophical claim?
Jonathan_Graehl: There's nothing to dispute. You have a defensible position. However, I think most humans have as part of what satisfies them (they may not know it until they try it), the desire to feel righteous, which can most fully be realized with a hard-to-shake belief. For a rational person, moral realism may offer this without requiring tremendous self-delusion. (disclaimer: I haven't tried this). Is it worth the cost? Probably you can experiment. It's true that if you formerly felt guilty and afraid of punishment, then deleting the desire to be virtuous (as much as possible) will feel liberating. In most cases, our instinctual fears are overblown in the context of a relatively anonymous urban society. Still, reputation matters, and you can maintain it more surely by actually being what you present yourself as, rather than carefully (and eventually sloppily and over-optimistically) weighing each case in terms of odds of discovery and punishment. You could work on not feeling bad about your departures from moral perfection more directly, and then enjoy the real positive feeling-of-virtue (if I'm right about our nature), as well as the practical security. The only cost then would be lost opportunities to cheat. It's hard to know who to trust as having honest thoughts and communication on the issue, rather than presenting an advantageous image, when so much is at stake. Most people seem to prefer tasteful hypocrisy and tasteful hypocrites. Only those trying to impress you with their honesty, or those with whom you've established deep loyalties, will advertise their amorality.
Peterdjones: It's irrational to think that the evaluative buck stops with your own preferences.
nshepperd: Maybe he doesn't care about the "evaluative buck", which while rather unfortunate, is certainly possible.
Peterdjones: If he doesn't care about rationality, he is still being irrational.
Perplexed: I'm claiming that there is a particular moral code which has the spooky game-theoretical property that it produces the most utility for you and for others. That is, it is the metric which is Pareto optimal and which is also a 'fair' bargain.
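The Pareto-optimality property Perplexed invokes can be sketched concretely. The payoffs below are a hypothetical Prisoner's Dilemma, not anything specified in the thread; an outcome is Pareto optimal if no alternative makes some player better off without making another worse off:

```python
# (my_utility, your_utility) for each pair of actions in a toy
# Prisoner's Dilemma. All numbers are illustrative assumptions.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def dominates(a, b):
    """True if payoff profile a is at least as good for everyone as b,
    and strictly better for at least one player."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Keep every outcome that no other outcome dominates.
pareto_optimal = {
    outcome for outcome, p in payoffs.items()
    if not any(dominates(q, p) for q in payoffs.values())
}

# Mutual defection (1, 1) is dominated by mutual cooperation (3, 3),
# so it is the one outcome that is NOT Pareto optimal.
print(sorted(pareto_optimal))
```

Note that three of the four outcomes are Pareto optimal here; selecting among them (the symmetric mutual-cooperation point, say) is where the separate notion of a 'fair' bargain would come in.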
NMJablonski: So you're saying that there's one single set of behaviors, which, even though different agents will assign drastically different values to the same potential outcomes, balances their conflicting interests to provide the most net utility across the group. That could be true, although I'm not convinced. Even if it is, though, the optimal strategy will change if the net values across the group change. The only point I have ever tried to make in these threads is that the origin of any applicable moral value must be the subjective preferences of the agents involved. The reason any agent would agree to follow such a rule set is if you could demonstrate convincingly that such behaviors maximize that agent's utility. It all comes down to subjective values. There exists no other motivating force.
Perplexed: True, but that may not be as telling an objection as you seem to think. For example, suppose you run into someone (not me!) who claims that the entire moral code is based on the 'Golden Rule' of "Do unto others as you would have others do unto you." Tell that guy that moral behavior changes if preferences change. He will respond "Well, duh! What is your point?"
NMJablonski: There are people who do not recognize this. It was, in fact, my point. Edit: Hmm, did I say something rude, Perplexed?
Perplexed: Not to me. I didn't downvote, and in any case I was the first to use the rude "duh!", so if you were rude back I probably deserved it. Unfortunately, I'm afraid I still don't understand your point. Perhaps you were rude to those unnamed people who you suggest "do not recognize this".
NMJablonski: I think we may have reached the somewhat common on LW point where we're arguing even though we have no disagreement.
Jonathan_Graehl: It's easy to bristle when someone in response to you points out something you thought it was obvious that you knew. This happens all the time when people think they're smart :)
Amanojack: I'm fond of including clarification like, "subjective values (values defined in the broadest possible sense, to include even things like your desire to get right with your god, to see other people happy, to not feel guilty, or even to 'be good')." Some ways I've found to dissolve people's language back to subjective utility: 1. If someone says something is good, right, bad, or wrong, ask, "For what purpose?" 2. If someone declares something immoral, unjust, unethical, ask, "So what unhappiness will I suffer as a result?" But use sparingly, because there is a big reason [http://lesswrong.com/lw/5i7/on_being_okay_with_the_truth/] many people resist dissolving this confusion.
AlephNeil: Yes! That's a point that I've repeated so often to so many different people [not on LW, though] that I'd more-or-less "given up" - it began to seem as futile as swatting flies in summer. Maybe I'll resume swatting now I know I'm not alone.
Swimmer963: This is mainly how I use morality. I control my own actions, not the actions of other people, so for me it makes sense to judge my own actions as good or bad, right or wrong. I can change them. Judging someone else changes nothing about the state of the world unless I can persuade them to act differently.
[anonymous]: Avoiding a person (a) does not (necessarily) persuade them to act differently, but (b) definitely changes the state of the world. This is not a minor nitpicking point. Avoiding people is also called social ostracism, and it's a major way that people react to misbehavior. It has the primary effect of protecting themselves. It often has the secondary effect of convincing the ostracized person to improve their behavior.
Swimmer963: Then I would consider that a case where I could change their behaviour. There are instances where avoiding someone would bother them enough to have an effect, and other cases where it wouldn't.
[anonymous]: Avoiding people who misbehave will change the state of the world even if that does not affect their behavior. It changes the world by protecting you. You are part of the world.
Perplexed: Yes, but if you judge a particular action of your own to be 'wrong', then why should you avoid that action? The definition of wrong that I supply solves that problem. By definition if an action is wrong, then it is likely to elicit punishment. So you have a practical reason for doing right rather than doing wrong. Furthermore, if you do your duty and reward and/or punish other people for their behavior, then they too will have a practical reason to do right rather than wrong. Before you object "But that is not morality!", ask yourself how you learned the difference between right and wrong.
Swimmer963: It's a valid point that I probably learned morality this way. I think that's actually the definition of 'preconventional' morality: it's based on reward/punishment. Maybe all my current moral ideas have roots in that childhood experience, but they aren't covered by it anymore. There are actions that would be rewarded by most of the people around me, but which I avoid because I consider there to be a "better" alternative. (I should be able to think of more examples of this, but I guess one is laziness at work. I feel guilty if I don't do the cleaning and maintenance that needs doing even though everyone else does almost nothing. I also try to follow a "golden rule" that if I don't want something to happen to me, I won't do it to someone else, even if the action is socially acceptable amidst my friends and wouldn't be punished.)
Perplexed: Ah. Thanks for bringing up the Kohlberg stages [http://en.wikipedia.org/wiki/Kohlberg%27s_stages_of_moral_development] - I hadn't been thinking in those terms. The view of morality I am promoting here is a kind of meta-pre-conventional viewpoint. That is, morality is not 'that which receives reward and punishment', it is instead 'that which (consequentially) ought to receive reward and punishment, given that many people are stuck at the pre-conventional level'.
Swimmer963: How many people? I think (I remember reading in my first-year psych textbook) that most adults functioning at a "normal" level in society are at the conventional level: they have internalized whatever moral standards surround them and obey them as rules, rather than thinking directly of punishment or reward. (They may still be thinking indirectly of punishment and reward; a conventionally moral person obeys the law because it's the law and it's wrong to break the law, implicitly because they would be punished if they did.) I'm not really sure how to separate how people actually reason on moral issues, versus how they think they do, and whether the two are often (or ever???) the same thing.
Perplexed: How many people are stuck at that level? I don't know. How many people must be stuck there to justify the use of punishment as deterrent? My gut feeling is that we are not punishing too much unless the good done (to society) by deterrence is outweighed by the evil done (to the 'criminal') by the punishment. And also remember that we can use carrots as well as sticks. A smile and a "Thank you" provide a powerful carrot to many people. How many? Again, I don't know, but I suspect that it is only fair to add these carrot-loving pre-conventionalists in with the ones who respond only to sticks.
Perplexed: Cool! Swat away. Though I'm not particularly happy with the metaphor.
Marius: Assuming Amanojack explained your position correctly, then there aren't just people fulfilling their preferences. There are people doing all kinds of things that fulfill or fail to fulfill their preferences - and, not entirely coincidentally, which bring happiness and grief to themselves or others. So then a common reasonable definition of morality (that doesn't involve the word preferences) is that set of habits that are most likely to bring long-term happiness to oneself and those around one.
NMJablonski: You missed a word in my original. I said that there were agents trying to fulfill their preferences. Now, per my comment at the end of your subthread with Amanojack, I realize that the word "preferences" may be unhelpful. Let me try to taboo it: There are intelligent agents who assign higher values to some futures than others. I observe them generally making an effort to actualize those futures, but sometimes failing due to various immediate circumstances, which we could call cognitive overrides. What I mean by that is that these agents have biases and heuristics which lead them to poorly evaluate the consequences of actions. Even if a human sleeping on the edge of a cliff knows that the cliff edge is right next to him, he will jolt if startled by noise or movement. He may not want to fall off the cliff, but the jolt reaction occurs before he is able to analyze it. Similarly, under conditions of sufficient hunger, thirst, fear, or pain, the analytical parts of the agent's mind give way to evolved heuristics. If that's how you would like to define it, that's fine. Would you agree then, that the contents of that set of habits are contingent upon what makes you and those around you happy?
Marius: I suspect it's a matter of degree rather than either-or. People sleeping on the edges of cliffs are much less likely to jolt when startled than people sleeping on soft beds, but not 0% likely. The interplay between your biases and your reason is highly complex. Yes; absolutely. I suspect that a coherent definition of morality that isn't contingent on those will have to reference a deity.
NMJablonski: We are, near as I can tell, in perfect agreement on the substance of this issue. Aumann would be proud. :)
Marius: I don't understand what you mean by preferences when you say "intelligent agents trying to fulfill their preferences". I have met plenty of people who were trying to do things contrary to their preferences. Perhaps before you try (or someone tries for you) to distinguish morality from preferences, it might be helpful to distinguish precisely how preferences and behavior can differ?
Amanojack: Example? I prefer not to stay up late, but here I am doing it. It's not that I'm acting against my preferences, because my current preference is to continue typing this sentence. It's simply that English doesn't differentiate very well between "current preferences" = "my preferences right this moment" and "current preferences" = "preferences I have generally these days." Seinfeld said it best [http://www.youtube.com/watch?v=hb63PdobcZ0].
Marius: But I want an example of people acting contrary to their preferences; you're giving one of yourself acting according to your current preferences. Hopefully, NMJablonski has an example of a common action that is genuinely contrary to the actor's preferences. Otherwise, the word "preference" simply means "behavior" to him and shouldn't be used by him. He would be able to simplify to "the actions I prefer are the actions I perform," or "morality is just behavior," which isn't very interesting to talk about.
Amanojack: "This-moment preferences" are synonymous with "behavior," or more precisely, "(attempted/wished-for) action." In other words, in this moment, my current preferences = what I am currently striving for. Jablonski seems to be using "morality" to mean something more like the general preferences that one exhibits on a recurring basis, not this-moment preferences. And this is a recurring theme: that morality is questions like, "What general preferences should I cultivate?" (to get more enjoyment out of life)
Marius: Ok, so if I understand you correctly: It is actually meaningful to ask "what general preferences should I cultivate to get more enjoyment out of life?" If so, you describe two types of preference: the higher-order preference (which I'll call a Preference) to get enjoyment out of life, and the lower-order "preference" (which I'll call a Habit or Current Behavior rather than a preference, to conform to more standard usage) of eating soggy bland french fries if they are sitting in front of you regardless of the likelihood of delicious pizza arriving. So because you prefer to save room for delicious pizza yet have the Habit of eating whatever is nearby and convenient, you can decide to change that Habit. You may do so by changing your behavior today and tomorrow and the day after, eventually forming a new Habit that conforms better to your preference for delicious foods. Am I describing this appropriately? If so, by the above usage, is morality a matter of Behavior, Habit, or Preference?
Amanojack: Sounds fairly close to what I think Jablonski is saying, yes. Preference isn't the best word choice. Ultimately it comes down to realizing that I want different things at different times, but in English future wanting is sometimes hard to distinguish from present wanting, which can easily result in a subtle equivocation. This semantic slippage is injecting confusion into the discussion. Perhaps we have all had the experience of thinking something like, "When 11pm rolls around, I want to want to go to sleep." And it makes sense to ask, "How can I make it so that I want to go to sleep when 11pm rolls around?" Sure, I presently want to go to sleep early tonight, but will I want to then? How can I make sure I will want to? Such questions of pure personal long-term utility seem to exemplify Jablonski's definition of morality.
Marius: Ok cool, replying to the original post then.
NMJablonski: Oops, I totally missed this subthread. Amanojack has, I think, explained my meaning well. It may be useful to reduce down to physical brains and talk about actual computational facts (i.e. utility function) that lead to behavior rather than use the slippery words "want" or "preference".
Amanojack: Good idea. Like, "My present utility function calls for my future utility function to be such and such"?
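Amanojack's phrasing can be made concrete with a toy sketch (all names and numbers below are hypothetical): a present utility function that ranks which disposition, i.e. which future utility function, the agent would prefer its later self to have, echoing the 11pm example above.

```python
# Toy model of "my present utility function calls for my future utility
# function to be such and such". The 9pm self ranks not just actions but
# the dispositions its 11pm self could have.

def utility_9pm(outcome):
    # The 9pm self values being well-rested tomorrow.
    return {"asleep_by_11": 10, "up_past_midnight": 2}[outcome]

# Two candidate dispositions ("future utility functions") for the 11pm
# self, mapped to the outcome each predictably produces.
future_dispositions = {
    "wants_sleep":    "asleep_by_11",
    "wants_browsing": "up_past_midnight",
}

# The present self prefers to cultivate whichever future disposition
# leads to the outcome it currently ranks highest.
best = max(future_dispositions, key=lambda d: utility_9pm(future_dispositions[d]))
print(best)  # the 9pm self prefers its 11pm self to have "wants_sleep"
```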
1NMJablonski10yI replied to Marius higher up in the thread with my efforts at preference-taboo.
-2Peterdjones10ySame here. It doesn't mean any of those things, since any of them can be judged wrong. Morality is about having the right preferences, as rationality is about having true beliefs. Do you think the sentence "there are truths no-one knows" is meaningful?
2NMJablonski10yI understand what it would mean to have a true belief, as truth is noticeably independent of belief. I can be surprised, and I can anticipate. I have an understanding of a physical world of which I am part, and which generates my experiences. It does not make any sense for there to be some "correct" preferences. Unlike belief, where there is an actual territory to map, preferences are merely a byproduct of the physical processes of intelligence. They have no higher or divine purpose which demands certain preferences be held. Evolution selects for those which aid survival, and it doesn't matter if survival means aggression or cooperation. The universe doesn't care. I think you and other objective moralists in this thread suffer from extremely anthropocentric thinking. If you rewind the universe to a time before there are humans, in a time of early expansion and the first formation of galaxies, does there exist then the "correct" preferences that any agent must strive to discover? Do they exist independent of what kinds of life evolve in what conditions? If you are able to zoom out of your skull, and view yourself and the world around you as interesting molecules going about their business, you'll see how absurd this is. Play through the evolution of life on a planetary scale in your mind. Be aware of the molecular forces at work. Run it on fast forward. Stop and notice the points where intelligence is selected for. Watch social animals survive or die based on certain behaviors. See the origin of your own preferences, and why they are so different from some other humans. Objective morality is a fantasy of self-importance, and a hold-over from ignorant quasi-religious philosophy which has now cloaked itself in scientific terms and hides in university philosophy departments. Physics is going to continue to play out. The only agents who can ever possibly care what you do are other physical intelligences in your light cone.
Vladimir_Nesov: For the record, I think in this thread Eugine_Nier follows a useful kind of "simple truth", not making errors as a result, while some of the opponents demand sophistication in lieu of correctness.

NMJablonski: I think we're demanding clarity and substance, not sophistication. Honestly, I feel like one of the major issues with moral discussions is that huge sophisticated arguments can emerge without any connection to substantive reality. I would really appreciate it if someone would taboo the words "moral", "good", "evil", "right", "wrong", "should", etc., and try to make the point using simpler concepts that have less baggage and ambiguity.

Vladimir_Nesov: Clarity can be difficult. What do you mean by "truth"? [http://yudkowsky.net/rational/the-simple-truth]

NMJablonski: I mean it in precisely the sense that The Simple Truth does. Anticipation control.

Vladimir_Nesov: That's not the point. You must use your heuristics even if you don't know how they work, and avoid demanding to know how they work, or how they should work, as a prerequisite to being allowed to use them. Before developing technical ideas about what it means for something to be true, or what it means for something to be right, you need to allow yourself to recognize when something is true, or is right.

NMJablonski: I'm sorry, but if we had no knowledge of brains, cognition, and the nature of preference, then sure, I'd use my feelings of right or wrong as much as the next guy. But that doesn't make them objectively true. Likewise, just because I intuitively feel like I have a time-continuous self, that doesn't make consciousness fundamental. As an agent, having knowledge of what I am, and of what causes my experiences, changes my simple reliance on heuristics into a more accurate scientific exploration of the truth.

Vladimir_Nesov: Just make sure that the particular piece of knowledge you demand is indeed available, and not, say, just the thing you are trying to figure out.

NMJablonski: (Nod) I still think it's a pretty simple case here. Is there a set of preferences which all intelligent agents are compelled by some force to adopt? Not as far as I can tell.

Peterdjones: Morality doesn't work like physical law either. Nobody is compelled to be rational, but people who do reason can agree about certain things. That includes moral reasoning.
nshepperd: I think we should move this conversation back out of the other post [http://lesswrong.com/lw/m8/the_amazing_virgin_pregnancy/41xb], where it really doesn't belong. Can you clarify what you mean by this? For what X are you saying "All agents that satisfy X must follow morality."?

TheOtherDave: If you're moving it anyway, I would recommend moving it here [http://lesswrong.com/lw/ho/consolidated_nature_of_morality_thread/] instead.

Peterdjones: I'm saying that in "to be moral you must follow whatever rules constitute morality" the "must" is a matter of logical necessity, as opposed to the two interpretations of compulsion considered by NMJ: physical necessity, and edict.

JoshuaZ: You still haven't explained, in this framework, how one gets that people "should" be moral any more than people "should" play chess. If morality is just another game, then it loses all the force you associate with it, and it seems clear that you are distinguishing between chess and morality.

nshepperd: Hmm... This is reminiscent of Eliezer's (and my) metaethics [http://wiki.lesswrong.com/wiki/Metaethics_sequence]. In particular, I would say that "the rules that constitute morality" are, by the definition embedded in my brain, some set which I'm not exactly sure of the contents of, but which definitely includes {kindness, not murdering, not stealing, allowing freedom, ...}. (Well, it may actually be a utility function, but sets are easier to convey in text.) In that case, "should", "moral", "right" and the rest are all just different words for "the object is in the above set (which we call morality)". And then "being moral" means "following those rules" as a matter of logical necessity, as you've said. But this depends on what you mean by "the rules constituting morality", on which you haven't said whether you agree. What do you think?

NMJablonski: What determines the contents of the set / details of the utility function?

nshepperd: The short answer is: my/our preferences (suitably extrapolated). The long answer is: it exists as a mathematical object regardless of anyone's preference, and one can judge things by it even in an empty universe. The reason we happen to care about this particular object is that it embodies our preferences, and we can find out exactly what object we are talking about by examining our preferences. It really adds up to the same thing, but if one only heard the short answer one might think it was about preferences, rather than described by them. But anyway, I think I'm mostly trying to summarise the metaethics sequence by this point :/ (probably wrongly :p)

NMJablonski: I see what you mean, and I don't think I disagree. I think one more question will clarify. If your / our preferences were different, would the mathematical set / utility function you consider to be morality be different also? Namely, is the set of "rules that constitute morality" contingent upon what an agent already values (suitably extrapolated)?

nshepperd: No. On the other hand, me!pebble-sorter would have no interest in morality at all, and would go on instead about how p-great p-morality is. But I wouldn't mix up p-morality with morality.

NMJablonski: So, you're defining "morality" as an extrapolation from your preferences now, and if your preferences change in the future, that future person would care about what your present self might call futureYou-morality, even if future you insists on calling it "morality"?
JGWeissman: No matter what opinions anyone holds about gravity, objects near the surface of the earth not subject to other forces accelerate towards the earth at 9.8 meters per second per second. This is an empirical fact about physics, and we know ways our experience could be different if it were wrong. Do you have an example of a fact about morality, independent of preferences, such that we could notice if it is wrong?

Amanojack: I don't think you can explicate such a connection, especially not without any terms defined. In fact, it is just utterly pointless to try to develop a theory in a field that hasn't even been defined in a coherent way. It's not close to being defined, either. For example, "Is abortion morally wrong?" combines about 12 possible questions into one, because it has at least that many interpretations. Choose one, then we can study that. I just can't see how otherwise rationality-oriented people can put up with such extreme vagueness. There is almost zero actual communication happening in this thread, in the sense of actually expressing which interpretation of moral language anyone is taking. And once that starts happening, it will cover way too many topics to ever reach a resolution. We're simply going to have to stop compressing all these disparate-but-subtly-related concepts into a single field, taboo all the moralist language, and hug some queries (if any important ones actually remain).

Eugine_Nier: In any science I can think of, people began developing it using intuitive notions, only being able to come up with definitions after substantial progress had been made.

TimFreeman: You can assume that the words have no specific meaning and are used to signal membership in a group. This explains why the flowchart in the original post has so many endpoints about what morality might mean. It explains why there seems to be no universal consensus on which specific actions are moral and which ones are not. It also explains why people have such strong opinions about morality despite the fact that statements about morality are not subject to empirical validation.

TimFreeman: No, the reductionist description of the Correct Theory of Physics eventually involves pointing at lab equipment. There is no lab equipment for morality, so the analogy is not valid.

Eugine_Nier: I could point a gun at your head and ask you to explain why I shouldn't pull the trigger.

TimFreeman: That scenario doesn't lead to discovering the truth. If I deceive you with bullshit and you don't pull the trigger, that's a victory for me. I invite you to try again, but next time pick an example where the participants are incentivized to make true statements. ETA: ...unless the truth we care about is just which flavors of bullshit will persuade you not to pull the trigger. If that's what you mean by morality, you probably agree with me that it is just social signaling.

Desrtopa: And if he gave a true moral argument, you would have to accept it [http://lesswrong.com/lw/rn/no_universally_compelling_arguments/]? How would you distinguish a true argument from a merely persuasive one?

Eugine_Nier: Like I mentioned elsewhere in this thread, the "No Universally Compelling Argument" post you cite applies equally well to physical and even mathematical facts (in fact, that was what Eliezer was mainly referring to in that post). Indeed, the main point of that sequence is that just because there are no universally compelling arguments doesn't mean truth doesn't exist. As Eliezer mentions in Where Recursive Justification Hits Bottom [http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/]:

Desrtopa: A formal proof is still a proof, though, although nothing mandates that a listener must accept it. A mind can very well contain an absolute dismissal mechanism, or optimize for something other than correctness. We can understand what sort of assumptions we're making when we derive information from mathematical axioms, or the axioms of induction, and how further information follows from that. But what assumptions are we making that would allow us to extrapolate absolute moral facts? Does our process give us any way to distinguish them from preferences?

NMJablonski: A correct theory of physics would inform my anticipations.

Eugine_Nier: Please, taboo "anticipations".

NMJablonski: Replace anticipations with: my ability, as a mind (subjective observer), to construct an isomorphism in memory that corresponds to future experiences.

Peterdjones: I think "X is what the correct theory of X says" is true for all X. The Correct Theory can say "Nothing", of course.

Refusal to define the key terms that make or break your argument never ends well.

Why should we think that there are categorical rights and wrongs?

I just don't see any convincing reason to believe they exist.

EDIT: Not to mention, it isn't clear what it would mean - in a real physical sense - for something to be categorically right or wrong.

Can you demonstrate that what you just said is true?

EDIT: And perhaps provide a definition of "ought"?

Your substantive point is nonsensical. My physical, real world understanding of intelligent agents includes preferences. It does not include anything presently labeled "morality" and I have no idea what I would apply that label to.

I don't think you have anything concrete down there that you're talking about (I'd be excited to be wrong about this). So you can do your little philosophers' dance in a world of poorly anchored words, but I'm not going to take you seriously until you start talking about reality.

[anonymous]:

I don't understand the question, nor why you singled out that fragment.

Eugine_Nier: When you say "Even if there's no such thing as objective right and wrong" you're still implicitly presuming a default morality, namely ethical egoism.

Wait. So you don't believe in an objective notion of morality, in the sense of a morality that would be true even if there were no people? Instead, you think of morality as, like, a set of reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that includes in their "selfishness" a desire for the well-being of others?

Peterdjones: Everything is non-objective for some value of "objective". It is doubtful that there are mathematical truths without mathematicians. But that does not make math as subjective as art.

CuSithBell: Okay. The distinction I am drawing is: are moral facts something "out there" to be discovered, self-justifying, etc., or are they facts about people, their minds, their situations, and their relationships? Could you answer the question for that value of "objective"? Or, if not, could you answer the question by ignoring the word "objective" or providing a particular value for it?

Peterdjones: The second is closer, but there is still the issue of the fact-value divide. ETA: I have a substantive pre-written article on this, but where am I going to post it with my karma...?

CuSithBell: I translate that as: it's better to talk about "moral values" than "moral facts" (moral facts being facts about what moral values are, I guess), and moral values are (approximately) reasonable principles a person can figure out that prevent their immediate desires from stomping on their well-being, and/or that include in their "selfishness" a desire for the well-being of others. Something like that? If not, could you translate for me instead?

Peterdjones: I think the fact that moral values apply to groups is important.

CuSithBell: I take this to mean that, other than that, you agree. (This is the charitable reading, however. You seem to be sending strong signals that you do not wish to have a productive discussion. If this is not your intent, be careful - I expect that it is easy to interpret posts like this as sending such signals.) If this is true, then I think the vast majority of the disagreements you've been having in this thread have been due to unnecessary miscommunication.

"Wrong" meaning what?

Would I prefer the people around me not be bloodthirsty? Yes, I would prefer that.

Peterdjones: Can people reason that bloodthirst is not a good preference to have...?

[anonymous]: Even if there's no such thing as objective right and wrong, they might easily be able to reason that being bloodthirsty is not in their best selfish interest.

wedrifid: For me, now, it isn't practical. In other circumstances it would be. It need not ever be a terminal goal, but it could be an instrumental goal built in deeply.

Do you think mathematical statements are true and false? Do you think mathematics has an actual territory?

Mathematics is not Platonically real. If it were, we would get Tegmark IV, and then every instant of sensible, ordered universe would be evidence against it, unless we are Boltzmann brains. So, no, mathematics does not have an actual territory. It is an abstraction of physical behaviors that intelligences can use because intelligences are also physical. Mathematics works because we can perform isomorphic physical operations inside our brains.

It is plainly the case

jimrandomh: Only if is-real is a boolean. If it's a number, then mathematics can be "platonically real" without us being Boltzmann brains.

NMJablonski: Upvoted. That's a good point, but also a whole other rabbit hole. Do you think morality is objective?

[anonymous]: As opposed to what? Subjective? What are the options? Because that helps to clarify what you mean by "objective". Prices are created indirectly by subjective preferences and they fluctuate, but if I had to pick between calling them "subjective" or calling them "objective" I would pick "objective", for a variety of reasons.

jimrandomh: No; morality reduces to values that can only be defined with respect to an agent, or a set of agents plus an aggregation process. However, almost all of the optimizing agents (humans) that we know about share some values in common, which creates a limited sort of objectivity, in that most of the contexts we would define morality with respect to agree qualitatively with each other, which usually allows people to get away with failing to specify the context.

NMJablonski: Upvoted. I think you could get a decent definition of the word "morality" along these lines.

Peterdjones: A person can know that by reasoning about it. If you think there is nothing wrong with having a preference for murder, it is about time you said so. It changes a lot.

NMJablonski: It still isn't clear what it means for a preference for murder to be "wrong"! So far I can only infer your definition of "wrong" to be "not among the correct preferences" ... but you still haven't explained to us why you think there are correct preferences, besides stamping your foot and saying over and over again "There are obviously correct preferences" even when many people do not agree. I see no reason to believe that there is a set of "correct" preferences to check against.

What is weasel-like with "near the surface of the earth"?

Eugine_Nier: In this context, it's as "weasel-like" as "innocent". In the sense that both are fudge factors you need to add to the otherwise elegant statement to make it true.

But earlier you indicated that asking what a woojit is requires accepting the notion of woojits as coherent.

things any adult English speaker knows

... while no two adult English speakers agree on what precisely those things are.

Peterdjones: ...although they will agree approximately. "It's not maximally precise from the get-go" is a generalised counterargument.

JoshuaZ: It is not at all obvious that this is a good enough approximation to deal with any interesting situation.

I'm not sure it's possible for my example to be wrong any more than it's possible for 2+2 to equal 3.

What would it take to convince you your example is wrong?

Note how "2+2=4" has observable consequences:

Suppose I got up one morning, and took out two earplugs, and set them down next to two other earplugs on my nighttable, and noticed that there were now three earplugs, without any earplugs having appeared or disappeared - in contrast to my stored memory that 2 + 2 was supposed to equal 4. Moreover, when I visualized the process in my own mind,


I would be happy to continue down this line a ways longer if you would like, and we could get all the way down to the two of us in the same physical location rebuilding the concept of induction. I am confident that if necessary we could do that for "anticipations" and build our way back up. I am not confident that "morality" as it has been used here actually connects to any solid surface in reality, unless it ends up meaning the same thing as "preferences".

Do you disagree?

Why should we think that there are categorical rights and wrongs?

I just don't see any convincing reason to believe they exist.

We do think there are categorical rights and wrongs [...]

What you should notice about this exchange is that you've made an incorrect prediction, and that therefore there might be something wrong with your model.

Peterdjones: I suppose you mean I incorrectly roped in NMJ. But I don't think s/he is statistically significant, and then there is the issue of sincerity. Does NMJ really think it is good to design an improved gas chamber?

CuSithBell: What I mean is that you predicted that "we" think there are categorical rights and wrongs, and you were incorrect (more than just NMJablonski disagree with you). Moreover, the fact that you seem to think "is it good to design an improved gas chamber" is inherently about "categorical rights and wrongs" indicates either dishonest argumentation or a failure to understand the position of your interlocutor.

Peterdjones: I didn't predict anything about what my interlocutors think: I made an accurate comment about ordinary people at large. I think what I said is that it is about categorical rights and wrongs if it is about anything. NMJ seems to think it is about nothing. If you think it is about something else, you need to say what: I cannot guess.

CuSithBell: You cannot guess? Do you not see the irony in making this request? Here is the situation: people often use a single word (such as 'good') to mean many different things. Thus, if you wish to use the word to mean something in particular - especially in an argument about that word! - you might have to define your own meaning. Besides - the behemoth Opal ("ordinary people at large") is a poor judge of many things.

Peterdjones: Making the categorical/hypothetical distinction is a way of refining the meaning. I'm already there (although I am getting accused of pedantry for my efforts).

You haven't demonstrated that the basis for every ought statement is what you believe to be correct with respect to your goals.

Imagine your friend tells you that he found a new solution to reach one of your goals. If you doubt that his solution is better than your current solution, then you won't adopt your friend's solution.

It is true that both your solutions might be incorrect, that there might exist a correct solution that you ought (would want) to embrace if you knew about it. But part of what you want is to do what you believe to be correct. It is a...

lukeprog, where would you place David Gauthier in your flow chart?

Some cognitivists think that [...] Other cognitivists think that [...]

Is there a test of the real world that could tell us that some of them are right and the others wrong? If not, what is the value of describing their thoughts?

It's clear to me that applied and normative ethics deal with real and important questions. They are, respectively, heuristics for certain situations, and analysis of possible failure modes of these heuristics.

But I don't understand what metaethics deals with. You write:

Metaethics: What does moral language mean? Do moral fa

[anonymous]: This is nicely put. I second the request: what is a metaethical question that could have a useful answer? It would be especially nice if the usefulness were clear from the question itself, and not from the answer that lukeprog is preparing to give.

Peterdjones: Exact definitions are easy to come by, so long as you are not bothered about correctness. Let morality = 42, for instance. If you are bothered about correctness, you need to solve metaethics, the question of what morality is, before you can exactly and correctly define "morality". I can understand the impatience with philosophy -- "why can't they just solve these problems" -- because that was my reaction when I first encountered it some 35 years ago. Did I solve philosophy? I only managed to nibble away at some edges. That's all anyone ever manages.

wedrifid: How dare you! 42 isn't even prime [http://lesswrong.com/lw/sy/sorting_pebbles_into_correct_heaps/], let alone right.

DanArmak: The problem isn't that I don't know the answer. The problem is that I don't understand the question. "Morality" is a word. "Understanding morality" is, first of all, understanding what people mean when they use that word. I already know the answer to that question: they mean a complex set of evolved behaviors that have to do with selecting and judging behaviors and other agents. Now that I've answered that question, if you claim there is a further unanswered question, you will need to specify what it is exactly. Otherwise it's no different from saying we must "solve the question of what a Platonic ideal is". There are many important questions about morality that need to be answered - how exactly people make moral decisions, how to predict and manipulate them, how to modify our own behavior to be more consistent, etc. But these are part of applied and normative ethics. I don't understand what metaethics is.

Peterdjones: Understanding morality is, second of all, deciding what, if anything, it actually is. Water actually is H2O, but you can use the word without knowing that, and you can't find out what water is just by studying how the word is used.

DanArmak: I think you don't understand my question. "Water" is H2O. And we can study H2O. "Morality" is a complex set of evolved behaviors, etc. We can study those behaviors. This is (ETA:) descriptive ethics. What is metaethics, though? And do you think there are questions to be asked about morals which are not questions about the different human behaviors that are sometimes labeled as morally relevant? Do you think there exists something in the universe, independent of human beings and the accidents of our evolution, that is called "morals"? The original post indicated that some philosophers think so.

Where does pluralistic moral reductionism go on the flowchart?

Wei_Dai: Given that Luke named his theory "pluralistic moral reductionism", Eliezer said his theory is closest to "moral functionalism", and Luke said his views are similar to Eliezer's, I think one can safely deduce that it belongs somewhere around the bottom of the chart, not far away from "analytic moral functionalism" and "standard moral reductionism". :)

endoself: Based on how I would answer the questions listed, and given that my views are similar to Eliezer's, I agree. The last question, as I understand it, is equivalent to "If you had a full description of all possible worlds, could you then say which choices are right in each world? Say 'no' if you instead think that you would have to additionally actually observe the real world to make moral choices." I might be misunderstanding something, since this seems like an obvious "yes", but I might be understanding 'too much', perhaps by conflating two things that some philosophers claim to be different due to their confusion.

lukeprog: It doesn't fit anywhere on the chart cuz it's just so freaking meta, yo. :)

But don't most philosophers do that: try to assemble all the other philosophers' positions in a chart while maintaining that their own position is too nuanced to be assigned a point on a chart? :)

lukeprog: My tone was facetious, but the content of my sentence above was literal. I don't think it's an advantage that my theory does or doesn't fit neatly on the above chart. It's just that my theory of metaethics doesn't quite have the same aims or subject matter as the theories presented on this chart. But anyway, you'll see what I mean once I have time to finish writing up the sequence...

Amanojack: Perhaps, but another general trend in philosophy seems to be that people spend centuries arguing over definitions. Anyone who points that out will necessarily be making a meta-critique, and hence not be a point on a chart (not that lukeprog's theory will necessarily be like that; we just have to wait and see).

gjm: It isn't; someone might perfectly well hold that ethical sentences express both propositions and emotional attitudes. But those people would not be classified as emotivists. It happens that some people hold the more specific position called emotivism, and it's useful to have a word for it.

RichardKennaway: Because most people cannot count any higher than one.

Again, we come to this issue of not having a precise definition of "right" and "wrong".

You're dodging the questions that I asked.

How do you determine which one is accurate? What observable consequences does each one predict? What do they lead you to anticipate?

Peterdjones: I am not dodging them. I am arguing that they are inappropriate to the domain, and that not all definitions have to work that way.

CuSithBell: But you already have determined that one of them is accurate, right?

NMJablonski: Any belief you have about the nature of reality that does not inform your anticipations in any way is meaningless. It's like believing in a god which can never be discovered. Good for you, but if the universe will play out exactly the same as if it wasn't there, why should I care? Furthermore, why posit the existence of such a thing at all?

[anonymous]: On a tangent - I think the subjectivist flavor of that is unfortunate. You're echoing Eliezer's Making Beliefs Pay Rent, but the anticipations that he's talking about are "anticipations of sensory experience". Ultimately, we are subject to natural selection, so maybe a more important rent to pay than anticipation of sensory experiences is not being removed from the gene pool. So we might instead say, "any belief you have about the nature of reality that does not improve your chances of survival in any way is meaningless." Elsewhere, in his article on Newcomb's paradox, Eliezer says: Survival is ultimate victory.

NMJablonski: I don't generally disagree with anything you wrote. Perhaps we miscommunicated. I think that would depend on how one uses "meaningless", but I appreciate wholeheartedly the sentiment that a rational agent wins, with the caveat that winning can mean something very different for various agents.

Peterdjones: Moral beliefs aren't beliefs about moral facts out there in reality; they are beliefs about what I should do next. "What should I do" is an orthogonal question to "what can I expect if I do X". Since I can reason morally, I am hardly positing anything without warrant.

NMJablonski: You just bundled up the whole issue, shoved it inside the word "should", and acted like it had been resolved.

Peterdjones: I have stated several times that the whole issue has not been resolved. All I'm doing at the moment is refuting your over-hasty generalisation that "morality doesn't work like empirical prediction, so ditch the whole thing". It doesn't work like the empiricism you are used to because it is, in broad brush strokes, a different thing that solves a different problem.

NMJablonski: Can you recognize that, from my position, it doesn't work like the empiricism I'm used to because it's almost entirely nonsensical appeals to nothing, arguing by definitions, and the exercising of the blind muscles of eld philosophy? I am unpersuaded that there exists a set of correct preferences. You have, as far as I can see, made no effort to persuade me, but rather just repeatedly asserted that there are, and asked me questions in terms that you refuse to define. I am not sure what you want from me in this case. Why should I accept your bald assertions here?

Peterdjones: You may be entirely of the opinion that it is all stuff and nonsense: I am only interested in what can be rationally argued. I don't think you think it works like empiricism. I think you have tried to make it work like empiricism and then given up. "I have a hammer in my hand, and it won't work on this 'screw' of yours, so you should discard it." People can and do reason about what preferences they should have, and such reasoning can be as objective as mathematical reasoning, without the need for a special arena of objects.

I'm not really sure what a "mistake of rationality" is, or how it differs from simply being mistaken about something.

That said, I would agree with you that my Roman Catholic atheist friend is not arriving at his atheism in a particularly rational way.

WRT woojits, I'm not jumping to any conclusions: I arrived at that conclusion step-by-step. Again: "Having a well-defined notion of something is a prerequisite for belief in it; I don't have a well-defined notion of woojits; therefore I don't believe in woojits." You're free to disagree with any part of that or all of it, but I'd prefer you didn't simply ignore it.

[anonymous]:

Do you believe in God? If I defended the notion of God in a similar way -- it is not straightforwardly empirical, it's inappropriate to demand concrete definitions, it's not under the domain of science, just because you can't define it and measure it doesn't mean it doesn't exist -- would you find that persuasive?

Peterdjones: But I am only defending the idea that morality means something. Atheists think "God" means something. "Uncountable set" means something even if the idea is thoroughly non-concrete.

[anonymous]: Sure, but few-to-no atheists would say something like "'God' means something, but exactly what is an open problem." The idea of someone refusing to say what they mean by "uncountable set" is even stranger.

... I'm not down-voting the comments I disagree with.

I down-voted a couple of snide comments from Peter earlier.

Eugine_Nier: Well, somebody is. If it's not you, I'm sorry.

[anonymous]:

5, 8, 9, and so on.

Just explain what you mean, already. Otherwise, I've got better things to do.

I understand English. Please proceed. (I can't speak for the other participants, but I infer that they understand English as well.)

Would you be willing to move this to the IRC?

[anonymous]:

Not a good analogy. The objective element of 'wrong' is entirely different in nature to that of 'dangerous' even though by many definitions it does, in fact, exist.

The word "danger" illustrates a point about logic. The logical point is that the fact that X is often used to persuade people does not mean that the nature of X is that it is " a way of getting other people to do otherwise than what they wanted to do". The common use of the word "danger" is an illustration of this logical point. The illustration is correct.

[anonymous]: The objectivity of 'danger' is entirely different from that of 'wrong'. As such, using it as an argument here is misleading and confused.
NMJablonski: Upvoted both of you for an interesting discussion. It has reached the point it usually does in metaethics where I have to ask for someone to explain: what the hell does it mean for something to be objectively wrong? (This isn't targeted at you specifically, wedrifid; it just isn't clear to me what the objectivity of "wrongness" could possibly refer to.)
Amanojack: Yeah, no one can ever seem to explain what "objectively wrong" would even mean. That's because to call an action wrong is to imply that there is a negative value placed on that action, and for that to be the case you need a valuer. Someone has to do the valuing. Maybe a large group of people - or maybe everyone - values the action negatively, but that is still nothing more than a bunch of individuals engaging in subjective valuation. It may be universal subjective valuation, or maybe they think it's God's subjective valuation, but if so it seems better to spell that out plainly than to obscure it with the authoritative- and scientific-sounding modifier "objective".
Peterdjones: The fact that something is done by a subject doesn't necessarily make it subjective. It takes a subject to add 2 and 2, but the answer is objective. There are many ideas as to what "objectively right" could mean. Two of Kant's famous suggestions are "act only on that maxim you would wish to be universal law" and "treat people always as ends and never as means".
NMJablonski: This encapsulates my thoughts on metaethics entirely.
[anonymous]: A hard question, but I will try to give a brief answer. Morality is an aspect of social custom. Roughly, it is those customs that are enforced especially vigorously. But an important point here is that while some customs are somewhat arbitrary and vary from place to place, other customs are much less arbitrary. It is these least arbitrary moral customs that we most commonly think of as universal morality, applicable to and recognized by all humanity.

Here's an example: go anywhere in the world as a tourist, and (in full view of a lot of typical people who are minding their own business, maybe traveling, maybe buying or selling, maybe chatting) push somebody in front of a train, killing them. Just a random person. See how people around you react. Recommendation: do this as a thought experiment, not an actual experiment. I'll tell you right now how people around the world will react: they'll be horrified, and they'll try to detain you or incapacitate you, possibly kill you. They will have a word in their language for what you just did, which will translate very well to the English word "murder".

But why is this? Why aren't customs fully arbitrary? This puzzle, I think, is best understood if we think of society as a many-player game. That is, we apply the concepts of game theory to the problem. Custom is a Nash equilibrium. To follow custom is to act in accordance with your equilibrium strategy in this Nash equilibrium. Nash equilibria are not fully arbitrary - and this explains right away at least the general point that customs are not fully arbitrary. While not arbitrary, Nash equilibria are not necessarily unique, particularly since different societies exist in different environmental conditions, and so different societies can have different sets of customs. However, the customs of all societies around the world, or at least all societies with very few exceptions, share common elements.
People across the world will be appalled if you kill someone arbitrarily. People ac
Amanojack: I don't know about the Nash equilibria, but I agree with most everything you've written here. I'd just prefer to call that (quasi-)universal subjective ethics, and to use language that reflects that, as there are exceptions - call them psychopaths or whatever - in the interest of accuracy. And the other problem with the objectivist interpretation of custom is that sometimes customs do have to change, and sometimes customs are barbaric. It seems that what you were getting at with "actually wrong" in your initial post was the idea that these kinds of moral sentiments are universal, which I can buy, but even that is a bit of a leaky generalization [http://lesswrong.com/lw/lc/leaky_generalizations/].
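The "custom as Nash equilibrium" idea from the comment above can be made concrete with a toy coordination game. This is just an illustrative sketch (the driving example and payoff numbers are my own, not from the discussion): a convention like which side of the road to drive on is a Nash equilibrium because nobody gains by unilaterally deviating from it, yet multiple equally good equilibria exist, which is why conventions can differ between societies.

```python
# Toy coordination game: two drivers each pick a side of the road.
# Both players get 1 if they coordinate, 0 (a crash) otherwise.
STRATEGIES = ("left", "right")

def payoff(a, b):
    """Payoffs for (player A, player B) given their strategy choices."""
    return (1, 1) if a == b else (0, 0)

def is_nash(a, b):
    """A profile is a Nash equilibrium if neither player can do strictly
    better by switching strategies while the other's choice stays fixed."""
    pa, pb = payoff(a, b)
    no_better_a = all(payoff(alt, b)[0] <= pa for alt in STRATEGIES)
    no_better_b = all(payoff(a, alt)[1] <= pb for alt in STRATEGIES)
    return no_better_a and no_better_b

equilibria = [(a, b) for a in STRATEGIES for b in STRATEGIES if is_nash(a, b)]
print(equilibria)  # two equilibria: both-left and both-right
```

Both ("left", "left") and ("right", "right") come out as equilibria, mirroring the point that customs are constrained (mixed profiles are unstable) without being unique.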
wedrifid: Pardon me. I deleted my comment before I noticed that someone had replied. (I didn't think replying to Constant was going to be beneficial. To be honest, I didn't share your perception of the interestingness of the conversation, even though I was a participant.) Very little, practically speaking. It is a concept somewhat related to subjectively objective [http://lesswrong.com/lw/s6/probability_is_subjectively_objective/]. It doesn't make the value judgements any less subjective; it is just that they happen to be built into the word definitions themselves. It doesn't make words like 'should' and 'wrong' any more useful when people with different values are arguing; it just takes one of the meanings of 'should' as it is used practically and makes it explicit. I think the sophisticated name may be something related to moral cognitivism, probably with a 'realism' thrown in somewhere for good measure.
[anonymous]: I am not comparing the objectivity of "danger" to the objectivity of "wrong". I am not stating or implying that their objectivity is the same or similar. I am using the word "danger" as an illustration of a point. The point is correct, and the illustration is correct. That "danger" has different objectivity from "wrong" is not relevant to the point I was illustrating.
wedrifid: There is an objective sense in which an analogy is good or bad, related closely to the concept of reference class tennis. Having one technical similarity does not make an analogy appropriate, and certainly does not prevent it from being misleading. This example of 'for example' is objectively 'bad'.

Well, when you have something substantive and meaningful to point to let me know. I suggest tabooing words like "ethics", "morality", "should", etc. If you can give me a clear reductionist description of what you're talking about in metaethics without using those words, I'd love to hear it.

Sometimes one hears the term "moral realism," and in fact that term appears pretty often in your bibliography but not in the main text of your post. Would I be right to think that it comprises everything on the flowchart downstream of the answer "Yes" to the question "Are those beliefs about facts that are constituted by something other than human opinion?"?

lukeprog: There are many definitions of moral realism. See here [http://wordsideasandthings.blogspot.com/2011/04/what-is-moral-realism.html]. But yes, your intuition here is roughly the one about the meaning of 'moral realism' shared by mainstream philosophers.

Flowchart is gone :|

If that's a definition of morality, then morality is a subset of psychology, which probably isn't what you wanted.

If that's a valid argument, then logic, mathematics, etc are branches of psychology.

There's a difference between changing your mind because a discussion led you to bound your rationality differently, and changing your mind because of suggestibility and other forms of sloppy thinking. Logic and mathematics are the former, if done right. I haven't seen much non-sloppy thinking on the subject of changing preferences.

I suppose there could... (read more)

Peterdjones: But there are other stories where the preference itself changes. "If you approve of women's rights, you should approve of gay rights." Everything is a mixture of the invalid and the valid. Why throw something out instead of doing it better?
TimFreeman: IMO we should have gay rights because gays want them, not because moral suasion was used successfully on people opposed to gay rights. Even if your argument above worked, I can't envision a plausible reasoning system in which the argument is valid. Can you offer one? Otherwise, it only worked because the listener was confused, and we're back to morality being a special case of psychology again. Because I don't know how to do moral arguments better. So far as I can tell, they always seem to wind up either being wrong, or not being moral arguments.

So what matters, then, is if all dictionaries have it? Why does that matter? Does this mean we couldn't have this discussion before dictionaries were invented? Did the nature of morality change with the invention of the dictionary? Moreover, if one got every dictionary to include "boojum" and "snark", would that then make them different?

Peterdjones: If a word is defined in all dictionaries, then the claim that it is completely meaningless is extraordinary and poorly motivated. Dictionaries are of course only significant because they make usage concrete.
JoshuaZ: The claim was about incoherence, not about whether it was "completely meaningless", and I fail to see how motivation is relevant, or how you get anything about a claim being poorly motivated from this. If you prefer a different analogy, consider such terms as transubstantiation, consubstantiation, homoousion, hypostatic union, kerygma and modalism. Similarly, in a Hebrew dictionary you will have all ten Sephirot defined (Keter, Chochmah, etc.). Is it extraordinary and poorly motivated to say that these kabbalistic terms are incoherent?
Peterdjones: The point about motivation is about where burdens lie. The discussion so far has been about the accusation that somebody somewhere is culpably refusing to define "morality". This is the first mention of incoherence. "Incoherent" is often used as a loose synonym for "I don't like it". That is not a useful form of argument. The examples of "incoherent" concepts you gave are a mixed bag, ranging from the well defined but false, to the well defined but ungrounded, to the ill defined. If you want to say what specific kind of incoherence "morality" has, in your opinion, feel free.
JoshuaZ: How are motivations relevant to where burdens lie? Really? So, what about here [http://lesswrong.com/lw/5eh/what_is_metaethics/41ou]? You seem confused about what CuSithBell is arguing. The argument is not that morality is fundamentally incoherent or meaningless, but that most definitions of it fall into those categories and that our common intuition is not sufficient to have useful discussions about it, so you need to supply a definition for what you mean. So far, you seem to have refused to do that. Do you see the distinction?

How is that relevant? I don't see why the presence in a dictionary matters. But even if it did, boojum is in some dictionaries and encyclopedia too. It is a type of snark.

Replace woojit then with boojum and the point still goes through.

Which is true, and explains why it is a harder problem than physics and why less progress has been made.

I'm not sure I accept either of those claims, explanation or no.

I'd like to hear more on this charter for pseudo-solutions. What's wrong with the mainstream LW picture? By private message or in this thread (if it's not too tangential) or in a new discussion thread.

??? I'm just trying to understand what your definition of morality is.

I think you will find my thoughts on this matter are relatively common in this community.

You need to show that there are no categorical rights and wrongs.

I don't need to do that if I don't want to do that. If you want me to act according to categorical rights and wrongs then you need to show me that they exist.

[anonymous]:

The material on reductionism on this site seems to me a charter for coming up with pseudo-solutions that just sweep the problems under the rug.

Could you explain this further?

I've read Nietzsche, and I'm an ethicist of sorts, and I think Nietzsche is not a prerequisite for understanding either normative ethics or metaethics.

Watching this series with interest. I liked the taboo thing in the first post; reminds me of my favorite Hume quote:

'tis usual for men to use words for ideas, and to talk instead of thinking in their reasonings.

Tangent: I think Ayer's observation was correct but he had the implication backwards. The English sentence "Yuck!" contains the assertion "That is bad." and is truth-apt.

I have launched into arguments with people after they expressed distaste, and I think it was at least properly grammatical. A start: "What's yucky about that?"

Scott Alexander: When I was in Thailand, I saw some local tribesmen eat a popular snack of giant beetles. I said "Yuck!" and couldn't watch them. However, I recognize that there's nothing weirder about eating a bug than about eating a chicken, and that they're perfectly healthy and nutritious to people who haven't been raised to fear eating them.
Amanojack: To interpret "Yuck!" as "That is bad/yucky" is to turn what is ostensibly an expression of subjective experience into an ostensibly "objective" statement. You may as well keep it subjective and interpret it as "I am experiencing revulsion." But you'd have to be a pretty cunning arguer to get into a debate about whether another person is really having a subjective experience of revulsion!
thomblake: It's both - expressing revulsion has a normative component, and so does even experiencing revulsion. To illustrate: if I eat something and exclaim "Oishii!", that not only expresses that I am "experiencing deliciousness", but also that the thing I'm tasting "is delicious" - my wife can try it with the expectation that when she eats it she will also "experience deliciousness". It is a good-tasting thing.
Amanojack: It still sounds just like two people experiencing subjective deliciousness. What if a third person, or a dog, or Clippy, finds it not so delicious?

This is not quite correct. The error theorist can hold that a statement like "Murder is not wrong" is true, since they think that murder is neither wrong nor right.

Should that be "The error theorist can't hold that a statement like 'Murder is not wrong' is true"?

(Also, it's not clear to me that classifying error theory as cognitivist is correct. If it claims that all moral statements are based on a fundamentally mistaken intuition, so that "Murder is wrong" has no more factual content than "Murder is flibberty", then is ... (read more)

Alicorn: No. The error theorist may hold "murder is not wrong" and "murder is not right" to be true. Ey just has to hold "murder is wrong" and "murder is right" to be false, and if ey wants to endorse the "not" statements I guess a rule that "things don't have to be either right or wrong" must operate in the background.
lukeprog: ata, in this case I managed to say it correctly the first time. :) If you're not sure about this stuff, you can read the first chapter of Joyce's 'The Myth of Morality', the central statement of contemporary error theory.
ata: I can see how an error theorist would agree with "Murder is not wrong" in the same sense in which I'd agree with "Murder is not purple", but it's a strange and not very useful sense. My impression had been that error theorists claim that there are no "right" or "wrong" buckets to sort things into in the first place, rather than proposing that both buckets are there but empty - more like ignosticism than atheism. Am I mistaken about that?
Larks: Error theorists believe that when people say "Murder is wrong", those people are actually trying to claim that it is a fact that murder has the property of being wrong. However, those people are incorrect (error theorists think) because murder does not have the property of being wrong - because nothing has the property of being wrong. It's not about whether or not there are buckets - error theory just says that most people think there is stuff in the buckets, but they're incorrect.
prase: I smell a peculiar odour of inconsistency. (That means: add some modifier, such as "morally", to the second "wrong", else it sounds really weird.)

Funny how you never quite answer the question as stated. Can you even say it is subjectively wrong?

It isn't 'funny' at all. You were trying to force someone into a lose-lose morality-signalling position. It is appropriate to ignore such attempts and instead state what your actual position is.

Your gambit here verges on logically rude.

Can you give some of those? I'd be curious what such a list would look like.

Peterdjones: E.g., murder, stealing.
NMJablonski: So what makes an intuition a core intuition, and how did you determine that your intuitions about murder and stealing are core?
JoshuaZ: That's a pretty short list.

you should not conclude that woojits don't exist because you don't know what they are


[anonymous]:

I believe in consciousness, but don't have a well defined notion of it.

This doesn't strike you as being a problem?

Yes, but we've already determined that we don't disagree - unless you think we still do? I was arguing against observing objective (i.e. externally existing) morality. I suspect that you disagree more with Eugine_Nier.

[anonymous]:

I probably don't understand what you mean.

I think that it's easy to be an atheist -- i.e. one doesn't have to make any difficult definitions or arguments to arrive at atheism, and those easy definitions and arguments are correct. If you think it's harder than I do, that would be interesting and could explain why we have such different opinions here.

Peterdjones: Fine. Then the atheist who doesn't have a difficult definition of God isn't culpably refusing to explain her "new idea", and someone who thinks there is something to be said about morality can stick with the vanilla definition that morality is Right and Wrong and Such.
[anonymous]:

You continue to misrepresent my position.

Alright sport. If you're unwilling to explain, you can go on being an amateur ethicist and I'll resume my policy of ignoring the field until something interesting happens.

How neat is the dichotomy between cognitivists and non-cognitivists? Are there significant philosophical factions holding positions such as

  • "Murder is wrong" is a statement of belief, but it also expresses an emotion (and morality's peculiar charm comes from appealing both to a person's analytical mind and to their instincts)
  • Some people approach morality as a system of beliefs, others as gut reactions, and this is connected to their personalities in interesting ways
  • Or perhaps the same person can shift over time from gut reactions to believing
... (read more)

I'm wondering whether emotive responses lack logical content, and also whether belief-based morality requires emotive backing (failure of utilitarianism--yuck!) to move people to action.

[anonymous]:

Off-topic: at least for me, your text feels like it is "cut off" - it does not seem to have closure - like a classical solo concert that is stopped after the final cadence of the soloist, before the orchestra sets in again. Is this intended?

One major debate in moral psychology concerns whether moral judgments require some (defeasible) motivation to adhere to the moral judgment (motivational internalism), or whether one can make a moral judgment without being motivated to adhere to it (motivational externalism).

One of the first two "moral judgements" in this confusing sentence is probably a typo. "Defeasible" just makes things more confusing. Maybe follow the vein of your linked Wikipedia paragraph more closely?

Our moral judgments are greatly affected by pointing magnets at the point in our brain that processes theory of mind.

The way this is worded makes it seem that the result is produced by static magnetic fields. And that makes it sound like 19th century pseudo-science.

We use our recently-evolved neocortex to make utilitarian judgments, and deontological judgments tend to come from our older 'chimp' brains.

And the way this is worded makes it seem that you think the neocortex is something that evolved since we separated from the chimps.

Moral nat

... (read more)
lukeprog: I was trying to make use of Greene's phrase: 'inner chimp'. But you're right; it's not that accurate. I've adjusted the wording above.
Perplexed: I don't think it is Greene's phrase. I spent some time searching, and can find only one place where he used it - a 2007 RadioLab interview with Krulwich [http://www.radiolab.org/2007/aug/13/chimp-fights-and-trolley-rides/]. I would be willing to bet that he was primed to use that phrase by the journalist. He doesn't even use the word chimp in the cited paper [http://www.fed.cuhk.edu.hk/~lchang/material/Evolutionary/Developmental/Greene-KantSoul.pdf].

In any case, Greene's arguments are incoherent even by the usual lax standards of evolutionary psychology and consequentialist naturalistic ethics. He suggests that a consequentialist foundation for ethics is superior to a deontological foundation because 'consequentialist moral intuitions' flow from a more recently evolved portion of the brain. Now it should be obvious that one cannot jump from 'more recently evolved' to 'superior as a moral basis'. You can't even get from 'more recently evolved' to 'more characteristically human'. Maybe you can get to 'more idiosyncratically human'. But even that only helps if you are comparing moral judgements on which deontologists and consequentialists differ.

But Greene does not do that. Instead of comparing two different judgements about the same situation, he discusses two different situations, in both of which pretty much everyone's moral intuitions agree. He calls the intuitions that everyone has in one situation 'consequentialist' and the intuitions in the other situation 'deontological'! Now, most people would object that deontology has nothing to do with intuition. Greene has an answer:

And so, having completely restructured the playing field, he reaches the following conclusions:

Let me get this straight. The portions of our brains that generate what Greene dubs 'deontological intuitions' are evolutionarily ancient, present in all animals.
So Greene dismisses those intuitions as "morally irrelevant" since they ultimately arise from "factors having to do with the con
lukeprog: I remember Greene's position being more nuanced than that, but it's been a while since I read his dissertation. In any case, I'm not defending his view. I only claimed that (in its revised wording) "We use our recently-evolved neocortex to make utilitarian judgments, and deontological judgments tend to come from evolutionarily older parts of our brains."
AlephNeil: That's obvious to my prefrontal cortex, but my inner chimp finds the idea desperately appealing.
Peterdjones: That's a distinction that makes sense if deontology is hardwired whilst consequentialism varies with evidence.