Below is a sketch of a moral anti-realist position based on the map-territory distinction, Hume and studies of psychopaths. Hopefully it is productive.

The Map is Not the Territory Reviewed

Consider the founding metaphor of Less Wrong: the map-territory distinction. Beliefs are to reality as maps are to territory. As the wiki says:

Since our predictions don't always come true, we need different words to describe the thingy that generates our predictions and the thingy that generates our experimental results. The first thingy is called "belief", the second thingy "reality".

Of course the map is not the territory.

Here is Albert Einstein making much the same analogy:

Physical concepts are free creations of the human mind and are not, however it may seem, uniquely determined by the external world. In our endeavor to understand reality we are somewhat like a man trying to understand the mechanism of a closed watch. He sees the face and the moving hands, even hears its ticking, but he has no way of opening the case. If he is ingenious he may form some picture of a mechanism which could be responsible for all the things he observes, but he may never be quite sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and cannot even imagine the possibility or the meaning of such a comparison. But he certainly believes that, as his knowledge increases, his picture of reality will become simpler and simpler and will explain a wider and wider range of his sensuous impressions. He may also believe in the existence of the ideal limit of knowledge and that it is approached by the human mind. He may call this ideal limit the objective truth.

The above notions about beliefs involve pictorial analogs, but we can also imagine other ways the same information could be contained. If the ideal map is turned into a series of sentences we can define a 'fact' as any sentence in the ideal map (IM). The moral realist position can then be stated as follows:

Moral Realism: ∃x((x ⊆ IM) ∧ (x = M))

In English: there is some set of sentences x such that every sentence in x is part of the ideal map and x constitutes a complete account of morality (M).

Moral anti-realism simply negates the above: ¬∃x((x ⊆ IM) ∧ (x = M)).
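To make the set-theoretic formulation concrete, here is a toy sketch of the subset test it expresses. The sentences are invented placeholders, not claims about what an actual ideal map contains:

```python
# Toy model: the ideal map (IM) as a set of sentences, and a candidate
# moral code M. On this formulation, moral realism holds iff every
# sentence of M also appears in IM.

ideal_map = {
    "water is H2O",
    "the cat is on the mat",
    "humans have amygdalae",
}

moral_code = {
    "gratuitous cruelty is wrong",
}

def moral_realism_holds(M, IM):
    """True iff the moral code is a subset of the ideal map."""
    return M <= IM  # set-subset test: every moral 'fact' is a fact

print(moral_realism_holds(moral_code, ideal_map))  # False for this toy IM
```

The whole meta-ethical dispute, so phrased, is over whether any set like `moral_code` passes this test.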

Now it might seem that, as long as our concept of morality doesn't require the existence of entities like non-natural gods, which don't appear to figure into an ideal map, moral realism must be true (where else but the territory could morality be?). The problem of ethics, then, is chiefly one of finding a satisfactory reduction of moral language into sentences we are confident of finding in the IM. Moreover, the 'folk' meta-ethics certainly seems to be a realist one. People routinely use moral predicates and speak of having moral beliefs. "Stealing that money was wrong", "I believe abortion is immoral", "Hitler was a bad person". In other words, in the maps people *actually have right now*, a moral code seems to exist.



Beliefs vs. Preferences

But we don't think talking about belief networks is sufficient for modeling an agent's behavior. To predict what other agents will do we need to know both their beliefs and their preferences (or call them goals, desires, affect or utility function). And when we're making our own choices we don't think we're responding merely to beliefs about the external world. Rather, it seems like we're also responding to an internal algorithm that helps us decide between actions according to various criteria, many of which reference the external world.
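The point that beliefs alone don't determine behavior can be sketched in a few lines. The scenario and all the numbers below are invented for illustration:

```python
# Minimal agent sketch: predicting choices requires both a belief
# function (probabilities over outcomes) and a utility function
# (preferences over outcomes).

def choose(actions, beliefs, utility):
    """Pick the action with the highest expected utility under the
    agent's beliefs. beliefs[action] maps outcomes to probabilities."""
    def expected_utility(action):
        return sum(p * utility(outcome)
                   for outcome, p in beliefs[action].items())
    return max(actions, key=expected_utility)

beliefs = {
    "carry umbrella": {"dry": 1.0},
    "no umbrella":    {"dry": 0.7, "wet": 0.3},
}

# Two agents with identical beliefs but different utility functions:
likes_dry  = {"dry": 1.0, "wet": -2.0}.get
likes_rain = {"dry": 0.0, "wet": 1.0}.get

print(choose(list(beliefs), beliefs, likes_dry))   # carry umbrella
print(choose(list(beliefs), beliefs, likes_rain))  # no umbrella
```

Same map, different choices: the belief network underdetermines the action until a preference ordering is supplied.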

The distinction between belief function and utility function shouldn't be new to anyone here. I bring it up because the queer thing about moral statements is that they seem to be self-motivating. They're not merely descriptive, they're prescriptive. So we have good reason to think that they call on our utility function. One way of phrasing a moral non-cognitivist position is to say that moral statements are properly thought of as expressions of an individual's utility function rather than sentences describing the world.

Note that 'expressions of an individual's utility function' is not the same as 'sentences describing an individual's utility function'. The latter is something like 'I prefer chocolate to vanilla'; the former is something like 'Mmmm, chocolate!' It's how the utility function feels from the inside. And the way a utility function feels from the inside appears to be, or at least involve, emotion.

Projectivism and Psychopathy

That our brains might routinely turn expressions of our utility function into properties of the external world shouldn't be surprising. This was essentially Hume's position. From the Stanford Encyclopedia of Philosophy:

Projectivism is best thought of as a causal account of moral experience. Consider a straightforward, observation-based moral judgment: Jane sees two youths hurting a cat and thinks “That is impermissible.” The causal story begins with a real event in the world: two youth performing actions, a suffering cat, etc. Then there is Jane's sensory perception of this event (she sees the youths, hears the cat's howls, etc.). Jane may form certain inferential beliefs concerning, say, the youths' intentions, the cats' pain, etc. All this prompts in Jane an emotion: She disapproves (say). She then “projects” this emotion onto her experience of the world, which results in her judging the action to be impermissible. In David Hume's words: “taste [as opposed to reason] has a productive faculty, and gilding and staining all natural objects with the colours, borrowed from internal sentiment, raises in a manner a new creation” (Hume [1751] 1983: 88). Here, impermissibility is the “new creation.” This is not to say that Jane “sees” the action to instantiate impermissibility in the same way as she sees the cat to instantiate brownness; but she judges the world to contain a certain quality, and her doing so is not the product of her tracking a real feature of the world, but is, rather, prompted by an emotional experience.

This account has surface plausibility. Moreover, it has substantial support in the psychological literature. In particular, the behavior of psychopaths closely matches what we would expect if the projectivist thesis were true. The distinctive neurobiological feature of psychopathy is impaired function of the amygdala, the region mainly associated with emotional processing and memory. Obviously, as a group psychopaths tend toward moral deficiency. But more importantly, psychopaths fail to make the normal human distinction between morality and convention. Thus a plausible account of moral judgment is that it requires both social convention and emotional reaction. See the work of Shaun Nichols, in particular this for an extended discussion of the implications of psychopathy for metaethics and his book for a broader, empirically informed account of sentimentalist morality. Auditory learners might benefit from this bloggingheads he did.

If the projectivist account is right the difference between non-cognitivism and error theory is essentially one of emphasis. If you want to call moral judgments beliefs based on the above account then you are an error theorist. If you think they're a kind of pseudo-belief then you're a non-cognitivist.

But utility functions are part of the territory described by the map!

Modeling reality has a recursive element which tends to generate considerable confusion over multiple domains. The issue is that somewhere in any good map of the territory will be a description of the agent doing the mapping. So agents end up with beliefs about what they believe and beliefs about what they desire. Thus, we might think there could be a set of sentences in IM that make up our morality so long as some of those sentences describe our utility function. That is, the motivational aspect of morality can be accounted for by including in the reduction both a) a sentence which describes what conditions are to be preferred to others and b) a statement which says that the agent prefers such conditions. 

The problem is, our morality doesn't seem completely responsive to hypothetical and counterfactual shifts in what our utility function is. That is, *if* I thought causing suffering in others was something I should do and I got good feelings from doing it, that *wouldn't* make causing suffering moral (though Sadist Jack might think it was). In other words, changing one's morality function isn't a way to change what is moral (perhaps this judgment is uncommon; we should test it).

This does not mean the morality subroutine of your utility function isn't responsive to changes in other parts of the utility function. If you think fulfilling your own non-moral desires is a moral good then which actions are moral will depend on how your non-moral desires change. But hypothetical changes in our morality subroutine don't change our moral judgments about our actions in the hypothetical. This is because when we make moral judgments we *don't* look at our map of the world to find out what our morality says; rather, we have an emotional reaction to a set of facts and that emotional reaction generates the moral belief. Below is a diagram that somewhat messily describes what I'm talking about.


On the left we have the external world which generates the sensory inputs our agent uses to form beliefs. Those beliefs are then input into the utility function, a subroutine of which is morality. The utility function outputs the action the agent chooses. On the right we have zoomed in on the green Map circle from the left. Here we see that the map includes moral 'beliefs' (note that this isn't an ideal map) which have been projected from the morality subroutine in the utility function. Then we have, also within the Map, the self-representation of the agent which in turn includes her algorithms and mental states. Note that altering morality of the self-representation won't change the output of the morality subroutine of the first level of the model. Of course, in an ideal map the self-representation would match the first level but that doesn't change the causal or phenomenal story of how moral judgments are made.
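The causal structure the diagram depicts can be sketched as a toy agent. The events and the reaction rule are invented stand-ins; the point is only the wiring, that judgments flow from the live subroutine, never from the map's self-description:

```python
# Sketch of the projection story: moral 'beliefs' in the map are
# generated by a live emotional subroutine (part of the utility
# function), not read off the map's self-representation. So editing
# the self-representation leaves the judgments unchanged.

class Agent:
    def __init__(self):
        # First level: the live morality subroutine.
        self.moral_reaction = (
            lambda event: "disapprove" if "suffering" in event else "neutral"
        )
        # Inside the map: the agent's description of her own morality.
        self.self_representation = {"my morality": "sentimentalist"}
        self.map_beliefs = {}

    def judge(self, event):
        # The judgment is caused by the emotional reaction, then
        # 'projected' onto the map as a property of the event itself.
        if self.moral_reaction(event) == "disapprove":
            self.map_beliefs[event] = "impermissible"
        return self.map_beliefs.get(event, "permissible")

a = Agent()
print(a.judge("youths causing suffering to a cat"))  # impermissible

# Rewriting the self-representation does not alter the judgment,
# because judge() never consults it:
a.self_representation["my morality"] = "anything goes"
print(a.judge("youths causing suffering to a cat"))  # still impermissible
```

The category error described below corresponds to confusing `self_representation` (a description in the map) with `moral_reaction` (the subroutine that actually drives judgment and choice).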

Observe how easy it is to make category errors if this model is accurate. Since we're projecting our moral subroutine onto our map and we're depicting ourselves in the map it is very easy to think that morality is something we're learning about from the external world (if not from sensory input then from a priori reflection!). Of course, morality is in the external world in a meaningful sense since our brains are in the external world. But learning what is in our brains is not motivating in the way moral judgments are supposed to be. This diagram explains why: the facts about our moral code in our self-representation are not directly connected to our choice circuits which cause us to perform actions. Simply stating what our brains are like will not activate our utility function and so the expressive content of moral language will be left out. This is Hume's is-ought distinction: 'ought' sentences can't be derived from 'is' sentences because ought sentences involve the activation of the utility function at the first level of the diagram, whereas 'is' sentences are exclusively part of the map.

And of course since agents can have different morality functions there are no universally compelling arguments.

The above is the anti-realist position given in terms I think Less Wrong is comfortable with. It has the following things in its favor: it does not posit any 'queer' moral properties as having objective existence and it fully accounts for the motivating, prescriptive aspect of moral language. It is psychologically sound, albeit simplistic. It is naturalistic while providing an account of why meta-ethics is so confusing to begin with. It explains our naive moral realism and dissolves the difference between the two prominent anti-realist camps.

Comments

The post heavily relies on moral internalism without arguing for it. Internalism holds that a necessary connection exists between sincere moral judgment and motivation. As the post says, "moral statements [...] seem to be self-motivating." I've never seen a deeply plausible argument for internalism, and I'm pretty sure it's false. The ability of many psychopaths to use moral language in a normal way, and in some cases to agree that they've done evil and assert that they just don't care, would seem to refute it.

Upvoted for giving a clear statement of an anti-realist view.

As the study I link to in the post points out, even though psychopaths often make accurate moral judgments, they don't seem to understand the difference between morality and convention. It seems like they can agree they've done evil and assert that they don't care, but that's because they're using evil to mean "against convention" and not what we mean by it. You're right that it's a weaker point of the post, though. Didn't really have room or time to say everything. Just to start: imagine a collection of minds without any moral motivation. How would they learn what is moral? (What we do is closely examine the contours of what we are motivated to do, right?)
Psychopaths, or at least convicted criminals (the likely target of research), may lack the distinction between moral and conventional. But there are brain-damage-induced cases of sociopathy in which individuals can still make that distinction (page 2 of the link). These patients with ventromedial frontal brain damage retain their moral reasoning abilities and beliefs but lose their moral motivation. So, I don't think even the claim that moral judgments necessarily carry some motivational force is true. @lessdazed: nice point.
Great article, really exciting to read because this is exactly the kind of thing projectivism expects us to find. You need an intact VM cortex to develop moral beliefs in the first place. Once your emotional responses are projected into beliefs about the external world, you can lose the emotional response through VM cortex damage but retain the beliefs without the motivation.
I agree, projectivism strongly predicts that emotional faculties will be vital to moral development. But most cognitivist approaches would also predict that the emotional brain has a large role to play. For example, consider this part of the article: People who can't tell whether others are suffering or prospering are going to be seriously impaired in moral learning, on almost any philosophical ethical view.
Sure. But, to tie it back to what we were discussing before, that internalism is false when it comes to moral beliefs is not evidence against a projectivist and non-cognitivist thesis. As a tentative aside-- I'm not sure whether or not internalism is a necessary part of the anti-realist position. It seems conceivable that there could be preferences, desires or emotive dispositions that aren't motivating at all. It certainly seems psychologically implausible- but it doesn't follow that it is impossible. Someone should do a series of qualitative interviews with VM cortex impaired patients. I'd like to know things like what "ought" means to them.
In a Bayesian sense, the falsity of internalism tends to weaken the case for projectivism and non-cognitivism, by taking away an otherwise promising line of support for them. Mackie's argument from queerness relies upon it, for example.
Mackie conflates two aspects of queerness: motivation and direction, the latter of which remains even if motivational internalism is false. Second, that motivation can be detached from moral judgment in impaired brains doesn't mean that moral facts don't have a queer association with motivation.
If we all agree that some different moral statements are motivating in different amounts, the burden of proof is on the one who says that a certain amount of motivation is impossible. E.g. The belief "It would be nice to help a friend by helping carry their couch up the stairs to their apartment" makes me feel mildly inclined to help. The belief "It would be really nice to give the homeless guy who asked for food a sandwich" makes me significantly inclined to help. Why would it be impossible for me to believe "It would be nice to help my friend with his diet when he visits me" and feel nothing at all?

One way of phrasing a moral non-cognitivist position is to say that moral statements are properly thought of as expressions of an individual's utility function rather than sentences describing the world.

Note that 'expressions of an individual's utility function' is not the same as 'sentences describing an individual's utility function'. The latter is something like 'I prefer chocolate to vanilla' the former is something like 'Mmmm chocolate!'. It's how the utility function feels from the inside. And the way a utility function feels from the inside appears

I'm skeptical that moral deliberation actually overrides emotions directly. It seems more likely that it changes the beliefs that are input into the utility function (though often in a subtle way). Obviously this can lead to the expression of different emotions. Second, we should be extremely skeptical when we think we've reasoned from one position to another even when the facts haven't changed. If our morality function just changed for an internal reason, say hormone level, it seems very characteristic of humans to invent a rationalization for the change. Can you give a particular example of a moral deliberation that you think is a candidate? This seems like it would be easier to discuss on an object level.
Wei Dai:
Take someone who talks (or reads) themselves into utilitarianism or egoism. This seems to have real consequences on their actions, for example: Presumably, when that writer "converted" to utilitarianism, the positive emotions of "rescuing lost puppies" or "personally volunteering" did not go away, but he chose to override those emotions. (Or if they did go away, that's a result of converting to utilitarianism, not the cause.) I don't think changes in hormone level could explain "converting" to utilitarianism or egoism, but I do leave open the more general possibility that all moral changes are essentially "internal". If someone could conclusively show that, I think the anti-realist position would be much stronger.
So a couple points: First, I'm reluctant to use Less Wrong posters as a primary data set because Less Wrong posters are far from neurotypical. A lot of hypotheses about autism involve... wait for it... amygdala abnormality. Second, I think it is very rare for people to change their behavior when they adopt a new normative theory. Note that all the more powerful arguments in normative theory involve thought experiments designed to evoke an emotional response. People usually adopt a normative theory because it does a good job explaining the emotional intuitions they already possess. Third, a realist account of changing moral beliefs is really metaphysically strange. Does anyone think we should be updating P(utilitarianism) based on evidence we gather? What would that evidence look like? If an anti-realist metaphysics gives us a natural account of what is really happening when we think we're responding to moral arguments then shouldn't anti-realism be the most plausible candidate?
Wei Dai:
This part of a previous reply to Richard Chappell seems relevant here also: In other words, suppose I think I'm someone who would change my behavior when I adopt a new normative theory. Is your meta-ethical position still relevant to me? If nothing else, my normative theory could change what I program into an FAI, in case I get the chance to do something like that. What does your metaethics imply for someone in this kind of situation? Should I, for example, not think too much about normative ethics, and when the time comes just program into the FAI whatever I feel like at that time? In case you don't have an answer now, do you think the anti-realist approach will eventually offer an answer? I think we currently don't have a realist account of changing moral beliefs that is metaphysically not strange. But given that metaphysics is overall still highly confusing and unsettled, I don't think this is a strong argument in favor of anti-realism. For example what is the metaphysics of mathematics, and how does that fit into a realist account of changing mathematical beliefs?
What the anti-realist theory of moral change says is that terminal values don't change in response to reasons or evidence. So if you have a new normative theory and a new set of behaviors, anti-realism predicts that either your map has changed or your terminal values changed internally and you took up a new normative theory as a rationalization of those new values. I wonder if you, or anyone else, can give me some example reasons for changing one's normative theory. I suspect that most if not all such reasons which actually lead to a behavior change will either involve evoking emotion or updating the map (i.e. something like "your normative theory ignores this class of suffering"). Good question that I could probably turn into a full post. Anti-realism doesn't get rid of normative ethics exactly, it just redefines what we mean by it. We're not looking for some theory that describes a set of facts about the world. Rather, we're trying to describe the moral subroutine in our utility function. In a sense, it deflates the normative project into something a lot like coherent extrapolated volition. Of course, anti-realism also constrains what methods we should expect to be successful in normative theory and what kinds of features we should expect an ideal normative theory to have. For example, since the morality function is a biological and cultural creation we shouldn't be surprised to find out that it is weirdly context dependent, kludgey or contradictory. We should also expect to uncover natural variance between utility functions. Anti-realism also suggests that descriptive moral psychology is a much more useful tool for forming an ideal normative theory than, say, abstract reasoning. I actually think an approach similar to the one in this post might clarify the mathematics question (I think mathematics could be thought of as a set of meta-truths about our map and the language we use to draw the map). In any case, it seems obvious to me that the
Wei Dai:
In your view, is there such a thing as the best rationalization of one's values, or is any rationalization as good as another? If there is a best rationalization, what are its properties? For example, should I try to make my normative theory fit my emotions as closely as possible, or also take simplicity and/or elegance into consideration? What if, as seems likely, I find out that the most straightforward translation of my emotions into a utility function gives a utility function that is based on a crazy ontology, and it's not clear how to translate my emotions into a utility function based on the true ontology of the world (or my current best guess as to the true ontology). What should I do then? The problem is, we do not have a utility function. If we want one, we have to construct it, which inevitably involves lots of "deliberative thinking". If the deliberative thinking module gets to have lots of say anyway, why can't it override the intuitive/emotional modules completely? Why does it have to take its cues from the emotional side, and merely "rationalize"? Or do you think it doesn't have to, but it should? Unfortunately, I don't see how descriptive moral psychology can help me to answer the above questions. Do you? Or does anti-realism offer any other ideas?
What counts as a virtue in any model depends on what you're using that model for. If you're chiefly concerned with accuracy then you want your normative theory to fit your values as much as possible. But maybe the most accurate model takes too long to run on your hardware; in that case you might prefer a simpler, more elegant model. Maybe there are hard limits to how accurate we can make such models and we will be willing to settle for good enough. Whatever our best ontology is, it will always have some loose analog in our evolved, folk ontology. So we should try our best to make it fit. There will always be weird edge cases that arise as our ontology improves and our circumstances diverge from our ancestors', e.g. "are fetuses in the class of things we should have empathy for?" Expecting evolution to have encoded an elegant set of principles in the true ontology is obviously crazy. There isn't much one can do about it if you want to preserve your values. You could decide that you care more about obeying a simple, elegant moral code than you do about your moral intuition/emotional response (perhaps because you have a weak or abnormal emotional response to begin with). Whether you should do one or the other is just a meta moral judgment and people will have different answers because the answer depends on their psychological disposition. But I think realizing that we aren't talking about facts but trying to describe what we value makes elegance and simplicity seem less important.
Wei Dai:
I dispute the assumption that my emotions represent my values. Since the part of me that has to construct a utility function (let's say for the purpose of building an FAI) is the deliberative thinking part, why shouldn't I (i.e., that part of me) dis-identify with my emotional side? Suppose I do, then there's no reason for me to rationalize "my" emotions (since I view them as just the emotions of a bunch of neurons that happen to be attached to me). Instead, I could try to figure out from abstract reasoning alone what I should value (falling back to nihilism if ultimately needed). According to anti-realism, this is just as valid a method of coming up with a normative theory as any other (that somebody might have the psychological disposition to choose), right? Alternatively, what if I think the above may be something I should do, but I'm not sure? Does anti-realism offer any help besides that it's "just a meta moral judgment and people will have different answers because the answer depends on their psychological disposition"? A superintelligent moral psychologist might tell me that there is one text file, which if I were to read it, would cause me to do what I described earlier, and another text file which would cause me to to choose to rationalize my emotions instead, and therefore I can't really be said to have an intrinsic psychological disposition in this matter. What does anti-realism say is my morality in that case?
Me too. There are people who consistently judge that their morality has "too little" motivational force, and there are people who perceive their morality to have "too much" motivational force. And there are people who deem themselves under-motivated by certain moral ideals and over-motivated by others. None of these would seem possible if moral beliefs simply echoed (projected) emotion. (One could, of course, object to one's past or anticipated future motivation, but not one's present; nor could the long-term averages disagree.)
See "weak internalism". There can still be competing motivational forces and non-moral emotions.
First, this scenario is just impossible. One cannot dis-identify from one's 'emotional side'. That's not a thing. If someone thinks they're doing that they've probably smuggled their emotions into their abstract reasons (see, for example, Kant). Second, it seems silly, even dumb, to give up on making moral judgments and become a nihilist just because you'd like there to be a way to determine moral principles from abstract reasoning alone. Most people are attached to their morality and would like to go on making judgments. If someone has such a strong psychological need to derive morality through abstract reasoning alone that they're just going to give up morality: so be it, I guess. But that would be a very not-normal person and not at all the kind of person I would want to have programming an FAI. But yes, ultimately my values enter into it and my values may not be everyone else's. So of course there is no fact of the matter about the "right" way to do something. Nevertheless, there are still no moral facts. You seem to be asking anti-realism to supply you with answers to normative questions. But what anti-realism tells you is that such questions don't have factual answers. I'm telling you what morality is. To me, the answer has some implications for FAI but anti-realism certainly doesn't answer questions that it says there aren't answers to.
Wei Dai:
In order to rationalize my emotions, I have to identify with them in the first place (as opposed to the emotions of my neighbor, say). Especially if I'm supposed to apply descriptive moral psychology, instead of just confabulating unreflectively based on whatever emotions I happen to feel at any given moment. But if I can identify with them, why can't I dis-identify from them? That doesn't stop me from trying. In fact moral psychology could be a great help in preventing such "contamination". If those questions don't have factual answers, then I could answer them any way I want, and not be wrong. On the other hand if they do have factual answers, then I better use my abstract reasoning skills to find out what those answers are. So why shouldn't I make realism the working assumption, if I'm even slightly uncertain that anti-realism is true? If that assumption turns out to be wrong, it doesn't matter anyway--whatever answers I get from using that assumption, including nihilism, still can't be wrong. (If I actually choose to make that assumption, then I must have a psychological disposition to make that assumption. So anti-realism would say that whatever normative theory I form under that assumption is my actual morality. Right?) Can you answer the last question in the grandparent comment, which was asking just this sort of question?
That's true as stated, but "not being wrong" isn't the only thing you care about. According to your current morality, those questions have moral answers, and you shouldn't answer them any way you want, because that could be evil.
Wei Dai:
When you say "you shouldn't answer them any way you want" are you merely expressing an emotional dissatisfaction, like Jack? If it's meant to be more than an expression of emotional dissatisfaction, I guess "should" means "what my current morality recommends" and "evil" means "against my current morality", but what do you mean by "current morality"? As far as I can tell, according to anti-realism, my current morality is whatever morality I have the psychological disposition to construct. So if I have the psychological disposition to construct it using my intellect alone (or any other way), how, according to anti-realism, could that be evil?
By "current morality" I mean that the current version of you may dislike some outcomes of your future moral deliberations if Omega shows them to you in advance. It's quite possible that you have a psychological disposition to eventually construct a moral system that the current version of you will find abhorrent. For an extreme test case, imagine that your long-term "psychological dispositions" are actually coming from a random number generator; that doesn't mean you cannot make any moral judgments today.
Wei Dai:
I agree it's quite possible. Suppose I do somehow find out that the current version of me emotionally dislikes the outcomes of my future moral deliberations. I still have to figure out what to do about that. Is there a normative fact about what I should do in that case? Or is there only a psychological disposition?
I think there's only a psychological disposition. If the future of your morals looked abhorrent enough to you, I guess you'd consider it moral to steer toward a different future. Ultimately we seem to be arguing about the meaning of the word "morality" inside your head. Why should that concept obey any simple laws, given that it's influenced by so many random factors inside and outside your head? Isn't that like trying to extrapolate the eternally true meaning of the word "paperclip" based on your visual recognition algorithms, which can also crash on hostile input? I appreciate your desire to find some math that could help answer moral questions that seem too difficult for our current morals. But I don't see how that's possible, because our current morals are very messy and don't seem to have any nice invariants.
Wei Dai:
Every concept is influenced by many random factors inside and outside my head, which does not rule out that some concepts can be simple. I've already given one possible way in which that concept can be simple: someone might be a strong deliberative thinker and decide to not base his morality on his emotions or other "random factors" unless he can determine that there's a normative fact that he should do so. Emotions are just emotions. They do not bind us, like a utility function binds an EU maximizer. We're free to pick a morality that is not based on our emotions. If we do have a utility function, it's one that we can't see at this point, and I see no strong reason to conclude that it must be complex. How do we know it's not more like trying to extrapolate the eternally true meaning of the word "triangle"? Thinking that humans have a "current morality" seems similar to a mistake that I was on the verge of making before, of thinking that humans have a "current decision theory" and therefore we can solve the FAI decision theory problem by finding out what our current decision theory is, and determining what it says we should program the FAI with. But in actuality, we don't have a current decision theory. Our "native" decision making mechanisms (the ones described in Luke's tutorial) can be overridden by our intellect, and no "current decision theory" governs that part of our brains. (A CDT theorist can be convinced to give up CDT, and not just for XDT, i.e., what a CDT agent would actually self-modify into.) So we have to solve that problem with "philosophy" and I think the situation with morality may be similar, since there is no apparent "current morality" that governs our intellect.
Even without going into the complexities of human minds: do you mean triangle in formal Euclidean geometry, or triangle in the actual spacetime we're living in? The latter concept can become arbitrarily complex as we discover new physics, and the former one is an approximation that's simple because it was selected for simplicity (being easy to use in measuring plots of land and such). Why do you expect the situation to be different for "morality"?
I'm not sure I actually understand what you mean by "dis-identify". So, Pascal's Wager? In any case, while there aren't wrong answers, there are still immoral ones. There is no fact of the matter about normative ethics - but there are still hypothetical AIs that do evil things. Which question exactly?
Then there is a fact of the matter about which answers are moral, and we might as well call those that aren't "incorrect".
It seems like a waste to overload the meaning of the word "incorrect" to also include such things as "Fuck off! That doesn't satisfy socially oriented aspects of my preferences. I wish to enforce different norms!" It really is useful to emphasize a carving of reality between 'false' and 'evil/bad/immoral'. Humans are notoriously bad at keeping the concepts distinct in their minds, and allowing 'incorrect' (and related words) to be used for normative claims encourages even more motivated confusion.
No. Moral properties don't exist. What I'm doing, per the post, when I say "There are immoral answers" is expressing an emotional dissatisfaction with certain answers.
Autism gets way over-emphasized here and elsewhere as a catch-all diagnosis for mental oddity. Schizotypality and obsessive-compulsive spectrum conditions are just as common near the far right of the rationalist ability curve. (Both of those are also associated with lots of pertinent abnormalities of the insula, anterior cingulate cortex, dorsolateral prefrontal cortex, et cetera. However, I've found that fMRI studies tend to be relatively meaningless and shouldn't be taken too seriously; it's not uncommon for them to contradict each other despite high claimed confidence.) I'm someone who "talks (or reads) myself into" new moral positions pretty regularly and thus could possibly be considered an interesting case study. I got an fMRI done recently and can probably persuade the researchers to give me a summary of their subsequent analysis. My brain registered absolutely no visible change during the two hours of various tasks I did while in the fMRI (though you could see my eyes moving around, so it was clearly working); the guy sounded somewhat surprised at this but said that things would show up once the data gets sent to the lab for analysis. I wonder if that's common. (At the time I thought, "maybe that's because I always feel like I'm being subjected to annoying trivial tests of my ability to jump through pointless hoops", but besides sounding cool that's probably not accurate.) Anyway, point is, I don't yet know what they found. (I'm not sure I'll ever be able to substantiate the following claim except by some day citing people who agree with me, 'cuz it's an awkward subject politically, but: I think the evidence clearly shows that strong aneurotypicality is necessary but not sufficient for being a strong rationalist. The more off-kilter your mind is, the more likely you are to just be crazy, but also the more likely you are to be a top tier rationalist, up to the point where the numbers get rarer than one per billion. There are only so many OCD-schizotypal IQ>160 folk.)
Can you talk about some of the arguments that led you to take new moral positions? Obviously I'm not interested in cases where new facts changed how you thought ethics should be applied, but cases where your 'terminal values' changed in response to something.
That's difficult because I don't really believe in 'terminal values', so everything looks like "new facts" that change how my "ethics" should be applied. (ETA: Like, falling in love with a new girl or a new piece of music can look like learning a new fact about the world. This perspective makes more sense after reading the rest of my comment.) Once you change your 'terminal values' enough, they stop looking so terminal and you start to get a really profound respect for moral uncertainty and the epistemic nature of shouldness. My morality is largely directed at understanding itself. So you could say that one of my 'terminal values' is 'thinking things through from first principles', but once you're that abstract and that meta it's unclear what it means for it to change rather than, say, just a change in emphasis relative to something else like 'going meta' or 'justification for values must be even better supported than justification for beliefs' or 'arbitrariness is bad'. So it's not obvious at which level of abstraction I should answer your question. Like, your beliefs get changed constantly, whereas methods only get changed during paradigm shifts. The thing is that once you move that pattern up a few levels of abstraction, where your simple belief update is equivalent to another person's paradigm shift, it gets hard to communicate in a natural way. Like, for the 'levels of organization' flavor of levels of abstraction, consider the difference between "I love Jane more than any other woman and would trade the world for her" and "I love humanity more than any other memeplex instantiation and would trade the multiverse for it". It is hard for those two values to communicate with each other in an intelligible way; if they enter into an economy with each other, it's like they'd be making completely different kinds of deals. Communication is difficult and the inferential distance here is way too big. To be honest I think that though efforts like this post are well-intentioned…
Like, there's a point at which object-level uncertainty looks like "should I act as if I am being judged by agents with imperfect knowledge of the context of my decisions, or should I act as if I am being judged by an omniscient agent, or should I act as if I need to appease both simultaneously, or ..."; you can go meta here in the abstract to answer this object-level moral problem, but one of my many points is that at this point it just looks nothing like 'is killing good or bad?' or 'should I choose for the Nazis to kill my son, or my daughter (considering they've forced this choice upon me)?'.
I remember that when I was like 11 years old I used to lie awake at night obsessing about variations on Sophie's choice problems. Those memories are significantly more vivid than my memories of living off ramen and potatoes with no electricity for a few months at around the same age. (I remember thinking that by far the worst part of this was the cold showers, though I still feel negative affect towards ramen (and eggs, which were also cheap).) I feel like that says something about my psychology.

Fantastic post. It goes a long way toward dissolving the question.

On the left we have the external world which generates the sensory inputs our agent uses to form beliefs.

Rhetorical question one: how is the singular term "agent" justified when there is a different configuration of molecules in the space the "agent" occupies from moment to moment? Wouldn't "agents" be better? What if the agent gets hit by a non-fatal brain-altering gamma ray burst or something? There's no natural quantitative point to say we have "an a…

I would say that this first phrasing actually provides more information than the second, in that it refers to the nature of the preference, which is relevant for predicting how the agent in question might change its preferences over time. Deliciousness tends to vary with supply, so the degree to which you prefer ice cream over gorilla assault is likely to increase when you're hungry or malnourished, and decrease when you're nutritionally sated. In fact, if you were force-fed chocolate ice cream and nothing else for long enough, the preference might even reverse.
Does it imply that all possible minds find the experience of eating ice cream more delicious than being beaten by gorillas with metal bars? For that would be untrue! I question the assumption of error theorists that statements like the first have such expansive meaning. I hadn't meant to change the variable you pointed out: the reason for the preference.
My understanding is that when someone talks about matters of preference, the default assumption is that they are referring to their own, or possibly the aggregate preferences of their peer group, in part because there is little or nothing that can be said about the aggregate preferences of all possible minds.

So far, if I understand all of the content of this post correctly, this seems like a much more elegant and well-written account of my own beliefs about morality than my previous clumsy attempt at it.

The above is the anti-realist position given in terms I think Less Wrong is comfortable with. It has the following things in its favor: it does not posit any 'queer' moral properties as having objective existence, and it fully accounts for the motivating, prescriptive aspect of moral language. It is psychologically sound, albeit simplistic. It is naturalistic while providing an account of why meta-ethics is so confusing to begin with. It explains our naive moral realism and dissolves the difference between the two prominent anti-realist camps.


Have you ever heard of Game Theory? Because I don't see why this counsel of despair couldn't crunch some math, figure out Pareto-optimal moral rules or laws or agreements, and run with those. If they know enough about their own moralities to be a "counsel of despair", they should know enough to put down rough estimates and start shutting up.
That presupposes something like utilitarianism. If something like deontology is true, then number-crunched solutions could involve unjustifiable violations of rights.
Could you humor me with an example? What would the universe look like if "deontology is true", versus a universe where "deontology is false"? Where is the distinction? I don't see how a deontological system would prevent number-crunching. You just number-crunch for a different target: find the Pareto optima that minimize the amount of rule-breaking and/or the importance of the rules broken.
What would it be like if utilitarianism is true? Or the axiom of choice? Or the continuum hypothesis? I don't see how a description of the neurology of moral reasoning tells you how to crunch the numbers -- which decision theory you need to use to implement which moral theory to resolve conflicts in the right way.
This statement seems meaningless to me. As in, "Utilitarianism is true" computes in my mind the exact same way as "Politics is true" or "Eggs are true". The term "utilitarianism" encompasses a broad range of philosophies, but seems more commonly used on LessWrong as meaning roughly some sort of mathematical model for computing the relative values of different situations based on certain value assumptions about the elements of those situations and a thingy called a "utility function". If this latter meaning is used, "utilitarianism is true" is a complete type error, just like "Blue is true" or "Eggs are loud". You can't say that the mathematical formulas and formalisms of utilitarianism are "true" or "false"; they're just formulas. You can't say that "x = 5" is "true" or "false". It's just a formula that doesn't connect to anything, and that "x" isn't related to anything physical - I just pinpointed "x" as a variable, "5" as a number, and then declared them equivalent for the purposes of the rest of this comment. This is also why I requested an example for deontology. To me, "deontology is true" sounds just like those examples. Neither "utilitarianism is true" nor "deontology is true" corresponds to a well-formed statement or sentence or proposition or whatever the "correct" philosophical term is for this.
Wait, seriously? That sounds like a gross misuse of terminology, since "utilitarianism" is an established term in philosophy that specifically talks about maximising some external aggregative value such as "total happiness" or "total pleasure minus suffering". Utility functions are a lot more general than that (i.e., they need not be utilitarian, and can be selfish, for example).
To an untrained reader, this would seem as if you'd just repeated in different words what I said ;) I don't see "utilitarianism" itself used all that often, to be honest. I've seen the phrase "in utilitarian fashion", usually referring more to my description than to the traditional meaning you've described. "Utility function", on the other hand, gets thrown around a lot with a very general meaning that amounts to "If there's something you'd prefer over maximizing your utility function, then that wasn't your real utility function". I think one important source of confusion is that LWers routinely use concepts that were popularized or even invented by prominent utilitarians (or so I'm guessing, since these concepts come up on the Wikipedia page for utilitarianism), and then some reader assumes they're using utilitarianism as a whole in their thinking, and the discussion drifts from "utility" and "utility function" to "in utilitarian fashion" and "utility is generally applicable" to "utilitarianism is true" and "(global, single-variable-per-population) utility is the only thing of moral value in the universe!".
Everywhere outside of LW, utilitarianism means a moral theory. It, or some specific variation of it, is therefore capable of being true or false. The point could as well have been made with some less mathematical moral theory. The truth or falsehood of moral theories doesn't have direct empirical consequences, any more than the truth or falsehood of abstract mathematical claims. Shut-up-and-calculate doesn't work here, because one is not using utilitarianism or any other moral theory for predicting what will happen; one is using it to plan what one will do. And can't I say that f, m and a mean something in f=ma? When you apply maths, the variables mean something. That's what application is. In U-ism, the input is happiness, or life-years, or something, and the output is a decision that is put into practice. I don't know why you would want to say you have an explanation of morality when you are an error theorist. I also don't know why you are an error theorist. U-ism and D-ology are rival answers to the question "what is the right way to resolve conflicts of interest?". I don't think that is a meaningless or unanswerable question. I don't see why anyone would want to pluck a formula out of the air, number-crunch using it, and then make it policy. Would you walk into a suicide booth because someone had calculated, without justifying the formula used, that you were a burden to society?
I think you are making a lot of assumptions about what I think and believe. I also think you're coming dangerously close to being perceived as a troll, at least by me. Oh! So that's what they're supposed to be? Good, then clearly neither - rejoice, people of the Earth, the answer has been found! Mathematically you literally cannot do better than Pareto-optimal choices. The real question, of course, is how to put meaningful numbers into the game theory formula: how to calculate the utility of the agents, how to determine the correct utility function for each agent. My answer to this is that there is already a set of utility functions implemented in each human's brain, and this set of utility functions can itself be considered a separate sub-game; if you find solutions to all the problems in this subgame you'll end up with a reflectively coherent, CEV-like ("ideal" from now on) utility function for this one human, and then that's the utility function you use for that agent in the big game board / decision tree / payoff matrix / what-have-you of moral dilemmas and conflicts of interest. So now what we need is better insight and more research into what these sets of utility functions look like, how close to completion they are, and how similar they are across different humans. Note that I've never even heard of a single human capable of knowing or always acting on their "ideal utility function". All sample humans I've ever seen also have other mechanisms interfering or taking over, which means that they don't always act even according to their current utility set, let alone their ideal one. I don't know what being an "error theorist" entails, but you claiming that I am one seems valid evidence that I am one so far, so sure. Whatever labels float your boat, as long as you aren't trying to sneak in connotations about me or committing the noncentral fallacy. (Notice that I accidentally snuck in the connotation that, if you are committing this fallacy, you…
Assuming that everything of interest can be quantified, that the quantities can be aggregated and compared, and assuming that anyone can take any amount of loss for the greater good: assuming all the stuff that utilitarians assume and that their opponents don't. No. You can't leap from "a reflectively coherent CEV-like [..] utility function for this one human" to a solution of conflicts of interest between agents. All you have is a set of exquisite models of individual interests, and no way of combining them, or trading them off. Strictly speaking, you are a metaethical error theorist. You think there is no meaning to the truth or falsehood of metaethical claims. Any two theories which have differing logical structure can have truth values, since they can be judged by coherence, etc., and any two theories which make different object-level predictions can likewise have truth values. U and D pass both criteria with flying colours. And if CEV is not a meaningful metaethical theory, why bother with it? If you can't say that the output of a grand CEV number crunch is what someone should actually do, what is the point? I know. And you determine the truth values of other theories (e.g. maths) non-empirically. Or you can use a mixture. How were you proposing to test CEV?
That is simply false. Two individual interests: Making paperclips and saving human lives. Prisoners' dilemma between the two. Is there any sort of theory of morality that will "solve" the problem or do better than number-crunching for Pareto optimality? Even things that cannot be quantified can be quantified. I can quantify non-quantifiable things with "1" and "0". Then I can count them. Then I can compare them: I'd rather have Unquantifiable-A than Unquantifiable-B, unless there's also Unquantifiable-C, so B < A < B+C. I can add any number of unquantifiables and/or unbreakable rules, and devise a numerical system that encodes all my comparative preferences in which higher numbers are better. Then I can use this to find numbers to put on my Prisoners Dilemma matrix or any other game-theoretic system and situation. Relevant claim from an earlier comment of mine, reworded: There does not exist any "objective", human-independent method of comparing and trading the values within human morality functions. Game Theory is the science of figuring out what to do in case you have different agents with incompatible utility functions. It provides solutions and formalisms both when comparisons between agents' payoffs are impossible and when they are possible. Isn't this exactly what you're looking for? All that's left is applied stuff - figuring out what exactly each individual cares about, which things all humans care about so that we can simplify some calculations, and so on. That's obviously the most time-consuming, research-intensive part, too.
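The number-crunching gestured at here can be made concrete. A minimal sketch (the game, the payoff numbers, and all function names are illustrative assumptions, not anything specified in the thread): enumerate the Pareto-optimal outcomes of a toy prisoner's dilemma between a paperclip-maximizer and a life-saver.

```python
# Toy two-agent game: payoffs are (paperclipper utility, life-saver utility)
# per pair of actions. The specific numbers are illustrative assumptions.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def pareto_optimal(outcomes):
    """Keep outcomes not weakly dominated: no other outcome is at least as
    good for both agents and strictly better for at least one."""
    def dominated(u, others):
        return any(v[0] >= u[0] and v[1] >= u[1] and v != u for v in others)
    return {k: u for k, u in outcomes.items()
            if not dominated(u, outcomes.values())}

print(pareto_optimal(payoffs))
```

Only mutual defection is dominated (by mutual cooperation), so three outcomes survive; choosing among the survivors is exactly the part that game theory alone doesn't settle, which is where the disagreement in this thread lives.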
Would you mind giving three examples of cases where Deontology being true gives different predictions than Consequentialism being true? This is another extension of the original question posed, which you've been dodging.
Deontology says you should push the fat man under the trolley, and various other examples that are well known in the literature. I have not been "dodging" it. The question seemed to frame the issue as one of being able to predict events that are observed passively. That misses the point on several levels. For one thing, it is not the case that empirical proof is the only kind of proof. For another, no moral theory "does" anything unless you act on it. And that includes CEV.
This would still be the case even if Deontology were false. This leads me to strongly suspect that whether or not it is true is a meaningless question. There is no test I can think of which would determine its veracity.
Actually, deontology says you should NOT push the fat man. Consequentialism says you should. It is hard to make sense of that. If a theory is correct, then what it states is correct. D and U make opposite recommendations about the fat man, so you cannot say that they are both indifferent with regard to your rather firm intuition about this case. Once again I will state: moral theories are tested by their ability to match moral intuition, and by their internal consistency, etc. Once again, I will ask: how would you test CEV?
Compute CEV. Then actually go learn and become this better person that was modeled to compute the CEV. See if you prefer the CEV or any other possible utility function. Asymptotic estimations could also be made IFF utility function spaces are continuous and can be mapped by similarity: if, as you learn more true things from a random sample and ordering of all possible true things you could learn, gain more brain computing power, and gain a better capacity for self-reflection, your preferences tend towards CEV-predicted preferences, then CEV is almost certainly true. D(x) and U(y) make opposite recommendations. x and y are different intuitions from different people, and these intuitions may or may not match the actual morality functions inside the brains of their proponents. I can find no measure of which recommendation is "correct" other than inside my own brain somewhere. This directly implies that it is "correct for Frank's Brain", not "correct universally" or "correct across all humans". Based on this reasoning, if I use my moral intuition to reason about the fat-man trolley problem using D() and find the conclusion correct within context, then D is correct for me, and the same goes for U(). So let's try it! My primary deontological rule: when there exist counterfactual possible futures where the expected number of deaths is lower than in all other possible futures, always take the course of action which leads to this fewer-expected-deaths future. (In simple words: Do Not Kill, where inaction that leads to death is considered Killing.) A train is going to hit five people. There is a fat man whom I can push down to save the five people with 90% probability. (Let's just assume I'm really good at quickly estimating this kind of physics within this thought experiment.) If I don't push the fat man, 5 people die with 99% probability (shit happens). If I push the fat man, 1 person dies with 99% probability (shit happens), and the 5 others still die with 10% probability…
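The expected-deaths arithmetic for the probabilities stated in the comment above can be checked directly (a short sketch; variable names are my own, and the 99%, 90%, and 10% figures are the ones given in the comment):

```python
# Expected deaths under the rule "take the course of action with the
# lowest expected number of deaths", using the comment's probabilities.
p_shit_happens = 0.99    # a death that is "supposed" to happen does happen
p_five_still_die = 0.10  # pushing saves the five with 90% probability

deaths_if_no_push = 5 * p_shit_happens                      # ≈ 4.95
deaths_if_push = 1 * p_shit_happens + 5 * p_five_still_die  # ≈ 1.49

print(deaths_if_no_push, deaths_if_push)
```

So the rule as stated, however deontologically it is phrased, recommends pushing: about 1.49 expected deaths versus about 4.95.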
I suspect that defining deontology as obeying the single rule "maximize utility" would be a non-central redefinition of the term, something most deontologists would find unacceptable.
The simplified "Do Not Kill" formulation sounds very much like most deontological rules I've heard of (AFAIK, "Do not kill" is a bread-and-butter standard deontological rule). It also happens to be a rule which I explicitly attempt to implement in my everyday life in exactly the format I've laid out - it's not just a toy example; this is actually my primary "deontological" rule as far as I can tell. And to me there is no difference between "Pull the trigger" and "Remain immobile" when both are extremely likely to lead to the death of someone. To me, both are "Kill". So if not pulling the trigger leads to one death, and pulling the trigger leads to three deaths, both options are horrible, but I still really prefer not pulling the trigger. So if for some inexplicable reason it's really certain that the fat man will save the workers and that there is no better solution (this is an extremely unlikely proposition, and by default I would not trust myself to have searched the whole space of possible options), then I would prefer pushing the fat man. If I considered standing by and watching people die because I did nothing not to be "Kill", then I would enforce that rule, and my utility function would also be different. And then I wouldn't push the fat man either way, whether I calculate it with utility functions or whether I follow the rule "Do Not Kill". I agree that it's non-central, but IME most "central" rules I've heard of are really simple wordings that obfuscate the complexity and black boxes of what is really going on in the human brain. At the base level, "do not kill" and "do not steal" are extremely complex. I trust that this part isn't controversial except in naive philosophical journals of armchair philosophizing.
I believe that this is where many deontologists would label you a consequentialist. There are certainly the complex edge cases, like minimum necessary self-defense and such, but in most scenarios the application of the rules is pretty simple. Moreover, "inaction = negative action" is quite non-standard. In fact, even if I believe that in your example pushing the fat man would be the "right thing to do", I do not alieve it (i.e. I would probably not do it if push came to shove, so to speak).
With all due respect to all parties involved, if that's how it works, I would label the respective hypothetical individuals who would label me that "a bunch of hypocrites". They're no less consequentialist, in my view, since they hide behind words the fact that they have to make the assumption that pulling a trigger will lead to the consequence of a bullet coming out of it, which will lead to the complex consequence of someone's life ending. I wish I could be more clear and specific, but it is difficult to discuss and argue all the concepts I have in mind as they are not all completely clear to me, and the level of emotional involvement I have in the whole topic of morality (as, I expect, do most people) along with the sheer amount of fun I'm having in here are certainly not helping mental clarity and debiasing. (Yes, I find discussions, arguments, debates etc. of this type quite fun, most of the time.) I'm not sure it's just a question of not alieving it. There are many good reasons not to believe evidence that this will work, even more good reasons to believe there is probably a better option, and many reasons why it could be extremely detrimental to you in the long term to push a fat man onto train tracks; if push comes to shove, not pushing might end up being the more rational action in a real-life situation similar to the thought experiment.
I'm quite aware of that. At this point, I simply must tap out. I'm at a loss at how else to explain what you seem to be consistently missing in my questions, but DaFranker is doing a very good job of it, so I'll just stop trying. Really? This is news to me. I guess Moore was right all along...
Error theorists are cognitivists. The sentence you quoted makes me think DaFranker is a noncognitivist (or a deflationary cognitivist); he is precisely asking you what it would mean for U or D to have truth values. When they are both trying to give accounts of what it would mean for something to be "right", it seems this question becomes pretty silly.
I'm not sure at all what those mean. If they mean that I think there don't exist any sentences about morality that can have truth values, that is false. "DaFranker finds it immoral to coat children in burning napalm" is true, with more confidence than I can reasonably express (I'm about as certain of this belief about my moral system as I am of things like 2 + 2 = 4). However, the sentence "It is immoral to coat children in burning napalm" returns an error for me. You could say I consider the function "isMoral?" to take as input a morality function, a current worldstate, and an action to be applied to this worldstate that one wants to evaluate as moral or not. A wrapper function "whichAreMoral?" exists to check more complicated scenarios with multiple possible actions and other fun things. See, if the "morality function" input parameter is omitted, the function just crashes. If the current worldstate is omitted, the morality function gets run with empty variables and 0s, which means that the whole thing is meaningless and not connected to anything in reality. Yes. In the example above, my "isMoral?" function can only return a truth-value when you give it inputs and run the algorithm. You can't look at the overall code defining the function and give it a truth-value. That's just completely meaningless. My current understanding of U and D is that they're fairly similar to this function. I agree somewhat. To use another code analogy: here I've stumbled upon the symbol "Right", and then I look back across the code for this discussion and I can't find any declarations or "Right = XXXXX" assignment operations. So clearly the other programmers are using different linked libraries that I don't have access to (or they forgot that "Right" doesn't have a declaration!).
An error theorist could agree with that. It isn't really a statement about morality; it is about belief. Consider "Eudoximander considers it prudent to refrain from rape, so as to avoid being torn apart by vengeful harpies". That isn't a true statement about harpies. And it doesn't matter what the morality function is? Any mapping from input to output will do? So is it meaningless that:

* some simulations do (not) correctly model the simulated system
* some commercial software does (not) fulfil a real-world business requirement
* some algorithms do (not) correctly compute mathematical functions
* some games are (not) entertaining
* some trading software does (not) return a profit

It's worth noting that natural language is highly contextual. We know in broad terms what it means to get a theory "right". That's "right" in one context. In this context we want a "right" theory of morality, that is, a theoretically-right theory of the morally-right.
Yes. I have a standard library in my own brain that determines what I think looks like a "good" or "useful" morality function, and I only send morality functions that I've approved into my "isMoral?" function. But "isMoral?" can take any properly-formatted function of the right type as input. And I have no idea yet what it is that makes certain morality functions look "good" or "useful" to me.

Sometimes, to try to clear things up, I try to recurse "isMoral?" on different parameters, e.g. "isMoral? defaultMoralFunc w1 (isMoral? newMoralFunc w1 BurnBabies)" would tell me whether my default morality function considers moral the evaluation and results of whether the new morality function considers burning babies moral or not.

I'm not sure what you mean by "it isn't really a statement about morality, it is about belief." Yes, I have the belief that I consider it immoral to coat children in napalm. This previous sentence is certainly a statement about my beliefs. "I consider it immoral to coat children in napalm" certainly sounds like a statement about my morality, though. "isMoral? DaFranker_IdealMoralFunction Universe coatChildInNapalm = False" would be a good way to put it. It is a true statement about my ideal moral function that it considers it better not to coat a child in burning napalm. The declaration and definition of "better" here are inside the source code of DaFranker_IdealMoralFunction, and I don't have access to that source code (it's probably not even written yet).

Also note that "isMoral? MoralIntuition w a" =/= "isMoral? [MoralFunctionsInBrain] w a" =/= "isMoral? DominantMoralFunctionInBrain w a" =/= "isMoral? CurrentMaxMoralFunctionInBrain w a" =/= "isMoral? IdealMoralFunction w a". In other words, when one thinks of whether or not to coat a child in burning napalm, many functions are executed in the brain, some of them may disagree on the betterness of some details of the situation, and one of those functions usually takes the lead and becomes dominant.
By comparing them to abstract formulas, which don't have truth values... as opposed to equations, which do, and to applied maths, which does, and theories, which do... I have no idea why you would say that. Belief in objective morality is debatable, but not silly in the way belief in unicorns is. The question of what is right is also about the most important question there is.
My main point is that I haven't the slightest clue as to what kind of applied math or equations U and D could possibly be equivalent to. That's why I was asking you, since you seem to know.
I am not assuming they have to be implemented mathematically. And I thought your problem was that you didn't have a procedure for identifying correct theories of morality?
I'll concede I may have misinterpreted them. I guess we shall wait and see what DF has to say about this. I never said belief in "objective morality" was silly. I said that trying to decide whether to use U or D by asking "which one of these is the right way to resolve conflicts of interest?" when accepting one or the other necessarily changes variables in what you mean by the word 'right' and also, maybe even, the word 'resolve', sounds silly.
That would be the case if "right way" meant "morally-right way". But metaethical theories aren't compared by object-level moral rightness, exactly. They can be compared by coherence, practicality, etc. If metaethics were just obviously unsolvable, someone would have noticed.
That's just how I understand that word. 'Right for me to do' and 'moral for me to do' refer to the same things, to me. What differs in your understanding of the terms? Remind me what it would look like for metaethics to be solved?
e.g. mugging an old lady is the instrumentally-right way of scoring my next hit of heroin, but it isn't morally-right. Unsolved-at-time-T doesn't mean unsolvable. Ask Andrew Wiles.
Just like moving queen to E3 is instrumentally-right when playing chess, but not morally right. The difference is that in the chess and heroin examples, a specific reference point is being explicitly plucked out of thought-space (Right::Chess; Right::Scoring my next hit) which doesn't refer to me at all. Mugging an old woman may or may not be moral, but deciding that solely based on whether or not it helps me score heroin is a category error. I'm no good at math, but it's my understanding that there was an idea of what it would look like for someone to solve Fermat's problem even before someone actually did so. I'm skeptical that 'solving metaethics' is similar in this respect.
You seem to have interpreted that the wrong way round. The point was that there are different and incompatible notions of "right". Hence "the right theory of what is right to do" is not circular, so long as the two "rights" mean different things. Which they do (theoretical correctness and moral obligation, respectively). No one knows what a good explanation looks like? But then why even bother with things like CEV, if we can't say what they are for?
I think you've just repeated his question.

For the definition of "moral" that includes how people tend to use the term, this seems about right. However, the word "morality" is used in many different ways. For example, the "morality" I think about when I am legitimately wondering what action I should take - and not letting just an emotional reaction guide my actions - is in the ideal map (it's my preferences).

If your preferences were different (say you had a genuine preference to murder innocent people) would that change what is moral?
Nope. Define original preferences as moral1 and murder preferences as moral2. I'm asking what is moral1 to do, and that doesn't change if my preferences change. What changes is the question I ask (what is moral2 to do?).
Okay, then your morality isn't different from what I outlined here. You're just maybe less emotional about it (I probably overemphasized the matter of emotions in the post). When evaluating morality in the counterfactual, a realist would have to look at facts in that world. You project your internal preferences onto any proposed counterfactuals. Put another way: I'm guessing you don't think your preferences justify a claim about whether or not a given action is moral. Rather, what you mean when you say some action is immoral is that the action is against (some subset of) your preferences. Does that sound right?
That does sound right, but moral realism could still be true - actually, the term "moral" is meaningless in our example. Moral1 realism and moral2 realism are what's at stake. Consider two scenarios - scenario1 and scenario2. In scenario1 my preferences are moral1 and in scenario2 my preferences are moral2. In scenario1, moral1 exists in the ideal map - my preferences are an instantiation of moral1 - so moral1 realism is true. Moral2 realism may or may not be true, depending on whether some other agent has those preferences. Similarly, in scenario2, moral2 realism is true and moral1 realism may or may not be true.
As written this isn't clear enough for me to make sense of.
Let me try to be more clear then. Your definition of moral realism is: In our toy example, there is morality1 and morality2. Morality1 is my current preferences; morality2 is my current preferences + a preference for murder. So is moral1 realism true? What about moral2 realism? Consider scenario1. Under this scenario, my preferences are my actual current preferences, i.e. they are morality1. Now we return to the questions. Is moral1 realism true? Well, my preferences are a subset of the ideal map and in this scenario my preferences are the same as morality1, so yes, moral1 realism is true. Is moral2 realism true? My preferences are not the same as morality2, but someone else's preferences could be, so we don't have enough information to decide this statement. Scenario2, where my preferences are the same as morality2, is analogous (moral2 realism is true, moral1 realism is undecidable without further information). Is that clearer?
It sounds like you are arguing for meta-ethical relativism where whether or not a moral judgment is true or false is contingent on the preferences of the speaker making the moral judgment. Is that right?
Not really. Whether a moral judgement is true or false is contingent on the definition of moral. If I say "what you're doing is bad!" I probably mean "it's not moral1", where moral1 is my preferences. If the hypothetical-murder-preferring-me says "what you're doing is bad!" this version of me probably means "it's not moral2", where moral2 is those preferences. But those aren't the only definitions I could be using, and in fact it's often ambiguous which definition a given speaker is using (even to the speaker). For example, in both cases in the above paragraph, when I say "what you're doing is bad" I could simply mean "what you're doing goes against the traditional morality taught and/or practiced in this region" or "what you're doing makes me have a negative emotional reaction."

To answer the relativism question, you have to pin down the definition of moral. For example, suppose by "moral" we simply mean Clippy's utility function, i.e. moral = paperclip maximizing. Now suppose Clippy says "melting down 2 tons of paper clips is immoral." Is Clippy right? Of course he is; that's the definition of immoral. Now suppose I say the sentence. Is it still true? It sure is, since we pinned down the meaning of moral beforehand. If we substitute my own (much more complicated) utility function for Clippy's as the definition of moral in this example, it becomes harder to evaluate whether or not something is moral, but the correct answer still won't depend on who's asking the question, since "moral" is a rigid designator.
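A sketch of the rigid-designator point (the `clippy_utility` function and the action dictionaries are invented for illustration; they just score paperclip counts):

```python
# Once "moral" rigidly designates one particular function, the
# truth-value of a moral claim no longer depends on the speaker.

def clippy_utility(action):
    # Paperclip maximizing: an action is good iff it adds paperclips.
    return action["paperclip_delta"] > 0

moral = clippy_utility  # pin down the definition of "moral" beforehand

melt = {"name": "melt 2 tons of paperclips", "paperclip_delta": -2_000_000}

def judge(speaker, action):
    # The speaker argument is deliberately ignored: with the
    # definition pinned, who asks makes no difference.
    return moral(action)

print(judge("Clippy", melt))   # False -- immoral by the pinned definition
print(judge("a human", melt))  # False -- same verdict, different speaker
```

Swapping in a much more complicated utility function for `moral` makes evaluation harder in practice, but `judge` still never consults `speaker`.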
Of course. But it just doesn't solve anything to recognize that 'moral' could be defined any way you like. There are actual social and linguistic facts about how moral language functions. The problem of meta-ethics, essentially, is that those facts happen to be paradoxical. Saying "my definition of moral is just my preferences" doesn't solve the problem, because that isn't anyone else's definition of moral and most people would not recognize it as a reasonable definition of moral. The metaethical answer consistent with that position might be "everyone (or lots of people) means different things by moral". That position is anti-realist, just instead of being skeptical about the metaphysics you're skeptical of the linguistics: you don't think there is a shared meaning for the word. As an aside, I find that position less plausible than other versions of anti-realism (people seem to agree on the meaning of moral but disagree on which actions, persons and circumstances are part of the moral and immoral sets).
The first part of our disagreement is because either you're implicitly using a different definition of moral antirealism than the one in your post, or I just don't understand your definition as you intended it. Whatever the case may be, let's set that aside - I'm pretty sure I know what you mean now and concede that under your definition - which is reasonable - moral antirealism is true even for the way I was using the term. I'm not saying that there aren't limits on what definitions of "moral" are reasonable, but the fact remains that the term is used in different ways by different people at different times - or at least it's not obvious that they mean the same thing by moral. Your post goes a long way towards explaining some of those uses, but not all.

Well, if you think you have found a moral paradox, it may just be because there are two inconsistent definitions of "moral" in play. This is often the case with philosophical paradoxes. But more to the point, I'm not sure whether or not I disagree with you here because I don't know what paradoxes you are talking about.

As an aside, I'd say they disagree on both. They often have different definitions in mind, and even when they have the same definition, it isn't always clear whether something is "moral" or "immoral." The latter isn't necessarily an antirealist situation - both parties may be using the same definition and morality could exist in the ideal map (in the sense that you want), yet it may be difficult in practice to determine whether or not something is moral.
Maybe. I was aiming for dominant usage, but I think dominant usage in the general public turned out not to be dominant usage here, which is part of why the post wasn't all that popular :-) To be clear: it's not moral paradoxes I'm worried about. I've said nothing and have few opinions about normative ethics. The paradoxical nature of moral language is that it has fact-like aspects and non-fact-like aspects. The challenge for the moral realist is to explain how it gets its non-fact-like aspects. And the challenge for the moral anti-realist is to explain how it gets its fact-like aspects. That's what I was trying to do in the post. I don't think there are common uses of moral language which don't involve both fact-like and non-fact-like aspects.

Fact-like: we refer to moral claims as being true or false; grammatically they are statements; they can figure in logical proofs; changing physical conditions can change moral judgments (you can fill in more).

Non-fact-like: categorically motivating (for undamaged brains at least); normative/directional like a command; epistemologically mysterious; in some accounts metaphysically mysterious; subject of unresolvable contention (you can fill in more).
You are so much better than me at saying what I think. =O

What does the arrow "Projected" mean? Why isn't there another arrow "Beliefs" to "The Map"?

The entire green circle on the right is just a zoomed-in version of the green circle on the left. The 'projected' arrow is just what the projectivist thesis is (third subsection). The idea is that our moral beliefs are formed by a basically illegitimate mechanism: projecting our utility function onto the external world. There isn't an arrow from "beliefs" to "the Map" because those are the same thing.
Good clarification, I now am pretty sure I understand how our beliefs relate. I am suggesting that our moral beliefs are formed by a totally legitimate mechanism by projecting our utility function onto the external world. If X is a zoomed-in version of Y, you can't project Z into X. Either Z is part of Y, in which case it's part of X, or it isn't part of Y, in which case it's not part of X.
I'm pretty confused by this comment, you'll have to clarify. If our moral beliefs are formed by projecting our utility function onto the external world I'm unsure of what you could mean by calling this process "legitimate". Certainly it doesn't seem likely to be a way to form accurate beliefs about the world. Z is projected into X/Y. It's just too small to see in Y and I didn't think more arrows would clarify things.
"projected onto the external world" isn't really correct. Moral beliefs don't, pretheoretically, feel like specific beliefs about the external world. You can convince someone that moral beliefs are beliefs about God or happiness or paperclips or whatever, but it's not what people naturally believe. What I want to suggest is that moral beliefs ARE your utility function (and to the extent that your brain doesn't have a utility function, they're the closest approximation of one). Otherwise, in the diagram, there would be two identical circles in your brain, one labeled "moral beliefs" and the other labeled "utility function". Thus, it is perfectly legitimate for your moral beliefs to be your utility function.
Often pre-theoretic moral beliefs are entities unto themselves, something like laws of nature. People routinely think of morality as consisting of universal facts which can be debated. That's what makes them "beliefs". As far as I know, nearly everyone is a pre-theoretic moral realist. Of course, moral beliefs might not feel quite the same as, say, beliefs about whether or not something is a dog. But they can still be beliefs. Recall: a utility function doesn't constrain future experiences. That's the reason for the conceptual distinction between beliefs and preferences. The projection of our utility function onto our map of the external world (which turns the utility function into a set of beliefs) is illegitimate because it isn't a reliable way of forming accurate beliefs that correspond to the territory. If you want to just use the word 'belief' to also describe moral principles, that seems okay as long as you don't confuse them with beliefs proper. In any case, it sounds like we're both anti-realists.
The reason I want to do this is because things like logically manipulating moral beliefs / preferences in conjunction with factual beliefs / anticipations makes sense. But I think this is our disagreement: You say it's illegitimate because it doesn't constrain future experiences. If it constrained future experiences incorrectly, I would agree that it was illegitimate. If it was trying to constrain future experiences and failing, that would also be illegitimate. But the point of morality is not to constrain our experiences. The point of morality is to constrain our actions. And it does that quite well.
Agreed! But that means morality doesn't consist in proper beliefs! You can still use belief language if you like, I do.
And doing so is legitimate and not illegitimate.
Sure. What is illegitimate is not the language but thinking that one's morality consists in proper beliefs.

Well done. Here's a way to bridge the is-ought distinction: It's possible that investigating our map of our morality — that is, the littlest blue circle in your diagram — will yield a moral argument that we find compelling.

∃x(x ∈ IM) & (x = M)

Shouldn't x be a subset of IM rather than an element?

Also, do you somewhere define what the ideal map is?

To the first, I just changed it. To the second, I was attempting to do that in the first section, though obviously not formally. What I mean by it is the map that corresponds to the territory at the ideal limit Einstein is talking about.
Well, yes, but is IM the set of sentences compatible with experimental results, or the set of sentences whose negation is incompatible with the results? What about sentences speaking about abstract concepts, not directly referring to experimental results?
This depends on your philosophy of science and what science ultimately decides exists. The exact answer doesn't really matter for the purposes of the post, and it's a huge question that I probably can't answer adequately in a comment. Basically, it's what would be in a universal theory of science. I'm not constraining it to eliminative reductionism, i.e. I have no problem including truths of economics and biology in IM in addition to truths about physics. Certainly the conjunction of 'sentences compatible with experimental results' and 'sentences whose negation is incompatible with experimental results' is too broad (are those sets different?). We would want to trim that set with criteria like generality and parsimony.
My concern was mainly with propositions which aren't tied to observation, albeit being true in some sense. Mathematical truths are one example, moral truths may be another. The language is presumably able to express any fact about the territory, but there is no clear reason that any expression of language represents a fact about the territory. The language may be broader. Therefore, seems a bit unwarranted. Morality could be only in the map.
Now I'm confused. Beliefs that don't correspond to the territory are what we call "wrong".
Is "this sentence is true" a wrong belief?
It's a classic problem case. I think its semantic function calls itself, and so it is meaningless. See here.
I understand why people might think this was a snarky and downvote worthy comment with an obvious answer, but I greatly appreciated this comment and upvoted it. That is to say, it fits a pattern for questions the answers of which are obvious to others, though the answer was not obvious to me. What's worse, at first thought, within five seconds of thinking about it, the answer seemed obvious to me until I thought about it a bit more. Even though I have tentatively settled upon an answer basically the same as the one I thought up in the first five seconds, I believe that that first thought was insufficiently founded, grounded, and justified until I thought about it.
Just to clarify, I wanted to point out that sentences are not in the same category as beliefs (which in local parlance are anticipations of observations). There can be grammatically correct sentences which don't constrain anticipations at all, and not only the self-referential cases. All mathematical statements somehow fall in this category; just imagine what observations one anticipates from believing "the empty set is an empty set". (The thing is a little complicated with mathematical statements because, at least for the more complicated theorems, believing in them causes the anticipation of being able to derive them using valid inference rules.) Mathematical statements are sometimes (often) useful for deriving propositions about the external world, but themselves don't refer to it. Without further analysing morality, it seems plausible that morality defined as a system of propositions works similarly to math (whatever standards of morality are chosen). The question is whether this should be included in the ideal map. To pursue the analogy with customary geographic maps, mathematical statements would correspond to regularities of the map itself, such as "if three contour lines make nested closed circles, the middle one corresponds to a height between the heights of the outermost one and the innermost one". Such facts aren't needed to read the map and are not written there. If my remark seemed snarky, I apologise.
What's the distinction between the two? (Useful for deriving propositions about smth vs. referring.)
The derived "propositions about" are distinct from the mathematical statements per se. For example:

* Mathematical statement: "2+2 = 4" (nothing more than a theorem in a formal system; no inherent reference to the external world).
* Statement about the world: "by the correspondence between mathematical statements and statements about the world given by the particular model we are using, the mathematical statement '2+2=4' predicts that combining two apples with two apples will yield four apples".
If you build an inference system that outputs statements it proves, or lights up a green (red) light when it proves (disproves) some statement, then your anticipations about what happens should be controlled by the mathematical facts that the inference system reasons about. (More easily, you may find that mathematicians agree with correct statements and disagree with incorrect ones, and you can predict agreement/disagreement from knowledge about correctness.)
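A toy sketch of such an inference system, restricted to simple arithmetic equalities (`check` is an invented stand-in for a real prover, not a serious implementation):

```python
# An "inference system" that lights green when it proves a statement
# and red when it refutes it. Your anticipation of which light comes
# on is controlled by the mathematical fact being checked.

def check(statement):
    """Return 'green' if the equality holds, 'red' if it doesn't."""
    lhs, rhs = statement.split("=")
    # eval is fine for this toy; a real prover would parse and derive.
    return "green" if eval(lhs) == int(rhs) else "red"

# Believing that 2+2=4 is true lets you anticipate the green light;
# believing that 2+2=5 is false lets you anticipate the red one.
print(check("2+2=4"))  # green
print(check("2+2=5"))  # red
```

The physical lights are observations, so in this indirect way the mathematical facts do end up constraining anticipation.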
That's why I have said "[t]he thing is a little complicated with mathematical statements because, at least for the more complicated theorems, believing in them causes the anticipation of being able to derive them using valid inference rules". The latter are sentences which directly mention the object ("the planet moves along an elliptic trajectory") while the former are statements that don't ("an ellipse is a closed curve"). Perhaps a better distinction would be based on the amount of processing between the statement and sensory inputs: on the lowest level we'll find sentences which directly speak about concrete anticipations ("if I push the switch, I will see light"), while higher-level statements would contain abstract words defined in terms of more primitive notions. Such statements could be unpacked to gain a lower-level description by writing out the definitions explicitly ("the crystal has O_h symmetry" into "if I turn the crystal 90 degrees, it will look the same, and if I turn it 180 degrees..."). If a statement can be unpacked in a finite number of recursions down to the lowest level containing no abstractions, I would say it refers to the external world.
This doesn't look to me like a special condition to be excused, but as a clear demonstration that mathematical truths can and do constrain anticipation.
"Directly mentioning" passes the buck of "referring", you can't mention a planet directly, the planet itself is not part of the sentence. I don't see how to make sense of a statement being "unpacked in finite number of recursions down to the lowest level containing no abstractions" (what's "no abstractions", what's "unpacking", "recursions"?). (I understand the distinction between how the phrases are commonly used, but there doesn't appear to be any fundamental or qualitative distinction.)
There has to be a definition of base terms standing for primitive actions, observations and grammatical words (perhaps by a list; determining what to put on the list would ideally require some experimental research on human cognition). An "abstraction" is then a word not belonging to the base language, defined to be identical to some phrase (possibly infinitely long) and used as an abbreviation thereof. By "unpacking" I mean replacing all abstractions by their definitions.
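A sketch of this unpacking procedure (the base vocabulary, the single definition, and the depth cutoff are invented for illustration):

```python
# Base terms: primitive actions, observations and grammatical words.
BASE = {"if", "I", "push", "the", "switch", "will", "see", "light"}

# An abstraction is an abbreviation of a phrase (here, of base terms).
DEFINITIONS = {
    "lit-by-me": "if I push the switch I will see light",
}

def unpack(sentence, depth=0, max_depth=10):
    """Recursively expand abstractions down to the base language."""
    if depth > max_depth:
        # Stands in for "does not unpack in a finite number of recursions",
        # i.e. the sentence fails to refer to the external world.
        raise RecursionError("does not bottom out in base terms")
    words = sentence.split()
    if all(w in BASE for w in words):
        return sentence  # lowest level reached: no abstractions left
    expanded = " ".join(DEFINITIONS.get(w, w) for w in words)
    return unpack(expanded, depth + 1, max_depth)

print(unpack("lit-by-me"))  # if I push the switch I will see light
```

A real version would need the empirically determined base list and would have to handle abstractions defined in terms of other abstractions, but the recursion scheme is the same.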

Here's what a moral realist might say:

  1. The 'morality' module within the utility function is pretty similar across all humans.

  2. Given that our evolved morality is in part used to solve cooperation and other game theoretic problems, a rational psychopath might want to self-modify to care about 'morality'.

I would expect a rational psychopath to instead try to study game theory, and try to beat human players that will employ predictable strategies that can be exploited.
If there's a long-term effective strategy for cheating--one that doesn't involve the cheater being detected and punished--why isn't everyone using it?
Because we evolved to care about things like fairness in an environment where everyone knew each other, and if you cheated someone, everyone else in the village knew it. And, modern humans still employ their evolved instincts. Therefore, agents who lack moral concerns can exploit the fact that humans are using intuitions that were optimized to work in a different situation. For instance, they can avoid doing things so heinous that society as a whole tries to hunt them down, and once they have exploited someone, they can just move.

I see a few broken image links, e.g. in "Moral Realism: x(x IM) & (x = M)" there is a broken image graphic.

That's actually your browser, but I'll turn it into HTML in a second.