My apologies if this doesn't deserve a Discussion post, but if this hasn't been addressed anywhere then it's clearly an important issue.

There have been many defences of consequentialism against deontology, including quite a few on this site. What I haven't seen, however, is any demonstration of how deontology is incompatible with the ideas in Elizier's Metaethics sequence- as far as I can tell, a deontologist could agree with just about everything in the Sequences.

Said deontologist would argue that, to the extent a human universal morality can exist through generalised moral instincts, said instincts tend to be deontological (as supported by scientific studies- a study of the trolley dilemma vs. the 'fat man' variant showed that people would divert the trolley but not push the fat man). This would be their argument against the consequentialist, whom they could accuse of wanting a consequentialist system while ignoring the moral instincts at the basis of their own speculations.

I'm not completely sure about this, but if I have indeed misunderstood, it seems an important enough misunderstanding to deserve clearing up.

Take some form of consequentialism, precompute a set of actions which cover 90% of the common situations, call them rules, and you get a deontology (like the ten commandments). That works fine unless you run into the 10% not covered by the shortcuts, and until the world changes significantly enough that what used to be 90% coverage becomes more like 50% or even 20%.
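A minimal sketch of this "precompute rules from consequences" idea, assuming a toy utility table over hypothetical situations; the situation names, actions, and numbers are invented purely for illustration:

```python
from collections import Counter

def utility(situation, action):
    # Hypothetical consequentialist scoring of (situation, action) pairs.
    table = {
        ("found_lost_wallet", "return_it"): 10, ("found_lost_wallet", "keep_it"): -20,
        ("asked_for_help", "help"): 5, ("asked_for_help", "ignore"): -5,
        ("runaway_trolley", "divert"): 4, ("runaway_trolley", "do_nothing"): -400,
    }
    return table.get((situation, action), 0)

def precompute_rules(observed_situations, actions, coverage=0.9):
    """Freeze the best action for the most frequent situations (~`coverage` of cases) into rules."""
    counts = Counter(observed_situations)
    total = sum(counts.values())
    rules, covered = {}, 0
    for situation, n in counts.most_common():
        if covered / total >= coverage:
            break  # the uncovered tail is left to case-by-case judgement
        rules[situation] = max(actions, key=lambda a: utility(situation, a))
        covered += n
    return rules

rules = precompute_rules(
    observed_situations=["found_lost_wallet"] * 60 + ["asked_for_help"] * 35 + ["runaway_trolley"] * 5,
    actions=["return_it", "keep_it", "help", "ignore", "divert", "do_nothing"],
)
print(rules)  # {'found_lost_wallet': 'return_it', 'asked_for_help': 'help'} -- the rare case gets no rule
```

The rare "runaway_trolley" case is exactly the uncovered 10% the comment points at: the rules are silent there, and they silently go stale if the frequency distribution shifts.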

4Carinthium10y
The trouble with this is that it contradicts the Reason for Being Moral In the First Place, as outlined in Elizier's metaethics. Said reason effectively comes down to obeying moral instincts, after all. WHY said morality came about is irrelevant. What's important is that it's there.
1shminux10y
Presumably you mean the post The Bedrock of Morality: Arbitrary?, which is apparently some form of moral realism, since it refuses to attach e- prefix (for Eliezer) or even h- prefix to his small subset of possible h-shoulds. I have not been able to understand why he does it, and I don't see what h-good dropping it does. But then I have not put a comparable effort into learning or developing metaethics. I disagree with that. Morality changes from epoch to epoch, from place to place and from subculture to subculture. There is very little "there", in our instincts. Some vague feelings of attachment, maybe. Most humans can be easily brought up with almost any morality.
-1Carinthium10y
There are two separate questions here, then- one of your metaethics and its defensibility, and one of Elizier's.
Your Metaethics: Request clarification before we proceed.
Elizier's: Elizier's metaethics has a problem that your defence cannot solve, and which, I think you would agree, is not addressed by your argument. I'm more referring to The Moral Void, and am emphasising that. This shows that the fact that morality changes from culture to culture is almost irrelevant- morality constantly changing does not counter any of Elizier's arguments there.
0shminux10y
Sorry, no idea what you are on about. Must be some inferential gap.
0Carinthium10y
On point 1: Why are you moral? The whole argument I've been trying to make in this thread is that Elizier's moral ideas, though he doesn't realise it, lead naturally to deontology. You seem to disagree with Elizier's metaethics, so yours becomes a point of curiosity.
Elizier's: Clearly your post doesn't work as a defence of Elizier's metaethics, as it was not meant to be, for the most part. But the last paragraph is an exception. I'll try a different approach. There is always A morality, even if said morality changes. However, "The Moral Void" and its basic argument still works because people want to be moral even if there are no universally compelling arguments.
1shminux10y
What do you mean by moral? Why my behavior largely conforms to the societal norms? Because that is how I was brought up and it makes living among humans easier. Or are you asking why the societal norms are what they are?
0Carinthium10y
There is a difference between descriptive and prescriptive. Descriptive represents the reasons why we act. Prescriptive represents the reasons why we SHOULD act. Or in this case, having reflected upon the issue, why you want to be moral instead of, say, trying to make yourself as amoral as possible. Why would you reject that sort of course and instead try to be moral? That's the proper basis of a metaethical theory, from which ethical choices are made.
1shminux10y
I thought I answered that. Descriptive: I was brought up reasonably moral (= conforming to the societal norms), so it's a habit and a thought pattern. Prescriptive: it makes living among humans easier (rudimentary consequentialism). Rejecting these norms and still having a life I would enjoy would be hard for me and would require rewiring my self-esteem to no longer be linked to being a good member of society. Habits are hard to break, and in this case it's not worth it. I don't understand what the fuss is about.
-2Carinthium10y
You're a rationalist- you've already had some experience at self-rewiring. Plus, if you're a decent liar (and that's not so hard- there's a strong enough correlation between lying and self-confidence that you can trigger the former through the latter, plus you're intelligent), then you can either use strategic lies to get up the career ladder or skive off social responsibilities and become a hedonist.
-2shminux10y
OK, one last reply, since we are not getting anywhere and I keep repeating myself: it does not pay for me to attempt to become "amoral" to get happier. See also this quote. Tapping out.

Possible consequentialist response: our instincts are inconsistent: i.e., our instinctive preferences are intransitive, violate independence of irrelevant alternatives, and pretty much don't obey any "nice" property you might ask for. So trying to ground one's ethics entirely in moral instinct is doomed to failure.

There's a good analogy here to behavioral economics vs. utility maximization theory. For much the same reason that people who accept gambles based on their intuitions become money pumps (see: the entire field of behavioral econom... (read more)

0lmm10y
I think this thought is worth pursuing in more concrete detail. If I prefer certainly saving 400 people to a 0.8 chance of saving 500 people, and prefer a 0.2 chance of killing 500 people to certainly killing 100 people, what crazy things can a competing agent get me to endorse? Can you get me to something that would be obviously wrong even deontologically, in the same way that losing all my money is obviously bad even behavioral-economically?
0gjm10y
If you have those preferences, then presumably small enough changes to the competing options in each case won't change which outcome you prefer. And then we get this:
Competing Agent: Hey, lmm. I hear you prefer certainly saving 399 people to a 0.8 chance of saving 500 people. Is that right?
lmm: Yup.
Competing Agent: Cool. It just so happens that there's a village near here where there are 500 people in danger, and at the moment we're planning to do something that will save them 80% of the time but otherwise let them all die. But there's something else we could do that will save 399 of them for sure, though unfortunately the rest won't make it. Shall we do it?
lmm: Yes.
Competing Agent: OK, done. Oh, now, I realise I have to tell you something else. There's this village where 100 people are going to die (aside: 101, actually, but that's even worse, right?) because of a dubious choice someone made. I hear you prefer a 20% chance of killing 499 people to the certainty of killing 100 people; is that right?
lmm: Yes, it is.
Competing Agent: Right, then I'll get there right away and make sure they choose the 20% chance instead.

At this point, you have gone from losing 500 people with p=0.2 and saving them with p=0.8, to losing one person for sure and then losing the rest with p=0.2 and saving them with p=0.8. Oops. [EDITED to clarify what's going on at one point.]
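A quick check of the arithmetic in the dialogue above, as a minimal sketch (the lottery representation and plan names are editorial assumptions, not gjm's):

```python
def expected_loss(lottery):
    """Expected number of deaths for a list of (probability, deaths) outcomes."""
    return sum(p * deaths for p, deaths in lottery)

# Original plan for the 500 villagers: all saved with p=0.8, all lost with p=0.2.
original = [(0.8, 0), (0.2, 500)]

# After swap 1 (saving frame): 399 saved for sure, 101 lost for sure.
after_swap_1 = [(1.0, 101)]

# After swap 2 (killing frame): one person lost for sure, the other 499 put back
# on the same 20% gamble as before.
after_swap_2 = [(0.8, 1), (0.2, 1 + 499)]

for name, plan in [("original", original), ("after swap 1", after_swap_1), ("after swap 2", after_swap_2)]:
    print(f"{name}: expected deaths = {expected_loss(plan)}")
# original: expected deaths = 100.0
# after swap 1: expected deaths = 101.0
# after swap 2: expected deaths = 100.8
```

The end state is never better than the starting plan in any outcome (one extra certain death in the good branch, the same 500 in the bad one), which is the money-pump structure the dialogue describes.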
0lmm10y
Well sure. But my position only makes sense at all because I'm not a consequentialist and don't see killing n people and saving n people as netting out to zero, so I don't see that you can just add the people up like that.
0gjm10y
Perhaps it wasn't clear -- those two were the same village. So I'm not adding up people, I'm not assuming that anything cancels out with anything else. I'm observing that if you have those (inconsistent) preferences, and if you have them by enough of a margin that they can be strengthened a little, then you end up happily making a sequence of changes that take you back to something plainly worse than where you started. Just like getting money-pumped.
0Carinthium10y
Firstly, a deontological position distinguishes between directly killing people and not saving them- killing innocent people is generally an objective moral wrong. Your scenario is deceptive because it seems to lmm that innocents will be killed rather than not saved. More importantly, Elizier's metaethics is based on the premise that people want to be moral. That's the ONLY argument he has for a metaethics that gets around the is-ought distinction. Say for the sake of argument a person has a course of action compatible with deontology vs. one compatible with consequentialism, and those are their only choices. Shouldn't they ignore the stone tablet and choose the deontological one if that's what their moral intuitions say? Elizier can't justify not doing so without contradicting his original premise.
2gjm10y
(Eliezer.) So, I wasn't attempting to answer the question "Are deontologists necessarily subject to 'pumping'?" but the different question "Are people who work entirely off moral intuition necessarily subject to 'pumping'?". lmm's question -- if I didn't completely misunderstand it, which of course I might have -- was about the famous framing effect where describing the exact same situation in two different ways generates different preferences. If you work entirely off intuition, and if your intuitions are like most people's, then you will be subject to this sort of framing effect and you will make the choices I ascribed to lmm in that little bit of dialogue, and the result is that you will make two decisions both of which look to you like improvements, and whose net result is that more people die. On account of your choices. Which really ought to be unacceptable to almost anyone, consequentialist or deontologist or anything else. I wasn't attempting a defence of Eliezer's metaethics. I was answering the more specific question that (I thought) lmm was asking.
0lmm10y
I did mean I was making a deontological distinction between saving and killing, not just a framing question (and I didn't really mean that scenario specifically, it was just the example that came to mind - the general question is the one I'm interested in, it's just that as phrased it's too abstract for me). Sorry for the confusion.
0Carinthium10y
Consequentialist judgements are based on emotion in a certain sense just as much as deontological judgements- both come from an emotive desire to do what is "right" (for a broad definition of "right") which cannot be objectively justified using a universally compelling argument. It is true that they come from different areas of the brain, but to call consequentialist judgements "inherently rational" or similar is a misnomer- from a philosophical perspective both are in the same metaphorical boat, as both are based on premises that are ultimately unjustifiable objectively (see Hume's is/ought distinction). Assuming a deontological system is based on the premise "These are human instincts, so we create a series of rules to reflect them", then there is no delusion and hence no rationalisation. It is instead a logical extension, they would argue, of rules similar to those Elizier made for being moral in the first place. Elizier's metaethics, as I think is best summed up in "The Moral Void", is that people have a desire to do what is Right, which is reason enough for morality. This argument cannot be used as an argument for violating moral instincts without creating a contradiction.
2lmm10y
But Yudkowsky does seem to think that we should violate our moral instincts - we should push the fat man in front of the tram, we should be more willing to pay to save 20,000 birds than to save 200. Our position on whether it's better to save 400 people or take a chance of saving 500 people should be consistent with our position on whether it's better to kill 100 people or take a chance of killing 500 people. We should sell all our possessions except the bare minimum we need to live and give the rest to efficient charity. If morality is simply our desire to do what feels Right, how can it ever justify doing something that feels Wrong?
2wedrifid10y
Yudkowsky does not advocate this. Nor does he practice it. In fact he does the opposite---efficient charity gives him money to have more than the bare minimum needed to live (and this does not seem unwise to me).
1Carinthium10y
This is a very good summary of the point I'm trying to make, though not the argument for making it. Better than mine for what it does.
[-][anonymous]10y50

I have a rant on this subject that I've been meaning to write.

Deontology, Consequentialism, and Virtue ethics are not opposed, just different contexts, and people who argue about them have different assumptions. Basically:

Consequence:Agents :: Deontology:People :: Virtue:Humans

To the extent that you are an agent, you are concerned with the consequences of your actions, because you exist to have an effect on the actual world. A good agent does not make a good person, because a good agent is an unsympathetic sociopath, and not even sentient.

To the extent that... (read more)

2wedrifid10y
Totally agree. In fact, I go as far as to declare that Deontologic value systems and Consequentialist systems can be converted between each other (so long as the system of representing consequentialist values is suitably versatile). This isn't to say such a conversion is always easy, and it does rely on reflecting off an epistemic model, but it can be done. I'm not sure this is true. Why can't we call something that doesn't care about consequences an agent? Assuming, of course, that they are a suitably advanced and coherent person? Like take a human deontologist who stubbornly sticks to the deontological values and ignores consequences, then dismisses as irrelevant that small part of them that feels sad about the consequences. That still seems to deserve being called an agent. I'd actually say a person shouldn't help an injured bird. Usually it is better from both an efficiency standpoint and a humanitarian standpoint to just kill it and prevent short term suffering and negligible long term prospects of successfully recovering to functioning in the wild. But part of my intuitive experience here is that my intuitions for what makes a 'good person' have been corrupted by my consequentialist values to a greater extent than they have for some others. Sometimes my efforts at social influence and behaviour are governed somewhat more than average by my decision-theory intuitions. For example my 'should' advocates lying in some situations where others may say people 'shouldn't' lie (even if they themselves lie hypocritically). I'm curious Nyan. You're someone who has developed an interesting philosophy regarding ethics in earlier posts, and one that I essentially agree with. I am wondering to what extent your instantiation of 'should' makes no sense from a consequentialist POV. Mine mostly makes sense but only once 'ethical inhibitions' and consideration of second order and unexpected consequences are accounted for. Some of it also only makes sense in consequentialist frameworks wh
2kalium10y
As for helping birds, it depends on the type of injury. If it's been mauled by a cat, you're probably right. But if it's concussed after flying into a wall or window---a very common bird injury---and isn't dead yet, apparently it has decent odds of full recovery if you discourage it from moving and keep predators away for an hour or few. (The way to discourage a bird from moving and possibly hurting itself is to keep it in a dark confined space such as a shoebox. My roommate used to transport pigeons this way and they really didn't seem to mind it.) Regarding the rest of the post, I'll have to think about it before coming up with a reply.
0wedrifid10y
Thank you, I wasn't sure about that. My sisters and I used to nurse birds like that back to health where possible, but I had no idea what the prognosis was. I know that if we found any chicks that were alive but displaced from the nest, they were pretty much screwed once we touched them due to contamination with human smell causing rejection. More recently (now that I'm in Melbourne rather than on a farm) the only birds that have hit my window have broken their necks and died. They have been larger birds, so I assume the mass to neck-strength ratio is more of an issue. For some reason most of the birds here in the city manage to not fly into windows anywhere near as often as the farm birds. I wonder if that is mere happenstance or micro-evolution at work. Cities have got tons more windows than farmland does, after all.
3kalium10y
Actually the human-scent claim seems to be a myth. Most birds have a quite poor sense of smell. Blog post quoting a biologist. Snopes.com confirms. However, unless they're very young indeed it's still best to leave them alone:
2wedrifid10y
Oh, we were misled into taking the correct action. Fair enough, I suppose. I had wondered why they were so sensitive and also why the advice was "don't touch" rather than "put on gloves". Consider me enlightened. (Mind you, the just-so story justifying the myth lacks credibility. It seems more likely that the myth exists for the usual reason myths exist and the positive consequences are pure coincidence. Even so, I can take their word for it regarding the observable consequences if not the explanation.)
1Carinthium10y
I can see how to convert a Consequentialist system into a series of Deontological rules with exceptions. However, not all Deontological systems can be converted to Consequentialist systems. Deontological systems usually contain Absolute Moral Wrongs which are not to be done no matter what, even if refusing to do them leads to even more Absolute Moral Wrongs.
0wedrifid10y
In the case of consequentialists that satisfy the VNM axioms (the only interesting kind) they need only one Deontological rule, "Maximise this utility function!". I suggest that they can. With the caveat that the meaning attributed to the behaviours and motivations will be different, even though the behaviour decreed by the ethics is identical. It is also worth repeating with emphasis the disclaimer: The requirement for the epistemic model is particularly critical to the process of constructing the emulation in that direction. It becomes relatively easy (to conceive, not to do) if you use an evaluation system that is compatible with infinitesimals. If infinitesimals are prohibited (I don't see why someone would prohibit that aspect of mathematics) then it becomes somewhat harder to create a perfect emulation. Of course the above applies when assuming those VNM axioms once again. Throw those away and emulating the deontological system reverts to being utterly trivial. The easiest translation from deontological rules to a VNM-free consequentialist system would be a simple enumeration and ranking of possible permutations. The output consequence-ranking system would be inefficient and "NP-enormous", but the proof-of-concept translation algorithm would be simple. Extreme optimisations are almost certainly possible.
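A proof-of-concept of the "enumeration and ranking" translation described above, as a minimal sketch (the toy rule, the lexicographic-tuple trick standing in for infinitesimals, and the scenario are editorial assumptions, not wedrifid's construction):

```python
from dataclasses import dataclass

@dataclass
class WorldHistory:
    agent_kills_innocent: bool   # did *this* agent itself violate the rule?
    innocents_dead: int          # total innocent deaths, however caused

def lexicographic_value(history: WorldHistory) -> tuple:
    """Higher tuples are better; keeping the rule dominates every downstream consequence."""
    return (0 if history.agent_kills_innocent else 1,  # primary tier: rule violated or not
            -history.innocents_dead)                   # "infinitesimal" tie-breaker within a tier

# Toy choice: kill one innocent yourself to prevent two other killings, or refrain.
kill_one = WorldHistory(agent_kills_innocent=True, innocents_dead=1)
refrain = WorldHistory(agent_kills_innocent=False, innocents_dead=2)

best = max([kill_one, refrain], key=lexicographic_value)
print(best)  # refrain: the emulated deontologist never kills, whatever the downstream body count
```

Ranking by tuples plays the role of the infinitesimals: ordinary consequences only ever break ties between histories that keep the rule equally well.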
0Carinthium10y
1- Which is by definition not deontological.
2- A fairly common deontological rule is "Don't murder an innocent, no matter how great the benefit." Take the following scenario: A has the choice to kill 1 innocent to stop B killing 2 innocents, when B's own motive is to prevent the death of 4 innocents. B has no idea about A, for simplicity's sake. Your conversion would have "Killing innocents intentionally" as an evil, and thus A would be obliged to kill the innocent.
3wedrifid10y
No! When we are explicitly talking about emulating one ethical system in another, a successful conversion is not a tautological failure just because it succeeds. This is not a counter-example. It doesn't even seem to be an especially difficult scenario. I'm confused. Ok. So when A is replaced with ConsequentialistA, ConsequentialistA will have a utility function which happens to systematically rank world-histories in which ConsequentialistA executes the decision "intentionally kill innocent" at time T as lower than all world-histories in which ConsequentialistA does not execute that decision (but which are identical up until time T). No, that would be a silly conversion. If A is a deontological agent that adheres to the rule "never kill innocents intentionally" then ConsequentialistA will always rate world histories descending from this decision point in which it kills innocents to be lower than those in which it doesn't. It doesn't kill B. I get the impression that you are assuming ConsequentialistA to be trying to rank world-histories as if the decision of B matters. It doesn't. In fact, the only aspects of the world histories that ConsequentialistA cares about at all are which decision ConsequentialistA makes at one time and with what information it has available. Decisions are something that occur within physics, and so when evaluating world histories according to some utility function a VNM-consequentialist takes that detail into account. In this case it takes into account no other detail, and even among such details those later in time are rated infinitesimal in significance compared to earlier decisions. You have no doubt noticed that the utility function alluded to above seems contrived to the point of utter ridiculousness. This is true. This is also inevitable. From the perspective of a typical consequentialist ethic we should expect a typical deontological value system to be utterly insane to the point of being outright evil. A pure and naive consequential
2Carinthium10y
Alright- conceded.
2Protagoras10y
More moves are possible. There is the agent-relative consequentialism discussed by Doug Portmore; if a consequence counts as overridingly bad for A if it involves A causing an innocent death, and overridingly bad for B if it involves B causing an innocent death (but not overridingly bad for A if B causes an innocent death; only as bad as normal failures to prevent preventable deaths), then A shouldn't kill one innocent to stop B from killing 2, because that would produce a worse outcome for A (though it would be a better outcome for B). I haven't looked closely at any of Portmore's work for a long time, but I recall being pretty convinced by him in the past that similar relativizing moves could produce a consequentialism which exactly duplicates any form of deontological theory. I also recall Portmore used to think that some form of relativized consequentialism was likely to be the correct moral theory; I don't know if he still thinks that.
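A toy rendering of that agent-relative move, as a hedged sketch (the value function, the penalty constant, and the numbers are editorial illustrations, not Portmore's formulation):

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    deaths_caused_by: dict  # agent name -> innocent deaths that agent causes

OVERRIDING_PENALTY = 10**6  # stands in for "overridingly bad"; any sufficiently large weight works

def value_for(agent: str, outcome: Outcome) -> float:
    """Agent-relative value: your own killings are overridingly bad, others' only ordinarily bad."""
    total_deaths = sum(outcome.deaths_caused_by.values())
    own_kills = outcome.deaths_caused_by.get(agent, 0)
    return -total_deaths - OVERRIDING_PENALTY * own_kills

# A kills 1 innocent and thereby stops B, versus A refrains and B kills 2.
a_intervenes = Outcome(deaths_caused_by={"A": 1, "B": 0})
a_refrains = Outcome(deaths_caused_by={"A": 0, "B": 2})

for agent in ("A", "B"):
    better = max([a_intervenes, a_refrains], key=lambda o: value_for(agent, o))
    print(agent, "ranks higher:", "A intervenes" if better is a_intervenes else "A refrains")
# A ranks higher: A refrains    (A causing a death is overridingly bad *for A*)
# B ranks higher: A intervenes  (fewer deaths overall, and B causes none in that outcome)
```

This reproduces the comment's verdict: relative to A the intervention is the worse outcome, even though relative to B it is the better one.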
0wedrifid10y
I've never heard of Doug Portmore but your description of his work suggests that he is competent and may be worth reading. This seems overwhelmingly likely. Especially since the alternatives that seem plausible can be conveniently represented as instances of this. This is certainly a framework in which I evaluate all proposed systems of value. When people propose things that are not relative (such crazy things as 'total utilitarianism') then I intuitively think of that in terms of a relative consequentialist system that happens to arbitrarily assert that certain considerations must be equal.
1shminux10y
You mean, by habits and instincts, right?
0Ben Pace10y
If you were to have a rant, you might have to give more examples, or more thorough ones, because I'm not quite getting your explanation. I'm just posting this because otherwise there would've been an inferential silence.

I dispute the claim that the default human view is deontological. People show a tendency to prefer to apply simple, universal rules to small scale individual interactions. However, they are willing to make exceptions when the consequences are grave (few agree with Kant that it's wrong to lie to try to save a life). Further, they are generally in favor of deciding large scale issues of public policy on the basis of something more like calculation of consequences. That's exactly what a sensible consequentialist will do. Due to biases and limited informa... (read more)

-1Carinthium10y
Presumably you think that in a case like the fat man case, the human somehow mistakenly believes the consequences of pushing the fat man will be worse? In some cases you have a good point, but that's one of the ones where your argument is least plausible.
1Protagoras10y
I don't think that the person mistakenly believes that the consequences will be sufficiently worse, but something more like that the rule of not murdering people is really really important, and the risk that you're making a mistake if you think you've got a good reason to violate it this time is too high. Probably that's a miscalculation, but not exactly the miscalculation you're pointing to. I'm also just generally suspicious of the value of excessively contrived and unrealistic examples.
0Carinthium10y
I'll take two broader examples then- "Broad Trolley cases", cases where people can avert a harm only at the cost of triggering a lesser harm which they do not directly cause, and "Broad Fat Man cases", which are the same except that the lesser harm is directly caused. As a general rule, although humans can be swayed to act in Broad Fat Man cases, they cannot help but feel bad about it- much less so in Broad Trolley cases. Admittedly this is a case in which humans are inconsistent with themselves, if I remember correctly, as they can be made to cause such a harm under pressure; but practically none consider it the moral thing to do and most regret it afterwards- the same as near-mode defections from group interests of a selfish nature.

(You systematically misspell 'Eliezer', in the post and in the comments.)

-2Carinthium10y
Sorry about that. Still, it's less important than the actual content.

You keep systematically misspelling it, even after having read my comment. It's annoying.

-18Carinthium10y
0[anonymous]10y
That's why it was in parentheses. Still, you didn't fix it.

Could it be possible that some people's intuitions are more deontologist or more consequentialist than others? While trying to answer this, I think I noticed an intuition that being good should make good things happen, and shouldn't make bad things happen. Looking back on the way I thought as a teenager, I think I must have been under that assumption then (when I hadn't heard this sort of ethics discussed explicitly). I'm not sure about further back than that, though, so I don't know that I didn't just hear a lot of consequentialist arguments and get used ... (read more)

0Carinthium10y
No disputes on Paragraph 1, but: An intuition that morality should be "universal" in your sense is not as common as you might think. In Victorian times there was a hierarchy of responsibilities depending on closeness, which fits modern intuitions except that race has been removed. Confucius considered it the filial duty of a son to cover up their father's crimes. Finally, there are general tribal in-group instincts. All these suggest that the intuition that morality should be universal (as opposed to logically coherent, which is more common) is the "weaker" intuition that should give. In addition, of course, see Elizier's articles about there being no universally persuasive argument.
0mare-of-night10y
Right, good point. I had Kant on my mind while I was writing the post, and didn't do the mental search I should have to check other sets of ideas.

While it's possible to express consequentialism in a deontological-sounding form, I don't think this would yield a central example of what people mean by deontological ethics — because part of what is meant by that is a contrast with consequentialism.

I take central deontology to entail something of the form, "There exist some moral duties that are independent of the consequences of the actions that they require or forbid." Or, equivalently, "Some things can be morally required even if they do no benefit, and/or some things can be morally for... (read more)

Deontology is not in general incompatible. You could have a deontology that says: God says do exactly what Eliezer Yudkowsky thinks is correct. But most people's deontology does not work that way.

Our instincts being reminiscent of deontology is very much not the same thing as deontology being true.

1Carinthium10y
In your metaethics, what does it mean for an ethical system to be "true", then (put in quotations only because it is a vague term at the moment in need of definition)? Elizier's metaethics has a good case for following a morality considered "true" in that it fits human intuitions- but if you abandon that where does it get you?
0drethelin10y
Deontology being true, in my meaning, is something along the lines of god actually existing and there being a list of things he wants us to do, or a morality that is somehow inherent in the laws of physics that, once we know enough about the universe, everyone should follow. To me a morality that falls out of the balance between human (or sentients in general) preferences is more like utilitarianism.
0wedrifid10y
That isn't a deontology. That is an epistemic state. "God says do X" is in the class "Snow is white" not "You should do X". Of course if you add "You should do exactly what God says" then you have a deontology. Well, you would if not for the additional fact "Eliezer Yudkowsky thinks that God saying so isn't a particularly good reason to do it", making the system arguably inconsistent.

As far as I understand Eliezer's metaethics, I would say that it is compatible with deontology. It even presupposes it a little bit, since the psychological unity of mankind can be seen as a very general set of deontologies.
I would thus agree that deontology is what human instincts are based on.

Under my further elaboration on said metaethics, that is, the view of morality as common computations + local patches, deontology and consequentialism are not really opposing theories. In the evolution of a species, morality would be formed as common computations tha... (read more)

1Carinthium10y
"Optimal" by what value? Since we don't have an objective morality here, a person only has their Wants (whether moral or not) to decide what counts as optimal. This leads to problems. Take a Hypothetical Case A. -In Case A there are several options. One option would be the best from a consequentialist perspective, taking all consequences into accont. However, taking this option would make the option's taker not only feel very guilty (for whatever reason- there are plenty of possibilities) but harm their selfish interests in the long run. This is an extreme case, but it shows the problem at it's worst. Elizier would say that doing the consequentialist thing would be the Right thing to do. However, he cannot have any compelling reason to do it based on his reasons for morality- an innate desire to act that way being the only reason he has for it.
0MrMind10y
Well, I intended it in the minimal sense of "maximizing an optimization problem", if the moral quandary could be seen in that way. I was not asserting that consequentialism is the optimal way to find a solution to a moral problem; I stated that it seems to me that consequentialism is the only way to find an optimal solution to a moral problem that our previous morality cannot cover. But we do have an objective morality (in Eliezer's metaethics): it's morality! As far as I can understand, he states that morality is the common human computation that assigns values to states of the world around us. I believe that he asserts these two things, besides others:
* morality is objective in the sense that it's a common fundamental computation, shared by all humans;
* even if we encounter an alien way to assign value to states of the world (e.g. pebblesorters), we could not call that morality, because we cannot go outside of our moral system; we should call it another way, and it would not be morally understandable. That is: human value computation -> morality; pebblesorters value computation -> primality, which is not: moral, fair, just, etc.
I agree that a direct conflict between a deontological computation and a consequentialist one cannot be solved normatively by metaethics. At least, not by the one exposed here or the one I subscribe to. However, I believe that it doesn't need to be: it's true that morality, if confronted with truly alien value computations like primality or clipping, is rather monolithic; however, if zoomed in, it can be rather confused. I would say that in any situation where there's such a conflict, only the individual computation present in the actor's mind could determine the outcome. If you want, computational metaethics is descriptive and maybe predictive, rather than prescriptive.

I agree; on my reading, the metaethics in the Metaethics sequence are compatible with deontology as well as consequentialism.

You can read Eliezer defending some kind of utilitarianism here. Note that, as is stressed in that post, on Eliezer's view, morality doesn't proceed from intuitions only. Deliberation and reflection are also important.

2Carinthium10y
The problem with what Elizier says there is making it compatible with his reason for being moral. For example: "And once you realize that the brain can't multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever. You don't get the impression you're looking at the revelation of a deep moral truth about nonagglomerative utilities. It's just that the brain doesn't goddamn multiply. Quantities get thrown out the window." However, Elizier's comments on "The Pebblesorters", amongst others, make clear that he defines morality based on what humans feel is moral. How is this compatible? In addition, given that the morality in the Metaethics is fundamentally based on preferences, there are severe problems. Take Hypothetical case A, which is broad enough to cover a lot of plausible scenarios. A- A hypothetical case where there is an option which will be the best from a consequentialist perspective, but which for some reason the person who takes the option would feel overall more guilty for choosing it AND be less happy afterwards than the alternative, both in the short run and the long run. Elizier would say to take the action that is best from a consequentialist perspective. This is indefensible however you look at it- logically, philosophically, etc.
2Nisan10y
Ok, I can see why you read the Pebblesorters parable and concluded that on Eliezer's view, morality comes from human feelings and intuitions alone. The Pebblesorters are not very reflective or deliberative (although there's that one episode where a Pebblesorter makes a persuasive moral argument by demonstrating that a number is composite.) But I think you'll find that it's also compatible with the position that morality comes from human feelings and intuitions, as well as intuitions about how to reconcile conflicting intuitions and intuitions about the role of deliberation in morality. And, since The Moral Void and other posts explicitly say that such metaintuitions are an essential part of the foundation of morality, I think it's safe to say this is what Eliezer meant. I'll set aside your scenario A for now because that seems like the start of a different conversation.
-2Carinthium10y
Elizier doesn't have sufficient justification for including such metaintuitions anyway. Scenario A illustrates this well- assuming reflecting on the issue doesn't change the balance of what a person wants to do anyway, it doesn't make sense and Elizier's consequentialism is the equivalent of the stone tablet.
0Nisan10y
You really ought to learn to spell Eliezer's name. Anyways, it looks like you're no longer asking for clarification of the Metaethics sequence and have switched to critiquing it; I'll let other commenters engage with you on that.

I suspect the real reason why a lot of people around here like consequentialism is that (despite their claims to the contrary) they alieve that ideas should have a Platonic mathematical backing, and the VNM theorem provides just such a backing for consequentialism.

3[anonymous]10y
VNM
0Eugine_Nier10y
Thanks, fixed.
2Vladimir_Nesov10y
(I don't have to like consequentialism to be motivated by considerations it offers, such as relative unimportance of what I happen to like.)
2shminux10y
I don't see any link between Platonism and consequentialism.
0Eugine_Nier10y
Basically, the VNM theorem is sufficiently elegant that it causes people to treat consequentialism as the Platonic form of morality.