Update (27 Oct):

ChristianKi has pointed out a valid flaw / elaboration.

It seems like your argument is that Causal Decision Theory leads to defection on the prisoner's dilemma and you consider Causal Decision Theory an essential feature of being consequentialist.

The following post does not refer to any specific decision theory, just averaged-out results over people who in the real world call themselves consequentialists. And they may not be followers of TDT or EDT. So it's a much weaker result than it sounds like. I'm happy to modify the post if that is best.


I thought this post was worth making, since I see a lot of people on this site who often prefer the consequentialist side of the consequentialism versus deontology debate. I respect a lot of them, which is why I will try to tread carefully.

This post is divided into two parts. The first part is philosophical background around moral non-realism and how "survival" is the closest we have to an objective moral or rational determinant. The second part deals with how consequentialism might harm chances of survival.

Philosophical background

Moral non-realism

I will start by assuming moral non-realism, more specifically non-objectivism. And I mean this in a number of senses. Firstly, that there are no physical constructs that objectively determine right or wrong. There is no god, no elegant physical entity or law that tells me being a serial killer is "wrong" and being who I currently am is "right". Secondly, that there are no logical or mathematical constructs that objectively determine right or wrong. You can't use logical axioms in a vacuum to prove moral assertions. Proving this is outside the scope of this post, but I will assume it.

What you can do is prove statements based on:

 - Survival - "A society full of serial killers is less likely to survive than our current society." This is easy to see as true. Note that this statement doesn't make any remark on whether survival is good. It also doesn't make any remark on whether humans are special in any sense (no assumption of consciousness or souls); it just says that humans are more likely to continue their existence in a universe where they hold certain values. Such a statement could just as well be proved about machines or plants or stars - if you can find some physical traits that increase chances of survival.

 - Neurochemical effect - "Meeting a serial killer will increase the heart rate of my body and produce unique neurochemical signatures in my brain." Now we've moved from the perspective of a society to that of a human. Still nothing metaphysical going on, we're just looking at the physical manifestation of emotions and thoughts. Again, these could also be observed in a machine, if it produced certain patterns or outputs in response to certain inputs.

 - First-person effect - "Meeting a serial killer will enrage me." Now we have moved to the first-person experience and emotions as they are felt. How an algorithm feels from the inside. Consciousness is a loaded term so I will avoid using it, but this is what people mean when they refer to conscious experience. I tend to agree that these first-person effects have no physical place in the universe, and can therefore not be used to prove any additional facts about the physical world that you couldn't prove without knowing about them. So I will caution against any proofs of moral realism that refer primarily to this first-person effect, often without even realising it. I will caution against it to such an extent that I will try framing this entire post without using first-person pronouns. "We use our intelligence to survive." should raise a warning bell in your head. A better statement would be "Human intelligence has increased its odds of survival." The latter statement does not talk about "you" personally at all when it talks about survival.

Thinking non-realism

The same also applies to thought processes and rationality. I will split this into two parts, the first is a weak opinion, the second is stronger.

There may not be an objective "rational thought process" embedded into either physical laws or logical laws. And when I say "rational thought process" I am referring to algorithms not axioms. So an axiom like "1=1" or "1+1=2" may still be objective or more fundamental metaphysically. What does not feel objective to me is the notion of a perfect algorithm. There can be a perfect algorithm for a stated problem, but perhaps not a perfect algorithm independent of anything.

We tend to fuzzily define an objective scale for the intelligence of agents. I find this rather anthropocentric too, catering to the specific problems that humans observe in their environments. A machine inside a chess simulation, for instance, would never need to learn how to light a fire. But as I said, this is a weak opinion. I'm happy to be proven wrong; perhaps Turing machines with fewer states are hard-coded into physical laws and Solomonoff induction gives you an ideal agent.

Which brings me to my second and more important point. Even if there happens to be an ideal rational agent, there is no metaphysical pull that causes humans to become ever closer to such agents. If there were such a pull, it should also apply to every object in the universe. Even if "rationality" happens to be objective, "valuing being rational" is not. Plants don't value being rational; books and chairs don't value it. (The closest thing we have to such a pull is simply survival, discussed below.)

For humans this simply means that being stupid or irrational is not "wrong", just as being a serial killer is not "wrong" in a metaphysical sense. You can't convince someone to value being rational any more than you can convince someone to stop being a serial killer (at least as long as this "convincing process" is restricted to referring to metaphysical truths). "I am rational, you should want to be rational too." is the same kind of statement as "I care about fellow humans, you should care about them too." or "I care about consuming food with higher sugar content, you should want to consume such food too."

You can however still prove that beings with certain internal processes are more likely to survive in some environments. All three of the above can impact survival.

And you can prove that beings with certain internal processes incite certain neurochemical signals in both themselves and other beings. Again, these (thinking) processes are purely mechanical, and can just as well refer to animals or machines.

Drift towards something

So far we have that there is no objectively ideal morality, nor a force that metaphysically attracts us towards it. And that there may not be an objectively ideal rationality, nor a force that metaphysically attracts us towards it. The closest thing we have to such a thing is evolutionary survival. Beings who have specific patterns of emotions (morality) and specific patterns of thoughts (rationality) are more likely to survive.

Neither of these is constant; humans are eternally drifting towards various patterns of emotion and thought. From a first-person perspective, we often feel that we are choosing these patterns. "We are choosing to care or not care about certain things." "We can choose to change these over time." "We are choosing to care about being rational." "We are choosing to think in certain ways, or block certain thoughts." But again I will caution against referring to anything inside of this first-person perspective as a source of truth to prove anything outside of it in an "objective" sense. And choice typically only exists inside of this first-person perspective; outside of it we have physical laws, possibly deterministic ones.

Consequentialism might harm survival

This brings me to the second and more important section of this post. Which beings survive? A lot of discourse on this site implicitly assumes that being rational increases odds of survival. And human history has overwhelmingly shown that human intelligence has increased its own odds of survival, which I tend to agree with.

The problem comes when people also apply this to moral principles and say moral principles must be rational or consistent to maximise odds of survival. First I will talk about why survival odds may be negatively impacted by a moral system such as consequentialism. Then I will talk about why we tend to make this mistake.

Costs of consequentialism

I strongly recommend you read Section 1 of this paper.

The costs of being consequentialist: Social inference from instrumental harm and impartial beneficence

Official source: https://www.sciencedirect.com/science/article/pii/S0022103117308181

DOI if you wanna source it from elsewhere: https://doi.org/10.1016/j.jesp.2018.07.004

The simplest version is that deontological beings can be aligned over anything. Prisoner's dilemma? No problem, just use "I will not defect" as a deontological virtue. Both beings will automatically cooperate. Need to elect a government? "Voting is my fundamental duty" and boom you have a population willing to spend time and energy to vote.

A pair of consequentialist beings, on the other hand, will find it a lot harder to cooperate on a prisoner's dilemma. A consequentialist will ask a question like "Is my time better utilised by something besides voting?" So you now have an alignment problem that needs to be explicitly solved.

[This alignment problem may be solvable if all individuals have 100% identical goals, perfect knowledge of each other's goals, and a high degree of knowledge about the actual environment they are in. But our real environment does not generate agents whose goal content is 100% identical; it allows for mutations. And this ability to mutate has historically helped survival, starting with creating Homo sapiens in the first place.]
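As a rough illustration of this coordination gap (a minimal sketch of my own, not from the paper): in a one-shot prisoner's dilemma with the usual payoffs, two agents following a "never defect" rule land on mutual cooperation, while two naive expected-value maximisers who don't know each other's policies each find defection dominant. The agent names and payoff numbers below are purely illustrative.

```python
PAYOFFS = {  # (my move, their move) -> my payoff; T=5 > R=3 > P=1 > S=0
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def deontologist(_belief_opponent_cooperates: float) -> str:
    return "C"  # rule: "I will not defect", regardless of expected outcome

def naive_consequentialist(belief_opponent_cooperates: float) -> str:
    p = belief_opponent_cooperates
    def ev(my: str) -> float:
        return p * PAYOFFS[(my, "C")] + (1 - p) * PAYOFFS[(my, "D")]
    return "C" if ev("C") > ev("D") else "D"  # defection dominates for any p

def play(agent_a, agent_b, belief: float = 0.5):
    a, b = agent_a(belief), agent_b(belief)
    return (a, b), (PAYOFFS[(a, b)], PAYOFFS[(b, a)])

print(play(deontologist, deontologist))                      # (('C', 'C'), (3, 3))
print(play(naive_consequentialist, naive_consequentialist))  # (('D', 'D'), (1, 1))
```

This only formalises the naive, CDT-style case; as the update at the top notes, TDT- or FDT-style agents would reason differently.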

As an individual defecting from a societal norm, consequentialism may increase the chances of the individual surviving. But if consequentialism becomes a societal norm, it may reduce the chances of the collective surviving. If this is true, then societies composed of consequentialists will die faster than societies composed of deontologists. And evolution as a force typically acts on collectives, not individuals.

Section 1 of the paper has a lot of good examples of how consequentialists are more likely to break laws, violate others' rights and so on - when the consequences justify it. It also talks about and references a number of older papers on how such agents find it harder to form both social and business relationships, and to generate the trust needed to solve coordination problems.

Why do we (wrongly) assume moral principles must be rational or consistent?

Deontological preferences are often not consistent. And when I say that I mean they may admit circular preferences.

The first mistake that highly rational people tend to make is assuming that rationality is valuable "just because". Which is true, but then morals are also valuable "just because" - one does not supersede the other. If anything does supersede at all, it is due to survival. And that would probably be the emotions. Animals cared about not harming fellow individuals long before they cared about being rational.

The second mistake happens because optimisation of this nature happens outside of our field of view. When we rationally think about something, and take actions that increase our chances of survival, we experience this in first person. We feel we have "choice", or at least an illusion of it. When a society or species optimises for deontological individuals, it happens at a level outside of this individual first-person perspective, because it is optimising for the collective. And our first-person perspective is much more strongly rooted in our own individual than in the collective. We don't feel a "choice" or even an illusion of "choice" when it comes to shaping moral norms across generations of humans, or norms of emotions across millennia of evolution. Because we as individuals are not choosing; if anything is choosing at all, it is the collective as a group intelligence that is choosing.

Will all this always hold true?

Not necessarily. Humans might very well be reaching the point where evolution is no longer the dominating factor for our survival. There aren't thousands of human-like species around for the inferior ones to die out. We know of only one human-like collective in existence. And this collective is already far superior to every other entity that we can observe as being driven by evolution. (Technically stars and galaxies are driven by survival too, but we don't see anywhere near as much mutation or self-improvement.)

We may also reach the point where we can in fact install identical goal content into multiple beings, possibly as digital minds. This will solve coordination problems without deontological drives. Or we may simply fail to solve coordination problems and die out. Or maybe the random mutations that cause humans to be capable of suicide will win out, in some environment.

But I do think it's important to realise why we have the "don't murder your rich neighbour to feed poor kids" instinct. And perhaps we should think more carefully before hammering any instinct out with another instinct.

So if you ask me - is this post descriptive or prescriptive? It's mostly descriptive. I'm not saying you should become deontologists because of this post, or that I want you to. If I want you to do anything at all, it's just to think about this and hopefully iterate towards a consensus with me on a way of looking at the world, outside of any specific moral framework.


28 comments

Consequentialism might harm survival

In general, the correctness of [a principle] is one matter; the correctness of accepting it, quite another. I think you conflate the claims "consequentialism is true" and "naive consequentialist decision procedures are optimal". Even if we have decisive epistemic reason to accept consequentialism (of some sort), we may have decisive moral or prudential reason to use non-consequentialist decision procedures. So I would at least narrow your claims to consequentialist decision procedures.

evolution as a force typically acts on collectives, not individuals.

I'm not sure what you're asserting here or how it's relevant. Can you be more specific?

"Accept" implies choice. I'm making observations about chances of survival without assuming a notion of choice or free will. The outside perspective, where we are purely governed by physical laws.

I have addressed both of the claims though. From the outside perspective, I state:
 - "consequentialism is true in an objective metaphysical sense" is almost certainly false (moral non-realism)
 - "naive consequentialist decision procedures have the highest odds of survival" may be untrue, and some evidence is in the post

--

For the evolutionary point, I'll give an example. Consider a trait (X) that makes individuals willing to mass-murder others if their own survival is threatened. These could be humans or plants or whatever. Consider any society - be it all X, all not-X, or some X and some not-X. If you (with god-like ability) wanted to insert one individual into this society with higher odds of survival, you would insert an individual with X rather than not-X. X increases individual chances of survival and individual lifespan, because you're willing to kill others to save yourself.

Now consider the three societies themselves. The all-not-X society has the highest chance of surviving over a large number of generations. X reduces collective chances of survival, because you never lose 10 people to one murderer, other coordination problems can be solved, and so on.

In the real world, over sufficiently many generations, it's collective survival that is optimised for by nature. So you'll find more species full of not-X.
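Here is a toy simulation of that intuition (my own sketch, with made-up numbers, not something from the post or the paper): within any group, a threatened X-carrier outlives a threatened not-X individual, but all-X groups shrink much faster, so far fewer of them last many generations.

```python
import random

def run_group(all_x: bool, size: int = 100, generations: int = 50,
              threat_rate: float = 0.1) -> bool:
    """Return True if the group still exists after `generations`."""
    for _ in range(generations):
        threatened = sum(random.random() < threat_rate for _ in range(size))
        if all_x:
            # Each threatened X-carrier survives, but kills up to 10 others.
            size -= min(size - threatened, threatened * 10)
        else:
            # Each threatened not-X individual simply dies.
            size -= threatened
        size += int(size * 0.10)  # modest reproduction each generation
        if size <= 1:
            return False
    return True

trials = 200
for label, all_x in [("all-X", True), ("all-not-X", False)]:
    survived = sum(run_group(all_x) for _ in range(trials))
    print(f"{label}: {survived}/{trials} groups survive 50 generations")
```

The `if all_x:` branch is the individual-level advantage (the threatened individual lives); the printed survival counts are the group-level cost.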

Edited again :p

I think I'd phrase the key insight I see in "consequentialism might harm survival" differently: consequentialism is computationally expensive, and sometimes you don't have the resources to produce the desired outcome because you don't have the time, energy, or ability to work out all the details. Thus, short-circuited consequentialism can produce worse results than other moral philosophies.

That being said, fully executed consequentialism can deal with circumstances other approaches might have a harder time with. For example, deontology works well if the rules match the environment you're operating in. Drop into a new environment and the rules might no longer be well adapted to produce good outcomes. Similarly for virtue ethics: what's virtuous and produces good outcomes might be different in different contexts, and so it may struggle to adapt more than consequentialism does.

In all cases it seems to be a matter of when the moral calculations were performed. In consequentialism they happen just in time, and so we may fail to do enough of them to generate good results. In others, we do them ahead of time, which means we may have computed the right answer for the wrong situation and not have a good way to generate something better quickly because the mechanism of determining rules or virtues happens over decades or centuries of cultural evolution.

Computational expense is a valid point. System 1 has stronger emotions attached so it is more potent at driving behaviour, but it also has less time to calculate. System 2 is slower. Genetic distance is also a point that is just coming to me - some systems may be easier to mutate to from their evolutionary ancestors. Coordination (as described in the post) would be the third point. (Which I also still think is important btw.)

You're right that a specific set of deontological rules may not adapt well to all environments. But tbh a lot of rules are socially overlaid on top of more primitive emotions. What we inherit from birth is "strong feelings of affection towards people who have raised you"; what someone may acquire after birth is "it is my moral duty to sign up for the army". This makes us more adaptable, because emotions can take hundreds of generations to change (genetically), while social environments can change much more rapidly. But if the underlying emotions themselves are bad for a given environment, then you're right.

As an individual defecting from a societal norm, consequentialism may increase the chances of the individual surviving. But if consequentialism becomes a societal norm, it may reduce the chances of the collective surviving.

Wait, what? You're saying that all the individuals survived, but the collective didn't? That seems to be saying that a particular organizational configuration ceased to exist, but not that everybody died. The phrasing here is ambiguous.

If this is true, then societies composed of consequentialists will die faster than societies composed of deontologists. And evolution as a force typically acts on collectives, not individuals.

This just seems confused. Evolution acts on individuals, unless you're talking about the force of evolution operating (again) on organizational configuration rather than genetics. But societies in such cases often "evolve" by changing rules and structures, not always by collapsing and being replaced.

Section 1 of the paper has a lot of good examples of how consequentialists are more likely to break laws, violate others' rights and so on - when the consequences justify it. It also talks about and references a number of older papers on how such agents find it harder to form both social and business relationships, and to generate the trust needed to solve coordination problems.

This sounds like naive consequentialism, not LessWrong-style consequentialism. A proper consequentialist decision theory takes into account long-term effects of making certain types of choices, not just the short-term effects of individual choices.

(That is, a proper consequentialist foresees that being the kind of person who breaks agreements for short-term benefits has long-term negative consequences, and so they don't do that.)

re: first quote, individual survival is measured over the individual's lifespan; collective survival is measured across multiple generations. Perhaps the example in the second half of this comment will explain it better. (My reply to Zach Stein-Perlman, I'm not sure if the hyperlink worked properly)

re: second quote, I mean that evolution selects for those traits that ensure collective survival. A trait where "one person is willing to kill 10 others to ensure their own survival" will be less selected for compared to one where "one person is willing to die to save someone else".

re: third quote, the papers being referred to don't distinguish between consequentialists who ignore long-term effects and consequentialists who take them into account. Some of the studies don't even involve real situations, they're purely hypothetical. Suppose your friend asks you, purely as a hypothetical, whether you would murder someone to save two others. Simply answering this question by indicating you're willing to murder reduces trust with your friend. Now maybe LessWrong-style consequentialism requires you to lie to your friend; that hasn't been studied. (I'm a bit skeptical because lying to people has second-order effects in your mind too, but that's another discussion)

So yeah, it's an averaged-out result across consequentialists, not a statement of the form "every consequentialist has lower survival odds". And it's definitely not prescriptive; just because survival odds may be lower doesn't mean you "should" be less consequentialist, or that you even get a choice in the matter.

re: second quote, I mean that evolution selects for those traits that ensure collective survival

It really, really doesn't. It selects for the proliferation of genes that proliferate, which is very, very different.

A trait where "one person is willing to kill 10 others to ensure their own survival" will be less selected for compared to one where "one person is willing to die to save someone else".

No, it selects for "one person is willing to die to save someone who is a sufficiently close relation, especially of the next generation". If there were no correlation between the trait and relatedness, the trait would be extinguished.

(And the being willing to kill 10 others isn't deselected for either, so long as the others are strangers or rivals for resources, mates, etc.)

Selection works on relative frequency of genes, not on groups or individuals. To the extent that we have any sort of group feeling or behaviors at all, this is due to commonality of genes. A gene won't be universal in a population unless it provides its carriers with some sort of advantage over non-carriers. If there's no individual advantage (or at least gene-specific advantage), it won't become universal.

Suppose your friend asks you, purely as a hypothetical, whether you would murder someone to save two others. Simply answering this question by indicating you're willing to murder reduces trust with your friend.

This sounds less like "consequentialism reduces trust" than "willingness to murder reduces trust" or perhaps "utilitarianism reduces trust".

Now maybe LessWrong-style consequentialism requires you to lie to your friend, that hasn't been studied.

I would expect a LW-style consequentialist to reject such a simple framework as "kill one person to save two" without first requiring an awful lot of Least Convenient World stipulations to rule out alternatives, and/or to prefer to let two people die in the short run rather than establish certain horrible precedents or perverse incentives in the long run, reject the whole thing as a false dichotomy, etc. etc.

Really, I find it hard to imagine a rational consequentialist simply taking the scenario at face value and agreeing to straight-up murder even in a fairly hypothetical discussion.

It really, really doesn't. It selects for the proliferation of genes that proliferate, which is very, very different.

This is a good point, but I'm not sure I understand the implications. More specifically:

it selects for "one person is willing to die to save someone who is a sufficiently close relation, especially of the next generation"

Doesn't "sufficiently close relation" also apply with some strength to any being of the same species? Consider a species A is splitting into two subspecies A1 and A2. This could be due to members of A1 preferring to save other members of A1. Once A2 dies, A1 retains the trait of wanting to save other members of A1.

I would expect a LW-style consequentialist to reject such a simple framework as "kill one person to save two" without first requiring an awful lot of Least Convenient World stipulations to rule out alternatives, and/or to prefer to let two people die in the short run rather than establish certain horrible precedents or perverse incentives in the long run, reject the whole thing as a false dichotomy, etc. etc.

I would be interested in knowing the Least Convenient World stipulations, and what this phrase means. Precedents and perverse incentives can be ruled out by assuming none exist, right? Assume in the hypothetical that nobody will ever get to know what choice you made after you made it.

I didn't get how a hypothetical with two clear choices could be a false dichotomy. Assume that refusing to choose results in something far worse than either choice.

I find it hard to imagine a rational consequentialist simply taking the scenario at face value and agreeing to straight-up murder even in a fairly hypothetical discussion.

I agree, but in my mind that seems a lot like: their feelings and values are wired deontologically, their rational brain (incorrectly) thinks they are consequentialists, and they're finding justifications for their thoughts. Unless of course they find a really good justification. (And even if they did find one, I'd be suspicious of whether the justification came after the feeling or action... or before.)

Doesn't "sufficiently close relation" also apply with some strength to any being of the same species? Consider a species A is splitting into two subspecies A1 and A2. This could be due to members of A1 preferring to save other members of A1. Once A2 dies, A1 retains the trait of wanting to save other members of A1.

Only after the gene is already essentially universal in the general population. When a gene with altruistic inclinations first appears, it will only increase its propagation by favoring others with the same gene. Otherwise, self-sacrifice will more likely extinguish the gene than spread it.

I would be interested in knowing the Least Convenient World stipulations, and what this phrase means.

See The Least Convenient Possible World for where the term was introduced.

Precedents and perverse incentives can be ruled out by assuming none exist, right? Assume in the hypothetical that nobody will ever get to know what choice you made after you made it.

But answering the question means that somebody will know: whoever is asking the question and anyone present to hear the answer. And since it's a hypothetical, the most relevant incentives and consequences are those for the social situation.

I didn't get how a hypothetical with two clear choices could be a false dichotomy. Assume that refusing to choose results in something far worse than either choice.

Far worse for whom? In what way? Consequentialism isn't utilitarianism. If you're taking a utilitarian position of greatest good for greatest number, then the choice is obvious. But consequentialism isn't utilitarianism: you can choose what's best for you, personally, and what's best for me depends heavily on the details.

I agree, but in my mind that seems a lot like: their feelings and values are wired deontologically, their rational brain (incorrectly) thinks they are consequentialists, and they're finding justifications for their thoughts. Unless of course they find a really good justification. (And even if they did find one, I'd be suspicious of whether the justification came after the feeling or action... or before.)

But that's you projecting your own experience onto somebody else, aka the Typical Mind Fallacy.

My experience of being asked a utilitarian hypothetical is, "what am I going to get out of answering this stupid hypothetical?" And mostly the answer is, "nothing good". So I'm going to attack the premise right away. It's got zero to do with killing or not killing: my answer to the generalized question of "is it ever a good thing to kill somebody to save somebody else" is sure, of course, and that can be true even at 1:1 trade of lives.

Hell, it can be a good thing to kill somebody even if it's not saving any lives. The more important ethical question in my mind is consent, because it's a hell of a lot harder to construct a justification to kill somebody without their consent, and my priors suggest that any situation that seems to be generating such a justification is more likely to be an illusion or false dichotomy, that needs more time spent on figuring out what's actually going on.

And even then, that's not the same as saying that I would personally ever consent to killing someone, whatever the justification. But that's not because I have a deontological rule saying "never do that", but because I'm reasonably certain that no real good can ever come of that, without some personal benefit, like saving my own life or that of my spouse. For example, if the two people I'm saving are myself and my wife and the person being killed is somebody attacking us, then I'm much less likely to have an issue with using lethal force.

Based on a glance at the paper you referenced, though, I'm going to say that the authors incorrectly conflated consequentialism and utilitarianism. You can be a consequentialist without being a utilitarian, and even there I'm not 100% sure you can't have a consistent utilitarian position based on utility as seen by you, as opposed to an impartial interpretation of utility.

At the very least, what the paper is specifically saying is that people don't like impartial beneficence. That is, we want to be friends with people who will treat their friends better than everybody else. This is natural and also pretty darn obvious... and has zero to do with consequentialism as discussed on LW, where consequentialism refers to an individual agent's utility function, and it's perfectly valid for an individual's utility function to privilege friends and family.

Only after the gene is already essentially universal in the general population. 

Makes sense

See The Least Convenient Possible World for where the term was introduced.

This is a great article, just read it. Thanks.

And since it's a hypothetical, the most relevant incentives and consequences are those for the social situation.

True. 

my priors suggest that any situation that seems to be generating such a justification is more likely to be an illusion or false dichotomy, that needs more time spent on figuring out what's actually going on.

What if I apply LCPW and tell you that there's no illusion going on, no false dichotomy?

But that's not because I have a deontological rule saying "never do that", but because I'm reasonably certain that no real good can ever come of that, without some personal benefit, like saving my own life or that of my spouse.

I didn't understand this at all. Can't you assume LCPW as hypothetical?

You're completely right about the paper.

Can't you assume LCPW as hypothetical?

The question isn't can't I, but why should I? The LCPW is a tool for strengthening an argument against something, it's not something that requires a person to accept or answer arbitrary hypotheticals.

As noted at the end of the article, the recommendation is to separate rejecting the entire argument vs accepting the argument contingent on an inconvenient fact. In this particular case, I categorically reject the argument that trolley problems should be answered in a utilitarian way, because I am not a utilitarian.

Got it. So you're not utilitarian and you're not against murder due to a deontological rule. How would you describe your own ethics then? Do they match any existing school of thought? (It's a bit off-topic so feel free to end convo here if you like)

The original post was about utilitarianism and consequentialism though, so LCPW should be relevant.

Some of the studies don't even involve real situations, they're purely hypothetical. 

Studies that are not about real situations are, by their nature, not good for thinking about the real-world impact of ideas. It's hard enough to get studies that use real situations to replicate in a meaningful way in psychology. There's no intellectual basis for thinking that you can reliably extrapolate from studies about hypothetical situations like that to real-world behavior.

A philosopher is the kind of person who can switch from a very skeptical position, like being unsure whether chairs really exist, to believing that he can extrapolate hypothetical data to make predictions about complex real-world interactions, with remarkable speed.

First para makes sense. Helpful feedback, thanks!

I didn't understand what you're trying to say in the second para.

The simplest version is that deontological beings can be aligned over anything. Prisoner’s dilemma? No problem, just use “I will not defect” as a deontological virtue. Both beings will automatically cooperate.

But why that rule, not another? It's a moral rule because it leads to desirable consequences. So deontology isn't sharply distinct from consequentialism. But it can still have advantages over altruistic consequentialism because it allows agents to cooperate even if they are out of contact.

A lot of discourse on this site implicitly assumes that being rational increases odds of survival.

Individual or group survival? If you refuse to fight in a war to defend your community, that's good for your survival, but bad for your community's survival. Individual and group values are different, which is why morality is different from rationality.

And altruism versus selfishness is the real crux. You tip the scales against consequentialism by treating it as selfish consequentialism. Altruistic consequentialism is very different to selfish consequentialism, but not very different to deontology.

Agreed with first two paras. 

But why that rule, not another?

I'd also say it's due to genetic distance - which moral values are easier to mutate to.

re: third para, the claims from the papers go deeper than just that selfishness reduces odds of survival. In a world full of deontologists, an individual who tells a friend or business partner, "I will murder one innocent person to save five", is going to have more difficulty forming the same level of trust with them, as compared to an individual who is also deontological.

P.S. Maybe a "real" consequentialist will also lie to their friend about what they will do; that will lengthen the discussion though.

the claims from the papers go deeper than just that selfishness reduces odds of survival

Odds of whose survival?

Two claims I think:

  1. Odds of an individual consequentialist in a world of deontologists (the current world) are less than the odds of an individual deontologist in the same world. Odds measured in terms of the lifespan of the person.
  2. Odds of a society full of consequentialists are less than the odds of a society full of deontologists. Odds measured in terms of the number of generations the society survives.

It seems like your argument is that Causal Decision Theory leads to defection on the prisoner's dilemma and you consider Causal Decision Theory an essential feature of being consequentialist.

The sequences advocate Timeless Decision Theory, and later Functional Decision Theory was proposed to solve those problems. If you want to convince people on LessWrong that consequentialism is flawed, you likely need to make arguments that don't just work against Causal Decision Theory but also against Timeless Decision Theory and Functional Decision Theory.

P.P.S. I have edited the post and directly quoted your response; please let me know if that is not okay.

Thanks for your response. That's exactly what I'm realising too, thanks for framing it!

TDT feels to me (weakly) like it exploits a perfect clone plus perfect knowledge of the fact that you're facing a perfect clone. Is there any post on how TDT fares in situations where you're 99% the same and have 99% assured knowledge that you're 99% the same? (Closer to real-world situations.)
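One naive way to make that question quantitative (my own framing with assumed payoffs, not an established TDT result): treat "99% the same" as a probability q that my counterpart's choice mirrors mine, and compare expected payoffs.

```python
# Assumed one-shot prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
# q = probability my counterpart makes the same choice I do.
def expected_payoff(choice: str, q: float) -> float:
    if choice == "C":
        return q * 3 + (1 - q) * 0  # they mirror me (R) or defect on me (S)
    return q * 1 + (1 - q) * 5      # they mirror me (P) or I exploit them (T)

for q in (0.5, 0.72, 0.99):
    print(q, expected_payoff("C", q), expected_payoff("D", q))
# Cooperation wins whenever 3q > 5 - 4q, i.e. q > 5/7 ≈ 0.71, so under these
# made-up numbers 99% correlation is more than enough.
```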

P.S. I'd prefer not to use the framing "is flawed" if possible, since it implies a notion of either choice/free will, or one of metaphysical truths (not sure which one you mean), rather than just the outside perspective on who survives and who doesn't.

I feel you are taking some concepts that you think aren't very well defined, and throwing them away, replacing them with nothing. 

I admit that the intuitive notions of "morality" are not fully rigorous, but they are still far from total gibberish. Some smart philosopher may come along and find a good formal definition.

"Survival" is the closest we have to an objective moral or rational determinant. 

Whether or not a human survives is an objective question. The amount of hair they have is similarly objective. So is the amount of laughing they have done, or the amount of mathematical facts they know. 

All of these have ambiguity of definition: has a braindead body with a beating heart "survived"? This is a question of how you define "survive". And once you define that, it's objective.

There is nothing special about survival, except to the extent that some part of ourselves already cares about it.

 And evolution as a force typically acts on collectives, not individuals.

Evolution doesn't affect any individual in particular. There is no individual moth who evolved to be dark. It acts on the population of moths as a whole. But evolution selects for the individuals that put themselves ahead. Often this means individuals that cheat to benefit themselves at the expense of the species. (Cooperative behaviour is favoured when creatures have long memories and a good reputation is a big survival advantage. Stab your hunting partner in the back to double your share once, and no one will ever hunt with you again.)

Some smart philosopher may come along and find a good formal definition.

I mean yes, nothing is ever set in stone; we might come face-to-face with god tomorrow and that'll change everything we currently believe. But with the available information I still think it's reasonable to say that, with very high probability, god as we nebulously define it does not exist. The same goes for objective morality.

I personally don't think formalisation is that important when it comes to knowing the facts here. What would convince me to change my mind would be empirical evidence of moral drivers outside of the brains of individual humans. Or perhaps some violation of physical laws - which could be evidence that a singular god is driving the world. But I certainly will not stop anyone from trying to formalise, or from using that to increase / decrease their conviction.

But evolution selects for the individuals that put themselves ahead. Often this means individuals that cheat to benefit themselves at the expense of the species.

I'd be very interested in examples of this. Cooperative behaviour is favoured from mammals onwards, for sure. Even bacteria choose to transmit and share information via plasmids, and they don't have reputations or long memories. So I'd be a little surprised if this is common.

I've read a statement that goes something like "obviously utilitarianism is the correct moral rule, but deontology does the greatest good for the greatest number". I may be misremembering the exact statement, but this post reminds me quite a bit of that.

Eliezer said something similar:

The rules say we must use consequentialism, but good people are deontologists, and virtue ethics is what actually works.

Deontology with correct rules is indistinguishable from consequentialism with aligned goals and perfect information. The same actions will be chosen via each method.

What exactly does 'correct' mean here?

Presumably, optimisation of consequences. But the catch is that the rules would need to be infinitely complex to match a consequentialist calculation of unlimited complexity.