There have been some posts about animals lately, for instance here and here. While normative assumptions about the treatment of nonhumans played an important role in the articles and were debated at length in the comment sections, I was missing a concise summary of these arguments. This post from over a year ago comes closest to what I have in mind, but I want to focus on some of the issues in more detail.

A while back, I read the following comment in a LessWrong discussion on uploads:

I do not at all understand this PETA-like obsession with ethical treatment of bits.

Aside from (carbon-based) humans, which other beings deserve moral consideration? Nonhuman animals? Intelligent aliens? Uploads? Nothing else?

This article is intended to shed light on these questions; it is, however, not my intent here to advocate a specific ethical framework. Instead, I'll try to show that some ethical principles held by a lot of people are inconsistent with some of their other attitudes -- an argument that doesn't rely on ethics being universal or objective. 

More precisely, I will develop the arguments behind anti-speciesism (and the rejection of analogous forms of discrimination, such as discrimination against uploads) to point out common inconsistencies in some people's values. This will also provide an illustrative example of how coherentist ethical reasoning can be applied to shared intuitions. If there are no shared intuitions, ethical discourse will likely be unfruitful, so it is likely that not everyone will draw the same conclusions from the arguments here. 


What Is Speciesism?

Speciesism, a term popularized (but not coined) by the philosopher Peter Singer, is meant to be analogous to sexism or racism. It refers to a discriminatory attitude against a being in which less ethical consideration (i.e. caring less about the being's welfare or interests) is given solely because of the "wrong" species membership. The "solely" here is crucial, and it's misunderstood often enough to warrant the redundant emphasis.

For instance, it is not speciesist to deny pigs the right to vote, just like it is not sexist to deny men the right to have an abortion performed on their body. Treating beings of different species differently is not speciesist if there are relevant criteria for doing so. 

Singer summarized his case against speciesism in this essay. The argument that does most of the work is often referred to as the argument from marginal cases. A perhaps less anthropocentric, more fitting name would be argument from species overlap, as some philosophers (e.g. Oscar Horta) have pointed out. 

The argument boils down to the question of choosing relevant criteria for moral concern. What properties do human beings possess that make us think it is wrong to torture them? Or to kill them? (Note that these are two different questions.) The argument from species overlap points out that all the typical or plausible suggestions for relevant criteria apply just as much to dogs, pigs or chickens as they do to human infants or late-stage Alzheimer patients. Therefore, giving less ethical consideration to the former would be based merely on species membership, which is just as arbitrary as choosing race or sex as the relevant criterion (further justification for that claim follows below).

Here are some examples of commonly suggested criteria. Readers who want to may pause at this point and think about the criteria they would consult to decide whether it is wrong to inflict suffering on a being (and, separately, those relevant to the wrongness of killing).


The suggestions are:

A: Capacity for moral reasoning

B: Being able to reciprocate

C: (Human-like) intelligence

D: Self-awareness

E: Future-related preferences; future plans

E': Preferences / interests (in general)

F: Sentience (capacity for suffering and happiness)

G: Life / biological complexity

H: What I care about / feel sympathy or loyalty towards


The argument from species overlap points out that not all humans are equal. The sentiment behind "all humans are equal" is not that they are literally equal, but that equal interests/capacities deserve equal consideration. None of the above criteria except (in some empirical cases) H implies that human infants or late-stage demented people should be given more ethical consideration than cows, pigs or chickens.

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real life.

However, such factors can't apply to ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it. If we consider this implication to be unacceptable, then the same must apply for the situations nonhuman animals find themselves in on farms.

Side note: The question whether killing a given being is wrong, and if so, "why" and "how wrong exactly", is complex and outside the scope of this article. Instead of on killing, the focus will be on suffering, and by suffering I mean something like wanting to get out of one's current conscious state, or wanting to change some aspect about it. The empirical issue of which beings are capable of suffering is a different matter that I will (only briefly) discuss below. So in this context, giving a being moral consideration means that we don't want it to suffer, leaving open the question whether killing it painlessly is bad/neutral/good or prohibited/permissible/obligatory. 

The main conclusion so far is that if we care about all the suffering of members of the human species, and if we reject question-begging reasoning that could also be used to justify racism or other forms of discrimination, then we must also care fully about suffering happening in nonhuman animals. This would imply that x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads. (Though admittedly the latter wouldn't be anti-speciesist but rather anti-"substratist", or anti-"fleshist".)

The claim is that there is no way to block this conclusion without:

1. using reasoning that could analogically be used to justify racism or sexism
2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

I've tried and have asked others to try -- without success. 


Caring about suffering

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past. 

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above), to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on, will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point after all to care about all humans (or at least all humans outside their mother's womb). However, I don't see why absurd conclusions that will likely remain hypothetical would be significantly less bad than other absurd conclusions. Their mere possibility undermines the whole foundation one's decisional algorithm is grounded in. (Compare hypothetical problems for specific decision theories.) 

Furthermore, while D and E seem plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer's view on the matter), none of the criteria from A to E seem relevant to suffering, to whether a being can be harmed or benefitted. The case for these being bottom-up morally relevant criteria for the relevance of suffering (or happiness) is very weak, to say the least. 

Maybe that's the speciesist's central confusion: that the rationality/sapience of a being is somehow relevant to whether its suffering matters morally. Clearly, for us ourselves, this does not seem to be the case. If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as though I would be less afraid of the torture, or care less about averting it! 

Those who do consider biting the bullet should ask themselves whether they would have defended that view in all contexts, or whether they might be driven towards such a conclusion by a self-serving bias. There seems to be a strange and sudden increase in the frequency of people who are willing to claim that there is nothing intrinsically wrong with torturing babies when the subject is animal rights, or more specifically, the steak they intend to have for dinner.

It is an entirely different matter if people genuinely think that animals or human infants or late-stage demented people are not sentient. To be clear about what is meant by sentience: 

A sentient being is one for whom "it feels like something to be that being". 

I find it highly implausible that only self-aware or "sapient" beings are sentient, but if true, this would constitute a compelling reason against caring about at least most nonhuman animals, for the same reason that it would be pointless to care about pebbles for the pebbles' sake. If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist.

What irritates me, however, is that anyone advocating such a view should, it seems to me, still have to factor in a significant probability of being wrong, given that both philosophy of mind and the neuroscience that goes with it are hard and, as far as I'm aware, not quite settled yet. The issue matters because of the huge numbers of nonhuman animals at stake and because of the terrible conditions these beings live in. 

I rarely see this uncertainty acknowledged. If we imagine the torture-scenario outlined above, how confident would we really be that the torture "won't matter" if our own advanced cognitive capacities are temporarily suspended? 


Why species membership really is an absurd criterion

At the beginning of the article, I wrote that I'd get back to this for those not yet convinced. Some readers may still feel that there is something special about being a member of the human species. Some may be tempted to think of the concept of "species" as if it were a fundamental concept, a Platonic form. 

The following likely isn't news to most of the LW audience, but it is worth spelling out anyway: There exists a continuum of "species" in thing-space as well as on the actual evolutionary timescale. Species boundaries seem obvious only because the intermediates kept evolving or went extinct. And even if that were not the case, we could imagine it. The theoretical possibility is enough to make the philosophical case, even though, psychologically, actualities are more convincing.

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd! There are several different definitions of "species" used in biology. A common criterion -- for sexually reproducing organisms anyway -- is whether groups of beings (of different sex) can have fertile offspring together. If so, they belong to the same species. 

That is a rather odd way of determining whether one cares about the suffering of some hominid creature in the line-up of ancestors -- why should it, for instance, be relevant to whether some instance of suffering matters to us? 

Moreover, is that really the terminal value of people who claim they only care about humans, or could it be that they would, upon reflection, revoke such statements?

And what about transhumanism? I remember that a couple of years ago, I thought I had found a decisive argument against human enhancement. I thought it would likely lead to speciation, and somehow the thought of that directly implied that posthumans would treat the remaining humans badly, and so the whole thing became immoral in my mind. Obviously this is absurd; there is nothing wrong with speciation per se, and if posthumans are anti-speciesist, then the remaining humans will have nothing to fear! But given the speciesism in today's society, it is all too understandable that people would be concerned about this. If we imagine the huge extent to which a posthuman, not to mention a strong AI, would be superior to current humans, isn't that a bit like comparing chickens to us?

A last possible objection I can think of: suppose one held that group averages are what matter, and that all members of the human species deserve equal protection because the species' group average on some relevant criterion is high enough -- a criterion that would, without the group-average rule, deny moral consideration to some sentient humans. 

This defense doesn't work either. Aside from seeming suspiciously arbitrary, such a view would imply absurd conclusions. A thought experiment for illustration: a pig with a macro-mutation is born; she develops child-like intelligence and the ability to speak. Do we refuse to let her live unharmed -- or even go to school -- simply because she belongs to a group (defined presumably by snout shape, or DNA, or whatever the criteria for "pigness" are) whose average is too low?

Or imagine you are the head of an architecture bureau looking to hire a promising new architect. Is tossing out an application written by a brilliant woman going to increase the expected success of your firm, assuming that women are, on average, less skilled at spatial imagination than men? Surely not!

Moreover, taking group averages as our ethical criterion requires us to first define the relevant groups. Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others? 



Our speciesism is an anthropocentric bias without any reasonable foundation. It would be completely arbitrary to give special consideration to a being simply because of its species membership. Doing so would lead to a number of implications that most people clearly reject. A strong case can be made that suffering is bad in virtue of being suffering, regardless of where it happens. If the suffering or deaths of nonhuman animals deserve no ethical consideration, then human beings with the same relevant properties (of which all plausible ones seem to come down to having similar levels of awareness) deserve no intrinsic ethical consideration either, barring speciesism. 

Assuming that we would feel uncomfortable giving justifications or criteria for our scope of ethical concern that can analogously be used to defend racism or sexism, those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering. 

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments. 

Edit: As Carl Shulman has pointed out, discounting may also apply for "intensity of sentience", because it seems at least plausible that shrimps (for instance), if they are sentient, can experience less suffering than e.g. a whale. 
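To make this sort of discounting concrete, here is a toy expected-value sketch. All the probabilities and intensity factors below are invented placeholders for illustration, not empirical estimates:

```python
# Toy model: the moral weight assigned to a being's suffering, discounted
# by (a) our credence that the being is sentient at all and (b) an
# estimated relative intensity of its experience if it is sentient.
# Every number here is a hypothetical placeholder.

def discounted_weight(p_sentient: float, intensity: float) -> float:
    """Expected moral weight per unit of apparent suffering."""
    return p_sentient * intensity

# Hypothetical beings: (credence in sentience, relative intensity)
beings = {
    "human":  (1.00, 1.0),
    "pig":    (0.95, 0.8),
    "shrimp": (0.30, 0.1),
}

for name, (p, i) in beings.items():
    print(f"{name}: {discounted_weight(p, i):.2f}")
```

The point is only structural: empirical uncertainty about sentience or intensity shifts the expected weight, rather than licensing a binary in/out decision about moral concern.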

476 comments

I agree that species membership as such is irrelevant, although it is in practice an extremely powerful summary piece of information about a creature's capabilities, psychology, relationship with moral agents, ability to contribute to society, responsiveness in productivity to expected future conditions, etc.

Animal happiness is good, and animal pain is bad. However, the word anti-speciesism, and some of your discussion, suggest treating experience as binary and ignoring quantitative differences, e.g. here:

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments.

This leaves out the idea of the quantity of experience. In human split-brain patients the hemispheres can experience and act quite independently without common knowledge or communication. Unless you think that the quantity of happiness or suffering doubles when the corpus callosum is cut, then happiness and pain can occur in substructures of brains, not just whole brains. And if intensive communication and coordination were enough to diminish moral value, why does this not apply...

I fully agree with this point you make; I should have mentioned this. I think "probabilistic discounting" should refer to both "probability of being sentient" and "intensity of experiences given sentience". I'm not convinced that (relative) brain size makes a difference in this regard, but I certainly wouldn't rule it out, so this indeed factors in probabilistically and I don't consider this to be speciesist.

Note that by this measure, ants are six times more important than humans. But to address your question: "speciesism" is not a label that's slapped on people who disagree with you. It's merely a shorthand way of saying "many people have a cognitive bias that humans are more 'special' than they actually are, and this bias prevents them from updating their beliefs in light of new evidence." Brain-to-body quotient is one type of evidence we should consider, but it's not a great one. The encephalization quotient improves on it slightly by considering the non-linearity of body size, but there are many other metrics which are probably more relevant.
You linked to a page comparing brain-to-body-weight ratios, rather than any absolute features of the brain, and referring not to ants in general but to unusually miniaturized ants in which the rest of the body is shrunken. That seems pretty irrelevant. I was using total brain mass and neuron count, not brain-to-body-mass. I agree these are relevant evidence about quality of experience, and whether to attribute experience at all. But I would say that quality and quantity of experience are distinguishable (although the absence of experience implies quantity 0).
This statement implies that humans can be more or less special "actually", as if it were a matter of fact, of objective reality. That is not true, however. Humans are special in the same way a roast is tasty or a host charming. It is entirely in the eye of the beholder, it's a subjective opinion and as such there is no "actually" about it. Your point is equivalent to saying "many people have a cognitive bias that roses are more 'pretty' than they actually are".
As mentioned in the original post, the same can be said of race: I may subjectively prefer white people. You might bite the bullet here and say that yes, in fact, racism, sexism etc. are morally acceptable, but I think most people would agree that these -isms are wrong, and so speciesism must also be wrong.
Yes. That's perfectly fine. In fact, if you examine the revealed preferences (e.g. who people prefer to have as their neighbours or who they prefer to marry) you will see that most people in reality do prefer others of their own race. And, of course, the same can be said of sex, too. Unless you are an evenhanded bi, you're most certainly guilty of preferring some specific sex (or maybe gender, it varies).

"Morally acceptable" is a judgement; it is conditional on which morality you're using as your standard. Different moralities will produce different moral acceptability for the same actions. Perhaps you wanted to say "socially acceptable"? In particular, "socially acceptable in contemporary US"? That, of course, is a very different thing.

Sigh. This is a rationality forum, no? And you're using emotionally charged guilt-by-association arguments? (It's actually guilt-by-association by design, since the word "speciesism" was explicitly coined to resemble "racism", etc.) Warning: HERE BE MIND-KILLERS!
Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters? (If you were being ironic, apologies for being so literal-minded.)
I'm fairly sure it's for the examples referencing the politically charged issues of racism and sexism.
It can be levelled at most people who employ either of those terms.
I apologize for presenting the argument in a way that's difficult to understand. Here are the facts:

1. If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that sexism, racism, etc. are acceptable.
2. We* don't believe that sexism, racism, etc. are acceptable.
3. Therefore, we cannot accept arguments based on subjective opinions.

Is there a better way to phrase this? (* "We" here means the broader LW community. I realize that you disagree, but I didn't know that at the time of writing.)
Said Achmiz
Y'got some... logical problems going on, there.

Firstly, your (1), while true, is misleading; it should read "If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that [long, LONG, probably literally infinite list of possible views, of which sexism and racism may be members but which contains innumerably more other stuff] are morally acceptable". Sure, accepting beliefs without evidence may lead us to sexism and/or racism, but that's hardly our biggest problem at that point.

Secondly, you presuppose that sexism and racism are necessarily not based on evidence. Of course, you may say that sexism and racism are by definition not based on evidence, because if there's evidence, then it's not sexist/racist, but that would be one of those "37 Ways That Bad Stuff Can Happen" or what have you; most people, after all, do not use your definition of "sexist" or "racist"; the common definition takes no notice of whether there's evidence or not.

Thirdly, for every modus ponens there is a modus tollens -- and, as in this case, vice versa: we could decide that "subjective" opinions not based on evidence are morally acceptable (after all, we're not talking about empirical matters, right? These are moral positions). This, by your (1) and modus ponens, would lead us to accept sexism and racism. Intended? Or no?

Finally -- and this is the big one -- it strikes me as fundamentally backwards to start from broad moral positions, and reason from them to a decision about whether we need evidence for our moral positions.
There's a bigger logical flaw: "belief that subjective opinions not based on evidence are acceptable" is an ambiguous English phrase. It can mean belief that:

1) if X is a subjective opinion, then X is acceptable
2) there exists at least one X such that X is a subjective opinion and is acceptable

Needless to say, the argument depends on it being #1, while most people who would say such a thing would mean #2. I believe that hairdryers are for sale at Wal-Mart. That doesn't mean that every hairdryer in existence is for sale at Wal-Mart.
Said Achmiz
Yes, good point — the "some" vs. "all" distinction is being ignored.
Good point, thank you. I have tried again here.
Thank you Said for your helpful comments. How is this:

1. Suppose we are considering whether being A is more morally valuable than being B. If we don't require evidence when making that decision, then lots of ridiculous conclusions are possible, including racism and sexism.
2. We don't want these ridiculous conclusions.
3. Therefore, when judging the moral worth of beings, the differentiation must be based on evidence.

Regarding your "Finally" point - I was responding to Lumifer's statement: I agree that most people wouldn't take this position, so my argument is usually more confusing than helpful. But in this case it seemed relevant.
This has the same flaw as before, just phrased a little differently. "Suppose I am ordering a pizza. If we don't require it to be square, then all sorts of ridiculous possibilities are possible, such as a pizza a half inch wide and 20 feet long. We don't want these ridiculous possibilities, so we better make sure to always order square pizzas." "If we don't require evidence, then ridiculous conclusions are possible" can be interpreted in English to mean 1) In any case where we don't require evidence, ridiculous conclusions are possible. 2) In at least one case where we don't require evidence, ridiculous conclusions are possible. Most people who think that the statement is true would be agreeing with it in sense #2, just like with the pizzas. And your argument depends on sense #1. In other words, you're assuming that if evidence isn't used to rule out racism, then nothing else can rule out racism either.
Fair enough. What if we replace (1) with:

1. If we allow subjective opinions, then ridiculous conclusions are possible.

Keep in mind that I was responding to Lumifer's comment: This is not intended to be a grand, sweeping axiom of ethics. I was just pointing out that allowing these subjective opinions proves more than we probably want.
That still has the same flaw. If we allow any and all subjective opinions, then ridiculous conclusions are possible. But it doesn't follow that if we allow some subjective opinions, ridiculous conclusions are possible. And nobody's claiming the former.
The issue isn't whether you require evidence. The issue is solely which moral yardstick you are using. The "evidence" is the application of that particular moral metric to beings A and B, but it seems to me you should be more concerned with the metric itself.

To give a crude and trivial example, if the metric is "Long noses are better than short noses" then the evidence is the length of the noses of A and B, and on the basis of this evidence we declare the long-nosed being A to be more valuable (conditional on this metric, of course) than the short-nosed being B. I don't think you'll be happy with this outcome :-)

Oh, and you are still starting with the predefined conclusion and then looking for ways to support it.
By the way, thank you for spelling out your position with a clear, valid argument that keeps the conversation moving forward. In the heat of argument we often forget to express our appreciation of well-posed comments.
This is not a core belief of the broader LW community. An actual core belief of the LW community:
I'm not sure that is quite true. It is controversial and many are not comfortable with it without caveats.
You keep using that word. I do not think it means what you think it means.

That's curious. My and your ideas of morality are radically different. There's not even that much of a common base. Let me start by re-expressing in my own words how I read your position (so that you can fix my misinterpretations).

First, you're using "morally acceptable" without any qualifiers or conditionals. This means that you believe there is One True Morality, the Correct One, on the basis of which we can and should judge actions and opinions. Given your emphasis on "evidence", you also seem to believe that this One True Morality is objective, that is, can be derived from actual reality and proven by facts.

Second, you divide subjective opinions into two classes: "not based on evidence" and, presumably, "based on evidence". Note that this is not at all the same thing as "falsifiable" vs. "non-falsifiable". For example, let's say I try two kinds of wine and declare that I like the second wine better. Is such a subjective opinion "based on evidence"?

You also have major logic problems here (starting with the all/some issue), but it's a mess and I think other comments have addressed it.

To contrast, I'll give a brief outline of how I view morality. I think of morality as a more or less coherent set of values at the core of which is a subset of moral axioms. These moral axioms are certainly not arbitrary -- many factors influence them, the three biggest probably being biology, societal/cultural influence, and individual upbringing and history -- but they are not falsifiable. You cannot prove them right or wrong. Evidence certainly matters, but it matters mostly at the interface of moral values and actions: evidence tells you whether the actual outcomes of your actions match your intent and your values. It is, of course, often the case that they do not. However, evidence cannot tell you what you should want or what you should value.

Heh. I neither believe you have the power to spea
This does not follow. (It can be repaired by adding an "all" to the antecedent but then then the conclusion in '3' would not follow from 1 and 2.) Basically, no. Your argument is irredeemably flawed.
This does not follow.
The local explanation of this concept is the 2-place word, which I rather like.
Well yes, yes it does. Even if "specialness" is defined purely within human neurology, that doesn't mean you can't apply its criteria to parts of reality and be objectively right or wrong about the result -- just like, say, numbers. Now, you could argue that humans vary with regard to how "special" humanity is to them, I suppose ... but in practice we seem to have a common cause, generally. Alternately, you could complain that paperclippers disagree about our "specialness" (or rather mean something different by the term, since their specialness algorithm returns high values for paperclips and low ones for humans and rocks), and that it is therefore insufficiently objective, but ...
I disagree. Here is the relevant difference: if you're using "special" unconditionally, you're only expressing a fuzzy opinion which is just that, an opinion. To get to the level of facts you need to make your "special" conditional on some specific standard or metric and thus convert it into a measurement. It's still the same as saying that prettiness of roses is objective. Unconditionally, it's not. But if you want to, you can define 'prettiness' sufficiently precisely to make it a measurement and then you can objectively talk about prettiness of roses.
Indeed. The difference being that humans don't all have the same prettiness-metrics, which is why the comparison fails.
Humans all have the same specialness metrics?? I don't think so.
Well, obviously some of them are biased in different directions ... but yeah, it looks to me like CEV coheres. EDIT: Unless I've completely misunderstood you somehow. Far from impossible.
Brain size or number of neurons might work within a general group such as "mammals"; however, birds, for example, seem to be significantly smarter in some sense than mammals with equivalently sized brains, probably owing to some difference in underlying architecture.
Do you have a specific bird and mammal in mind? Brain mass grows with body mass. It's so noisy that people can't decide whether it is the 2/3 or 3/4 power of body mass.* It is said that a mouse is as smart as a cow. What the cow is doing with all that gray matter, I don't know. Smart animals, like apes, dolphins, and ravens have bigger brains than the trend line, but the deviation is small, so they have smaller brains than larger animals. From this point of view, saying that birds are smart for their brain size is just saying that they are small. * probably the right answer is 3/4 and 2/3 is just promoted by people who found 3/4 inexplicable, but Geoffrey West says that denominators of 4 are OK.
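The trend-line idea can be sketched with Jerison's encephalization quotient, which divides actual brain mass by the mass predicted from the allometric fit. This is only a rough illustration: the 2/3 exponent and 0.12 constant are the classic mammalian fit, and the species masses below are approximate textbook figures.

```python
# Encephalization quotient (EQ): actual brain mass relative to the mass
# predicted by the mammalian allometric trend line,
#   expected_brain ~= 0.12 * body_mass ** (2/3), masses in grams.
# The species figures below are rough approximations.

def eq(brain_g: float, body_g: float, exponent: float = 2 / 3, c: float = 0.12) -> float:
    """Ratio of actual brain mass to the trend-line prediction."""
    return brain_g / (c * body_g ** exponent)

species = {
    "human": (1350.0, 65_000.0),   # ~1.35 kg brain, ~65 kg body
    "cow":   (450.0, 700_000.0),   # ~0.45 kg brain, ~700 kg body
    "mouse": (0.4, 25.0),          # ~0.4 g brain, ~25 g body
}

for name, (brain, body) in species.items():
    print(f"{name}: EQ = {eq(brain, body):.2f}")
```

On these figures humans sit far above the trend line (around 7), while both the cow and the mouse land near or below 1 -- roughly the sense in which "a mouse is as smart as a cow" relative to body size.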
Well yeah. Although I guess mammals tend to have bigger brains relative to their bodies, so you'd still expect the opposite?
Some of the relevant differences to look at are energy consumption, synapses, relative emphasis on different brain regions, selective pressure on different functions, sensory vs cognitive processing, neuron and nerve size (which affects speed and energy use), speed/firing rates. I'm just introducing the basic point here. Also see my other point about the distinction between intelligence and experience.
I think there's a link not showing due to broken formatting.
How small a subsystem can experience pleasure or pain? If we developed configurations specifically for this purpose and sacrificed all the other things you normally want out of a brain we could likely get far more sentience per gram of neurons than you get with any existing brain. If someone built a "happy neuron farm" of these, would that be a good thing? Would a "sad neuron farm" be bad? EDIT: expanded this into a top level post.
I don't think that we should be confident that such things are all that matter (indeed, I think that's not true), or that the value is independent of features like complexity (a thermostat program vs an autonomous social robot). I would answer "yes" and "yes," especially in expected value terms.
Isn't it better to consider brain-to-body mass ratios? A lion isn't 1.5 orders of magnitude smarter than a housecat. I wouldn't assume that quantity of experience is linear in the number of neurons.
Computer performance in chess (among many other things) scales logarithmically or worse with computer speeds/hardware. Humans with more time and larger collaborating groups also show diminishing returns. But if we're talking about reinforcement learning and sensory experience in themselves, we're not interested in the (sublinear) usefulness of scaling for intelligence, but the number of subsystems undergoing the morally relevant processes. Neurons are still a rough proxy for that (details of the balance of nervous system tissue between functions, energy supply, firing rates, and other issues would matter substantially), but should be far closer to linear.
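The contrast drawn above between sublinear intelligence scaling and roughly linear scaling of morally relevant subsystems can be made concrete with a toy model. Both proxy functions here are assumptions for illustration, not empirical claims:

```python
import math

def intelligence_proxy(n_neurons):
    # Toy assumption: useful capability grows roughly logarithmically
    # with hardware, as with chess engines and computer speed.
    return math.log(n_neurons)

def experience_proxy(n_neurons):
    # Toy assumption: the number of subsystems undergoing morally
    # relevant processes grows roughly linearly with neuron count.
    return float(n_neurons)

# Doubling the neuron count doubles the experience proxy but only adds
# a constant (log 2) to the intelligence proxy.
```

So on these assumptions, a brain ten times larger is only modestly "smarter" but may host ten times as much morally relevant processing, which is the asymmetry the comment is pointing at.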

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above) and accepting the conclusion that humans who lack certain cognitive capacities are excluded from moral concern. One could point out that people's empathy, and indirect considerations about human rights, societal stability and so on, will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point, after all, to care about all humans (or at least all humans outside their mother's womb).

This is pretty much my view. You dismiss it as unacceptable and absurd, but I would be interested in more detail on why you think that.

a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it

This definitely hits the absurdity heuristic, but I think it is fine. The problem with the Babyeaters in Three Worlds Collide is not that they eat their young but that "the alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering i...

Your view seems consistent. All I can say is that I don't understand why intelligence is relevant for whether you care about suffering. (I'm assuming that you think human infants can suffer, or at least don't rule it out completely, otherwise we would only have an empirical disagreement.)

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Me too. But we can control for memories by comparing the scenario I outlined with a scenario where you are first tortured (in your normal mental state) and then have the memory erased.

Speciesism has always seemed like a straw-man to me. How could someone with a reductionist worldview think that species classification matters morally?

You're right, it's not a big deal once you point it out. The interesting thing is that even a lot of secular people will at first (and sometimes even afterwards) bring arguments against the view that animals matter that don't stand the test of the argument of species overlap. It seems like they simply aren't thinking through all the implications of what they are saying, as if it isn't their true rejection. Having said that, there is always the option of biting the bullet, but many people who argue against caring about nonhumans don't actually want to do that.

I definitely think human infants can suffer, but I think their suffering is different from that of adult humans in an important way. See my response to Xodarap.
Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering. As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in EV between alleviating human and animal suffering is huge: the difference in potential impact on the future between a suffering human and a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal. Basically, alleviating one human's suffering has more potential to help the far future than alleviating one animal's suffering. A human who might currently be too incapacitated to, say, deal with x-risk might become helpful, while an animal is still not going to be consequential on that front. So my opinion winds up being something like "We should help the animals, but not now, or even soon, because other issues are more important and more pressing".
I agree with this point entirely - but at the same time, becoming vegetarian is such a cheap change in lifestyle (given an industrialized society) that you can have your cake and eat it too. Action - such as devoting time or money to animal rights groups - has to be balanced against other action - helping humans - but that doesn't apply very strongly to inaction - not eating meat. You can come up with costs - social, personal, etc. - to being vegetarian, but remember to weigh those costs on the right scale. And most of those costs disappear if you merely reduce meat consumption rather than eliminate it outright.
By saying this, you're trying to gloss over the very reason why becoming vegetarian is not a cheap change. Human beings are wired so as not to be able to ignore having to make many minor decisions or face many minor changes, and the fact that such things cannot be ignored means that being vegetarian actually has a high cost: being mentally nickel-and-dimed over and over again. It's a cheap change in the sense that you can do it without paying lots of money or spending lots of time, but that isn't sufficient to make the choice cheap in all meaningful senses. Or to put it another way, being a vegetarian "just to try it" is like running a shareware program that pops up a nag screen every five minutes and occasionally forces you to type a random phrase in order to continue to run. Sure, it's light on your pocketbook, doesn't take much time, and reading the nag screens and typing the phrases isn't difficult, but that's beside the point.
As has been mentioned elsewhere in this conversation, that's a fully general argument - it can be applied to every change one might possibly make in one's behavior. Let's enumerate the costs, rather than just saying "there are costs."

* Money-wise, you save or break even.
* It has no time cost in much of the US (most restaurants have vegetarian options).
* The social cost depends on your situation - if you have people who cook for you, then you have to explain the change to them (in Washington state, this cost is tiny - people are understanding; in Texas, it is expensive).
* The mental cost is difficult to discuss in a universal way. I found it to be rather small in my own case. Other people claim it to be quite large.

But "I don't want to change my behavior because changing behavior is hard" is not terribly convincing. Your discounting of non-human life has to be rather extreme for "I will have to remind myself to change my behavior" to outweigh an immediate, direct, and calculable reduction in world suffering.
Said Achmiz
This is false. Unless you eat steak or other expensive meats on a regular basis, meat is quite cheap. For example, my meat consumption is mostly chicken, assorted processed meats (salamis, frankfurters, and other sorts of sausages, mainly, but also things like pelmeni), fish (not the expensive kind), and the occasional pork (canned) and beef (cheap cuts). None of these things are pricy; I am getting a lot of protein (and fat and other good/necessary stuff) for my money. Do you eat at restaurants all the time? Learning how to cook the new things you're now eating instead of meat is a time cost. Also, there are costs you don't mention: for instance, a sudden, radical change in diet may have unforeseen health consequences. If the transition causes me to feel hungry all the time, that would be disastrous; hunger has an extreme negative effect on my mental performance, and as a software engineer, that is not the slightest bit acceptable. Furthermore, for someone with food allergies, like me, trying new foods is not without risk.
And it would be correct to deny that a change that would possibly be made to one's behavior is "such a cheap change" that we don't need to weigh the cost of the change very much. That only applies to someone who already agrees with you about animal suffering to a sufficient degree that he should just become a vegetarian immediately anyway. Otherwise it's not all that calculable.
I wasn't able to glean this from your other article either, so I apologize if you've said it before: do you think non-human animals suffer? Or do you believe they suffer, but you just don't care about their suffering? (And in either case, why?)
I think suffering is qualitatively different when it's accompanied by some combination I don't fully understand of intelligence, self awareness, preferences, etc. So yes, humans are not the only animals that can suffer, but they're the only animals whose suffering is morally relevant.

jkaufman, the dimmer-switch metaphor of consciousness is intuitively appealing. But consider some of the most intense experiences that humans can undergo, e.g. orgasm, raw agony, or blind panic. Such intense experiences are characterised by a breakdown of any capacity for abstract rational thought or reflective self-awareness. Neuroscanning evidence, too, suggests that much of our higher brain function effectively shuts down during the experience of panic or orgasm. Contrast this intensity of feeling with the subtle and rarefied phenomenology involved in e.g. language production, solving mathematical equations, introspecting one's thought-episodes, etc - all those cognitive capacities that make mature members of our species distinctively human. For sure, this evidence is suggestive, not conclusive. But the supportive evidence converges with e.g. microelectrode studies using awake human subjects. Such studies suggest the limbic brain structures that generate our most intense experiences are evolutionarily very ancient. Also, the same genes, same neurotransmitter pathways and same responses to noxious stimuli are found in our fellow vertebrates. In view of how humans treat nonhumans, I think we ought to be worried that humans could be catastrophically mistaken about nonhuman animal sentience.

"Accompanied" can also mean "reflected upon after the fact". I agree with your last sentence though.

How certain are you that there is such a qualitative difference, and that you want to care about it? If there is some empirical (or perhaps also normative) uncertainty, shouldn't you at least attribute some amount of concern for sentient beings that lack self-awareness?

I second this. Really not sure what justifies such confidence.
It strikes me that the only "disagreement" you have with the OP is that your reasoning isn't completely spelled out. If you said, for example, "I don't believe pigs' suffering matters as much because they don't show long-term behavior modifications as a result of painful stimuli" that wouldn't be a speciesist remark. (It might be factually wrong, though.)
There's something missing at the end, like "... is morally relevant", right?
Fixed; thanks!
How do you avoid it being kosher to kill you when you're asleep - and thus unable to perform at your usual level of consciousness - if you don't endorse some version of the potential principle? If you were to sleep and never wake, then it wouldn't necessarily seem wrong, even from my perspective, to kill you. It seems like it's your potential for waking up that makes it wrong.
Killing me when I'm asleep is wrong for the same reason as killing me instantly and painlessly when I'm awake is wrong. Both ways I don't get to continue living this life that I enjoy. (I'm not as anti-death as some people here.)
So, presumably, if you were destined for a life of horrifying squicky pain (HSP) some time in the next couple of weeks, you'd approve of me just killing you. I mean, ideally you'd probably like to be killed as close to the onset of the HSP as possible, but still, the future seems pretty important when determining whether you want to persist - it's even in the text you linked. So, bearing in mind that you don't always seem to be performing at your normal level of thought - e.g. when you're asleep - how do you bind that principle so that it applies to you and not infants?
I don't think you should kill infants either, again for the "effect it has on those that remain and because it removes the possibility for future joy on the part of the deceased" logic.
How do you reconcile that with:
The "as long as the people are ok with it" deals with the "effect it has on those that remain". The "removes the possibility for future joy on the part of the deceased" remains, but depending on what benefits the society was getting out of consuming their young it might still come out ahead. The future experiences of the babies are one consideration, but not the only one.
Granted, but do you really think that they're going to be so incredibly tasty that the value people gain from eating babies over not eating babies outweighs the loss of all the future experiences of the babies? To link that back to the marginal cases argument, which I believe - correct me if I'm wrong - you were responding to: Do you think that meat diets are just that much more tasty than vegetarian diets that the utility gained for human society outweighs the suffering and death of the animals? (Which may not be the only consideration, but I think at this point - may be wrong - you'd admit isn't nothing.) If so, have you made an honest attempt to test this assumption for yourself by, for instance, getting a bunch of highly rated veg recipes and trying to be vegetarian for a month or so?
The value a society might get from it isn't limited to taste. They could have some sort of complex and fulfilling system set up around it. But I think you're right, that any world I can think of where people are eating (some of) their babies would be improved by them switching to stop doing that. The "loss of all the future experiences of the babies" bit doesn't apply here. Animals stay creatures without moral worth through their whole lives, and so the "suffering and death of the animals" here has no moral value.
Pigs can meaningfully play computer games. Dolphins can communicate with people. Wolves have complex social structures and hunting patterns. I take all of these to be evidence of intelligence beyond the battery-farmed-infant level. They're not as smart as humans, but it's not like they've got zero potential for developing intelligence. Since birth seems to deprive you of a clear point in this regard - what's your criterion for being smart enough to be morally considerable, and why?
If you're considering opening a baby farm, not opening the baby farm doesn't mean the babies get to live fulfilling lives: it means they don't get to exist, so that point is moot.
If you view human potential as valuable, then you end up saying something like: people should maximise it via breeding, up to whatever the resource boundary is for meaningful human life. Unless that value is implicitly bounded - which I think is a reasonable assumption to make for most people's likely world views.
Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination? What if you were killed immediately afterwards, so long term memories wouldn't come into play?
If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn't matter morally and because I wouldn't be "me" anymore in any meaningful sense. If you offered me the choice between:

A) 50% chance you are tortured and then released, 50% chance you are killed immediately
B) 50% chance you are tortured and then killed, 50% chance you are released immediately

I would strongly prefer B. Is that what you're asking?
If not morally, do the two situations not seem equivalent in terms of your non-moral preference for either? In other words, would you prefer one over the other in purely self interested terms? I was just making the point that if your only reason for thinking that it would be worse for you to be tortured now was that you would suffer more overall through long term memories we could just stipulate that you would be killed after in both situations so long term memories wouldn't be a factor.
I'm sorry, I'm confused. Which two situations? I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I objected to being tortured, but it's not the only way.
A) Being tortured as you are now. B) Having your IQ and cognitive abilities lowered, then being tortured. EDIT: I am asking because it is useful to consider pure self-interest: it seems like a failure of a moral theory if it suggests people act against their self-interest without some compensating goodness. If I want to eat an apple but my moral theory says I shouldn't, even though doing so wouldn't harm anyone else, that seems like a point against that moral theory. Different cognitive abilities would matter in some ways for how much suffering is actually experienced, but not as much as most people think. There are also situations where lower cognitive abilities could increase the amount an animal suffers: while a chicken is being tortured, it would not really be able to hope that the situation will change.
Strong preference for (B), having my cognitive abilities lowered to the point that there's no longer anyone there to experience the torture.
Those are not the same thing. They're not even remotely similar beyond both involving brain surgery. Me too, but I never could persuade the people arguing for it of this fact :(
Agreed. I was attempting to give an example of other ways in which I might find torture more palatable if I were modified first. Right, which is why this argument isn't actually a straw-man and why ice9's post is useful.
Ah, OK. Hah, yes. Sorry, I thought you were complaining it was actually a strawman :/ Whoops.

I strongly object to the term "speciesism" for this position. I think it promotes a mindkilled attitude to this subject ("Oh, you don't want to be speciesist, do you? Are you also a sexist? You pig?").

You pig?

Speciesist language, not cool!

Haha! Anyway, I agree that it promotes a mindkilled attitude (I'm often reading terrible arguments by animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness. And the parallels to racism or sexism are valid, I think.


Haha only serious. My brain reacts with terror to that reply, with good reason: It has been trained to. You're implicitly threatening those who make counter-arguments with charges of every ism in the book. The number of things I've had to erase because one "can't" say them without at least ending any productive debate, is large.

Haha! Anyway, I agree that it promotes a mindkilled attitude (I'm often reading terrible arguments by animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness.

I don't think that's a "but on the other hand"; I think that's an "it is a good way to raise awareness because it promotes a mindkilled attitude."

Said Achmiz
Actually, I think it's precisely the parallels to racism and sexism that are invalid. Perhaps ableism? That's closer, at any rate, if still not really the same thing.
It's not only the term. The post explicitly uses that exact argument: Since sexism and racism are wrong, and any theoretical argument that disagrees with me can be used to argue for sexism or racism, if you disagree with me you are a sexist, which is QED both because of course you aren't sexist/racist and because regardless, even if you are, you certainly can't say such a thing on a public forum!
No no no. I'm not saying "Since sexism and racism are wrong..." - I'm saying that those who don't want their arguments to be of a sort that could analogously justify racism or sexism (even if they themselves are neither racist nor sexist) would also need to reject speciesism.
Mindkilling-related issues aside, I am going to do my best to un-mindkill at least one aspect of this question, which is why the frame change. Is this similar to arguing that if the bloody knife was the subject of an illegal search, which we can't allow because allowing that would lead to other bad things, and therefore is not admissible in trial, then you must not only find the defendant not guilty but actually believe that the defendant did not commit the crime and should be welcome back to polite society?
No, what makes the difference is that you'd be mixing up the normative level with the empirical one, as I explained here (parent of the linked post also relevant).
In that post, you seem to be making the opposite case: that you should not reject X (animal testing) simply because your argument could be used to support repugnant proposal Y (unwilling human testing). You say that the indirect consequences of Y would be very bad (as they obviously would), but then you don't make the argument that one must therefore reject X; instead, you say that you should support X but reject Y for unrelated reasons, and that you are not required to disregard argument Q, which supports both X and Y, and thereby reject X (assuming X was in fact utility-increasing). Or: the fact that a given argument can be used to support a repugnant conclusion (sexism or racism) should not be a justification for not using that argument.

In addition, the argument for brain complexity scaling moral value, which you now accept as an edit, is obviously usable to support sexism and racism in exactly the way you are using as a counterargument: for any given characteristic, different people will have different amounts of that characteristic, and for any two groups (male/female, black/white, young/old, whatever) there will be a statistical difference in that measurement (because this isn't physics, and exact equality has probability epsilon, however small the difference). So if you tie any continuous measurement to your moral valuation of things, or any measurement that could ever not fully apply to everything human, you're racist and sexist.
Exactly. This is because the overall goal is increasing utility, and not a societal norm of non-discrimination. (This is of course assuming that we are consequentialists.) My arguments against discrimination/speciesism apply at the normative level, when we are trying to come up with a definition of utility. I wouldn't classify this as sexism/racism. If there are sound reasons for considering the properties in question relevant, then treating beings of different species differently because of a correlation between species, and not because of the species difference itself, is in my view not a form of discrimination. As I wrote:
It's not sexist to say that women are more likely to get breast cancer. This is a differentiation based on sex, but it's empirically founded, so not sexist. Similarly, we could say that ants' behavior doesn't appear to be affected by narcotics, so we should discount the possibility of their suffering. This is a judgement based on species, but is empirically founded, so not speciesist. Things only become ___ist if you say "I have no evidence to support my view, but consider X to be less worthy solely because they aren't in my race/class/sex/species." I genuinely don't think anyone on LW thinks speciesism is OK.
Said Achmiz
You evade the issue, I think. Is it sexist (or _ist) to say "I consider X to be less worthy because they aren't in my race/class/sex/species, and I do have evidence to support my view"? Sure, saying women are more likely to get breast cancer isn't sexist; but this is a safe example. What if we had hard evidence that women are less intelligent? Would it be sexist to say that, then? (Any objection that contains the words "on average" must contend with the fact that any particular woman may have a breast cancer risk that falls anywhere on the distribution, which may well be below the male average.) No one is saying "I think pigs are less worthy than humans, and this view is based on no empirical data whatever; heck, I've never even seen a pig. Is that something you eat?" We have tons of empirical data about differences between the species. The argument is about exactly which of the differences matter, and that is unlikely to be settled by passing the buck to empiricism.
Upvoted just for this.
I wouldn't say it is, but other people would use the word “sexist” with a broader sense than mine (assuming that each person defines “sexism” and “racism” in analogous ways).
No. Because your statement "X is less worthy because they aren't of my gender" in that case is synonymous with "X is less worthy because they lack attribute Y", and so gender has left the picture. Hence it can't be sexist.
Said Achmiz
Ok, but if you construe it that way, then "X is less worthy just because of their gender" is a complete strawman. No one says that. What people instead say is "people of type T are inferior in way W, and since X is a T, s/he is inferior in way W". Examples: "women are less rational than men, which is why they are inferior, not 'just' because they're women"; "black people are less intelligent than white people, which is why they are inferior, not 'just' ..."; etc. By your construal, are these things not sexist/racist? But then neither is this speciesist: "nonhumans are not self-aware, unlike humans, which is why they are inferior, not 'just' because they're nonhumans".
I think we are getting into a discussion about definitions, which I'm sure you would agree is not very productive. But I would absolutely agree that your statement "nonhumans are not self-aware, unlike humans, which is why they are inferior, not 'just' because they're nonhumans" is not speciesist. (It is empirically unlikely though.)
Said Achmiz
Agreed entirely, let's not argue about definitions. Do we disagree on questions of fact? On rereading this thread, I suspect not. Your thoughts?
I think so? You seem to have indicated in a few comments that you don't believe nonhuman animals are "self-aware" or "conscious" which strikes me as an empirical statement? If this is true (and I give at least 30% credence that I've just been misunderstanding you), I'd be interested to hear why you think this. We may not end up drawing the moral line at the same place, but I think consciousness is a slippery enough subject that I at least would learn something from the conversation.
Said Achmiz
Ok. Yes, I think that nonhuman animals are not self-aware. (Dolphins might be an exception. This is a particularly interesting recent study.) Dolphins aside, we have no reason to believe that animals are capable of thinking about themselves; of considering their own conscious awareness; of having any self-concept, much less any concept of themselves as persistent conscious entities with a past and a future; of consciously reasoning about other minds, or having any concept thereof; or of engaging in abstract reasoning or thought of any kind. I've commented before that one critical difference between "speciesism" and racism or sexism or other such prejudices is that a cow can never argue for its own equal treatment; this, I have said, is not a trivial or irrelevant fact. And it's not just a matter of not having the vocal cords to speak, or of not knowing the language, or any other such trivial obstacles to communication; a cow can't even come close to having the concepts required to understand human behavior, human concepts, and human language. Now, you might not think any of this is morally relevant. Fine. But I would meet with great skepticism — and, sans compelling evidence, probable outright dismissal — any claim that a cow, or a pig, or, even more laughably, a chicken, is self-aware in anything like the sense I outlined above. (By the way, I am reluctant to commit to any position on "consciousness", merely because the word is used in such a diverse range of ways.)
Birds lack a neocortex. But members of at least one species, the European magpie, have convincingly passed the "mirror test" [cf. "Mirror-Induced Behavior in the Magpie (Pica pica): Evidence of Self-Recognition"] Most ethologists recognise passing the mirror test as evidence of a self-concept. As well as higher primates (chimpanzees, orang utans, bonobos, gorillas) members of other species who have passed the mirror test include elephants, orcas and bottlenose dolphins. Humans generally fail the mirror test below the age of eighteen months.
Said Achmiz
You are right, the mirror test is evidence of self-concept. I do not take it to be nearly sufficient evidence, but it is evidence. This supports my view that very young humans are not self-aware (and therefore not morally important) either.
Could you possibly say a bit more about why the mirror test is inadequate as a test of possession of a self-concept? Either way, making self-awareness a precondition of moral status has troubling implications. For example, consider what happens to verbally competent adults when feelings of intense fear turn into uncontrollable panic. In states of "blind" panic, reflective self-awareness and the capacity for any kind of meta-cognition are lost. Panic disorder is extraordinarily unpleasant. Are we to make the claim that such panic-ridden states aren't themselves important - only the memories of such states that a traumatised subject reports when s/he regains a measure of composure and some semblance of reflective self-awareness is restored? A pig, for example, or a prelinguistic human toddler, doesn't have the meta-cognitive capacity to self-reflect on such states. But I don't think we are ethically entitled to induce them - any more than we are ethically entitled to waterboard a normal adult human. I would hope posthuman superintelligence can engineer such states out of existence - in human and nonhuman animals alike.
Said Achmiz
Surely it is a reach to say that the mirror test, alone, with all of its methodological difficulties, can all by itself raise our probability estimate of a creature's possessing self-awareness to near-certainty? I agree that it's evidence, but calling it a test is pushing it, to say the least. To see just one reason why I might say this, consider that we can, right now, probably program a robot to pass such a test; such a robot would not be self-aware. As for the rest of your post, I'd like to take this opportunity to object to a common mistake/ploy in such discussions: "This general ethical principle/heuristic leads to absurdity if applied with the literal-mindedness of a particularly dumb algorithm, therefore reductio ad absurdum." Your argument here seems to be something like: "Adult humans are sometimes not self-aware, but we still care about them, even during those times. Is self-awareness therefore irrelevant??" No, of course it's not. It's a complex issue. But a chicken is never self-aware, so the point is moot. Also: Please provide a citation for this, and I will respond, as my knowledge of this topic (cognitive capacity during states of extreme panic) is not up to giving a considered answer. Having experienced a panic attack on one or two occasions, I am inclined to agree. However, I did not lose my self-concept at those times. Finally: "Ethically entitled" is not a very useful phrase to use in isolation; utilitarianism[1] can only tell us which of two or more world-states to prefer. I've said that I prefer that dogs not be tortured, all else being equal, so if by that you mean that we ought to prefer not to induce panic states in pigs, then sure, I agree. The question is what happens when all else is not equal - which it pretty much never is. [1] You are speaking from a utilitarian position, yes? If not, then that changes things; "ethically entitled" means something quite different to a deontologist, naturally.
Um, "Why don't we stop caring about people who temporarily lose this supposed be-all and end-all of moral value" seems like a valid question, albeit one you hopefully are introspective enough to have an answer for.
Said Achmiz
Is the question "why don't we temporarily stop caring about people who temporarily lose this etc."? If so, then maybe we should, if they really lose it. However, please tell me what actions would ensue from, or be made permissible by, a temporary cessation of caring, provided that I still care about that person after they return from this temporary loss of importance.
That depends on the details of your personal moral system, doesn't it? As I said already, you may well be consistent on this point, but you have not explained how.
Try telling a mother that her baby is not morally important. (I would recommend some training in running and ducking before doing that...)
I find the idea that babies aren't morally important highly unlikely, but did you have to pick the most biased possible example?
Said Achmiz
Is this a rebuttal, or merely a snarky quip? If the latter, then carry on. If the former, please elaborate.
Both. I like multiple levels of meaning. In particular, think about it in the context of whether morality is objective or subjective, what makes subjective opinions morally acceptable, and what is the role of evidence in all this. Specifically, do you think there's any possible evidence that could lead to you and a mother attaching the same moral importance to her baby?
Said Achmiz
Is there any evidence that could lead to the mother assigning her baby the same value as I do? Couldn't tell you. (I've never been a mother.) Vice versa? Probably not. After all, it's possible that two agents are in possession of the same facts, the same true beliefs, and nonetheless have different preferences. So evidence doesn't do very much for us, here. In any case, your objection proves too much: after all, try telling a dog owner that his dog is not morally important. For extra laughs, try telling the owner of a custom-built, lovingly-maintained hot rod that his car is not morally important. People (myself included) get attached to all manner of things. We have to distinguish between valuing something for its own sake (i.e. persons), and valuing things that those persons value (artwork, music, babies, cars, dogs, elegant math theorems, etc.).
I quite agree, but evidently that's a point of contention on this thread. That is true, but I think my quip still stands. I suspect that the mother in my example would strongly insist that the moral value of the baby is high for its own sake and not just because she happens to love the baby (along with her newly remodeled kitchen). Would you call her mistaken?
Said Achmiz
Only if she agrees with me that self-awareness is a key criterion for moral relevance. Of course, the real answer is that mothers are almost never capable of reasoning rationally about their children, especially in matters of physical harm to the child, and especially when the child is quite young. So the fact that a mother would, in fact, insist on this or that isn't terribly interesting. (She might also insist that her baby is objectively the cutest baby in the maternity ward, but so what?)
Same would apply to other things in SaidAchmiz's list, too.
I don't think that is true. For a dog, maybe, for a hot rod, definitely not.
Said Achmiz
What about for the Mona Lisa?
Things are not persons and their price or symbolism does not affect that.
Said Achmiz
My point was: many people would say that the existence of the Mona Lisa is independently good, that it has value for its own sake, regardless of any individual person's appreciation of it. They would be talking nonsense, of course. But they would say it. Just like the mother with the baby. Edit: Also what Nornagest said.
I'm not sure most people treat personhood as the end of the story. It's not uncommon to talk about artistic virtuosity or historical significance as a source of intrinsic value: watch the framing the next time a famous painting gets stolen or a national museum gets bombed or looted in wartime. Granted, it seems clear to me that these things are only important if there are persons to appreciate them, but the question was about popular intuitions, not LW-normative ethics.
The question of whether the aesthetic value of beautiful objects can be terminal is an interesting but unrelated question.
Said Achmiz
Unrelated to what...? The discussion has gone like so: SaidAchmiz: Babies are not morally important. Lumifer: A mother would disagree! SaidAchmiz: Yeah, but that doesn't tell us much, because someone might also disagree with the same thing about the Mona Lisa (Implication: And there, they would clearly be wrong, so the fact that a person makes such a claim is not particularly meaningful.)
Well, do you disagree WRT conclusions? Are you, in fact, a vegetarian?
Said Achmiz
Nope, definitely not a vegetarian. I think that's a broader topic though.
To be absolutely clear: you agree that nonhumans are probably self-aware, feel pain, and so on and so forth, and are indeed worthy of moral consideration ... but for reasons not under discussion here, you are not a vegetarian? Fair enough, I guess. EDIT: Apparently not.
Said Achmiz
Huh? What? Have you been reading my posts?? Are you perhaps confusing me with someone else...? (Though I haven't seen anyone else here take the position you describe either...) Yes, I think nonhumans almost certainly feel pain; no, I don't think they're self-aware; no, I don't think they're worthy of moral consideration. Edit: I don't mean to be harsh on you. Illusion of transparency, I suppose?
No, not really. I just read the post where you said you two agreed on facts and was confused - this is why.
Ah, the slaying of a beautiful hypothesis by one little ugly fact... :-D I do feel speciesism is perfectly fine.
Same here, I think speciesism is a fine heuristic here and now (it may not be so in the future).
If it's a heuristic, then it's not speciesism. If it's a "heuristic" that overrides lots of evidence, then it's speciesism. Which is just another way of saying that you aren't performing a Bayesian update correctly.
The issue, though, is not that beliefs are founded on no evidence. Rather, it is that they are founded on insufficient evidence. It would, in my estimation, require some strange, inhuman bigot to say such a thing; rather, they will hold up their prejudices based on evidence which sounds entirely reasonable to them. There is nearly always a justification for treating the other tribe poorly; healthy human psychology doesn't do well with baseless discrimination, so it invents (more accurately, seeks out with a hefty dose of confirmation bias) reasons that its discrimination is well-founded. In this case, the fact that ants do not appear to be affected by narcotics is evidence that they are different from humans, but it seems that it is insufficient to discount their suffering. I am very curious, however, as to why a lack of behavioral reaction to narcotics indicates that ant suffering is morally neutral. I feel that there is an implicit step I missed there.
The question of pain in insects is incredibly complicated, so please don't take my glib example as anything more than that. But if ants don't have something analogous to opioids, then that would indicate that pain is never "bad" for them, which would be a (non-conclusive) indication that they don't suffer.
Maybe I was already mindkilled (vegetarian speaking), but it seems like a precisely appropriate term to use, given the content of this post. What term would you prefer? [Bonus points: if racism and speciesism were well-known errors of the past, would sexist!you object to the term "sexism" on the same grounds?]
Humanism, maybe. Yes.
That's taken, though ... but then it's been taken before, and repurposed, it's such a catchy word with such lovely connotations.

I would prefer to see posts like this in the Discussion section.

May I ask why?
I think Main should be for posts that directly pertain to rationality. This post doesn't seem to do that. That said, my standards for what belongs in main seem somewhat different from those of other users. For instance I think "The Robots, AI, and Unemployment Anti-FAQ" belongs in Discussion as well, and that post is not only in Main but promoted to boot.

Since grandparent received so many upvotes, I'm going to explain my reasoning for posting in Main:

Rules of thumb:

Your post discusses core Less Wrong topics.

The material in your post seems especially important or useful.


(At least one of) LW's primary goal(s) is to get people thinking about far future scenarios to improve the world. LW is about rationality, but it is also about ethics. Whether anti-speciesism is especially important or useful is something that people have different opinions on, but the question itself is clearly important because it may lead to different/adjusted prioritizing in practice.

I disagree with the FAQ in that respect (among others-- see for instance my thoughts on the use of the term "tapping out"). My preference is that people only post to Main if their post discusses core Less Wrong topics, and maybe not even then.
Said Achmiz
Upvoted for the "directly pertain to rationality" rule of thumb; I agree with that. That said, I thought that the Anti-FAQ was appropriate for Main.
The anti-FAQ was of much higher quality.

A generic problem with this type of reasoning is some form of the repugnant conclusion. If you don't put a Schelling fence somewhere, you end up giving more moral weight to a large enough number of cockroaches, bacteria, or viruses than to humans.

In actuality, different groups of people implicitly have different Schelling points and then argue whose Schelling point is morally right. A standard Schelling point, say, 100 years ago, was all humans or some subset of humans. The situation has gotten more complicated recently, with some including only humans, humans and cute baby seals, humans and dolphins, humans and pets, or just pets without humans, etc.

So a consequentialist question would be something like

Where does it make sense to put a boundary between caring and not caring, under what circumstances and for how long?

Note this is no longer a Schelling point, since no implicit agreement of any kind is assumed. Instead, one tests possible choices against some terminal goals, leaving morality aside.


I feel like you're saying this:

"There are a great many sentient organisms, so we should discriminate against some of them"

Is this what you're saying?

EDIT: Sorry, I don't mean that bacteria or viruses are sentient. Still, my original question stands.

All I am saying is that one has to make an arbitrary care/don't care boundary somewhere, and "human/non-human" is a rather common and easily determined Schelling point in most cases. It fails in some, like the intelligent pig example from the OP, but then every boundary fails on some example.
Where does sentience fail as a boundary?
If sentience isn't a boolean condition.
Why do you say that? Bacteria, viruses etc. seem to lack not just one, but all of the capacities A-H the OP mentioned.
Said Achmiz
Indeed. I've alluded to this before as "how many chickens would I kill/torture to save my grandmother?" The answer, of course, is N, where N may be any number. This means that, if we start with basic (total) utilitarianism, we have to throw out at least one of the following: 1. Additive aggregation of value. 2. Valuing my grandmother a finite amount (as opposed to an infinite amount). 3. Valuing a chicken a nonzero amount. Throwing out #2 leads to incorrect results (it is not the case that I value my grandmother more than anything else — sorry, grandma). Throwing out #1 is possible, and I have serious skepticism about that one anyway... but it also leads to problems (don't I think that killing or torturing two people is worse than killing or torturing one person? I sure do!). Throwing out #3 seems unproblematic.
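The tension among the three assumptions can be made concrete with a toy calculation (the numbers below are illustrative stand-ins, not anyone's actual valuations): with real-valued, additive utilities, any nonzero per-chicken value eventually outweighs any finite value placed on grandma, by the Archimedean property of the reals.

```python
import math

# Hypothetical numbers for illustration only:
GRANDMA_VALUE = 1.0    # finite value on grandma (assumption #2)
CHICKEN_VALUE = 1e-12  # nonzero value per chicken (assumption #3)

def total_value(n_chickens):
    # additive aggregation over chickens (assumption #1)
    return n_chickens * CHICKEN_VALUE

# Some finite number of chickens always crosses the threshold:
n = math.ceil(GRANDMA_VALUE / CHICKEN_VALUE) + 1
assert total_value(n) > GRANDMA_VALUE
```

So if all three assumptions hold, there is some finite N of chickens preferred to grandma; denying that N exists means dropping at least one assumption.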
Relatedly, you could choose to throw out your ability to assess N. When you say N could be any number, that looks to me like scope neglect. I don't have a good sense of what a billion chickens is like, or what a one-in-a-billion chance of dying looks like, and so I don't expect my intuitions to give good answers in that region. If you ask the question as "how many chickens would I kill/torture to extend my grandmother's life by one second?", then, if you do actually value chickens at zero, the answer will again be N, but that seems much less intuitive. So it looks like an answer to the 'save' question that avoids the incorrect results is something like "I don't know how many, but I'm pretty sure it's more than a million."
Said Achmiz
The answer is, indeed, still the same N. I don't find scope neglect to be a serious objection here. It's certainly relevant in cases of inconsistencies, like the classic "how much would you pay to save a thousand / a million birds from oil slicks" scenario, but where is the inconsistency here? Scope neglect is a scaling error — what quantity is it you think I am scaling incorrectly? The "scope neglect" objection also misconstrues what I am saying. When I say "I would kill/torture N chickens to save my grandmother", I am here telling you what I would, in fact, do. Offer me this choice right now, and I will make it. This is the input to the discussion. I have a preference for my grandmother's life over any N chickens, and this is a preference that I support on consideration — it is reflectively consistent. For "scope neglect" to be a meaningful objection, you have to show that there's some contradiction, like if I would torture up to a million chickens to give my grandmother an extra day of life, but also up to a million to give her an extra year... or something to that effect. But there's no contradiction, no inconsistency.
When I imagine sacrificing one chicken, it looks like a voodoo ritual or a few pounds of meat, worth maybe tens of dollars. When I imagine sacrificing a thousand chickens, it looks like feeding a person for several years, and maybe tens of thousands of dollars. When I imagine sacrificing a million chickens, it looks like feeding a thousand people for several years, and maybe tens of millions of dollars. When I imagine sacrificing a billion chickens, it looks like feeding millions of people for several years, and a sizeable chunk of the US poultry industry. When I imagine sacrificing a trillion chickens, it looks like feeding the population of the US for a decade, and several times the global poultry industry. (I know this is in terms of their prey value, but since I view chickens as prey that's how I imagine them, not in terms of individual subjective experience.) And that's only 1e12! There are lots of bigger numbers. What I meant by scope neglect was it looked to me like you took the comparison between one chicken and one human and rounded your impression of their relative values to 0, rather than trying to find a level where you're indifferent between them. When I imagine weighing one person against the global poultry industry, it's not obvious to me that one person is the right choice, and it feels to me that if it's not obvious, you can just increase the number of chickens. One counterargument to this is "but chickens and humans are on different levels of moral value, and it's wrong to trade off a higher level for a lower level." I don't think that's a good approach to morality, and I got the impression that was not your approach since you were reluctant to throw out #2 (which many people who do endorse multi-level moralities are willing to do).
Said Achmiz
I... don't see how your examples/imagery answer my question. It is completely obvious to me. (I assume by "global poultry industry" you mean "that number of chickens", since if we literally eradicated global chicken production, lots of bad effects (on humans) would result.) Don't be so sure! Multi-level morality, by the way, does not necessarily mean that my grandmother occupies the top level all by herself. However, that's a separate discussion; I started this subthread from an assumption of basic utilitarianism. Anyway, I think — with apologies — that you are still misunderstanding me. Take this: There is no level where I'd be indifferent between them. That's my point. Why would I try to find such a level? What moral intuition do you think I might have that would motivate me to try this?
Yes and no. I wasn't aware that you were using a multi-level morality, but agree with you that it doesn't obviously break and doesn't require infinite utilities in any particular level. That said, my experience has been that every multi-level morality I've looked at hard enough has turned out to map to the real line, but because of measurement difficulties it looked like there were clusters of incomparable utilities. It is very hard to tell the difference between a chicken being worth 0 people and 1e-12 people and 1e-24 people, and so when someone says that it's 0 I don't take their confidence as informative. If they're an expert in decision science and eliciting this sort of information, then I do take it seriously, but I'm still suspicious that This Time It's Different. Another big concern here is revealed preferences vs. stated preferences. Many people, when you ask them about it, will claim that they would not accept money in exchange for a risk to their life, but then in practice do that continually- but on the level where they accept $10 in exchange for a millionth chance of dying, for example. One interpretation is that they're behaving irrationally, but I think the more plausible interpretation is that they're acting rationally but talking irrationally. (Talking irrationally can be a rational act, like I talk about here.)
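The implied value-of-a-statistical-life arithmetic behind the example above is simple enough to check (the figures are the comment's illustrative ones, not real survey data):

```python
# Accepting $10 in exchange for a one-in-a-million chance of dying
# prices a statistical life at payment / risk.
payment = 10                       # dollars accepted
chance_of_dying = 1 / 1_000_000    # risk taken on
implied_value = payment / chance_of_dying
assert round(implied_value) == 10_000_000  # ten million dollars per life
```

So someone who takes such trades routinely is, in revealed-preference terms, pricing their life at a finite (if large) number of dollars, whatever they say when asked.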
Said Achmiz
Well, as far as revealed vs. stated preferences go, I don't think we have any way of subjecting my chicken vs. grandmother preference to a real-world test, so I suppose You'll Just Have To Take My Word For It. As for the rest... What would it mean for me to be mistaken about this? Are you suggesting that, despite my belief that I'd trade any number of chickens to save my grandmother, there's some situation we might encounter, some really large number of chickens, faced with which I would say: "Well, shit. I guess I'll take the chickens after all. Sorry, grandma"? I find it very strange that you are taking my comments to be statements about which particular real number value I would assign to a single chicken. I certainly do not intend them that way. I intend them to be statements about what I would do in various situations; which choice, out of various sets of options, I would make. Whether or not we can then transform those preferences into real-number valuations of single chickens, or sets of many chickens, is a question we certainly could ask, but the answer to that question is a conclusion that we would be drawing from the givens. That conclusion might be something like "my preferences do not coherently translate into assigning a real-number value to a chicken"! But even more importantly, we do not have to draw any conclusion, assign any values to anything, and it would still, nonetheless, be a fact about my preferences that I would trade any number of chickens for my grandmother. So it does not make any sense whatsoever to declare that I am mistaken about my valuation of a chicken, when I am not insisting on any such valuation to begin with.
Basically, what you suggested, but generally it manifests in the other direction- instead of some really large number of chickens, it manifests as some really small chance of saving grandma. I should also make clear that I'm not trying to convince you that you value chickens, but that it makes more sense to have real-valued utilities for decision-making than multi-level utility. This is mostly useful when thinking about death / lifespan extension and other sacred values, where refusing to explicitly calculate means that you're not certain the marginal value of additional expenditure will be equal across all possible means for expenditure. For this particular case, it's unlikely that you will ever come across a situation where the value system "grandma first, then chickens" will disagree with "grandma is worth a really big number of chickens," and separating the two will be unlikely to have any direct meaningful impact. But I think the use of these considerations is to develop skills that are useful in other areas. If you are engaged in tradeoffs between secular and sacred values, then honing your skills here can give you more of what you hold sacred elsewhere. I also think it's important to cultivate a mentality where a 1e-12 chance of saving grandma feels different from a 1e-6 chance of saving grandma, rather than your mind just interpreting them both as "a chance of saving grandma."
Said Achmiz
Any chance of saving my grandmother is worth any number of chickens. Well, ok. I am not committed to a multi-level system; I was only formulating a bit of skepticism. That being said, if we are using real-valued utilities, then we're back to either assigning chickens 0 value or abandoning additive aggregation. (Or perhaps even just giving up on utilitarianism as The One Unified Moral System. There are reasons to suspect we might have to do this anyway.) Perhaps. But you yourself say: So I don't think I ought to just say "eh, let's call grandma's worth a googolplex of chickens and call it a day".
Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn't by itself disutility. Disutility is X dead grandmas, where X = N / googleplex. Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by preference, and then any real-value assignment of value to those states is just adding unneeded degrees of freedom. It's just that real values happen to be also be (conveniently) strictly ordered and, when value is actually additive, produce proper orderings for as-yet-unconsidered universe-states. As this comment points out, the additivity of the value of two events which have dependencies has no claim on their additivity when completely independent. Having two pillows isn't having one pillow twice. So I actually don't think you have to give this up to remain rational. Rationality is creating heuristics for the ideal version of yourself, a self of course which isn't ideal in any fundamental sense but rather however you choose to define it. Let's call this your preferred self. You should create heuristics that cause you to emulate your preferred self such that your preferred self would choose you out of any of your available options for doing metaethics, when applying you to the actual moral situations you'll have in your lifetime (or a weighted-by-probability integral over expected moral situations). What I'm saying is that I wouldn't be surprised if that choice has you taking the Value(Chicken) = 0 heuristic. But I do think that the theory doesn't check out, that your preferred self only has theories that checks out, and that most simple explanation for how he forms strict orderings of universe states involves real-number assignment. This all to say, it's not often we need to weigh the moral value of googleplex chickens over grandma, but if it ever came to that we should prefer to do it right.
Said Achmiz
Because, as you say: Indeed, and the right answer here is choosing my grandmother. (btw, it's "googolplex", not "googleplex") Indeed; but... They do not, because if I value grandma N, a chicken M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer. Something has to change. Setting M = 0 is easiest and most consistent with my moral intuitions, and leads to correct results in all choices involving humans. (Of course we might have other motivations for choosing a different path, such as abandoning real-valued utilities or abandoning additive aggregation.) Now here, I am not actually sure what you're saying. Could you clarify? What theory?
I do appreciate the willingness to shut up and do the impossible here. Your certainty that there is no amount of chickens equal to the worth of your grandmother makes you believe you need to give up one of 3 plausible-seeming axioms, and you're not willing to think there isn't a consistent reconciliation. My point about your preferred ethical self is that for him to be a formal agent that you wish to emulate, he is required to have a consistent reconciliation. The suggestion is that most people who claim M = 0, insofar as it relates to N, create inconsistencies elsewhere when trying to relate it to O, P, and Q. Inconsistencies which they as flawed agents are permitted to have, but which ideal agents aren't. The theory I refer to is the one that takes M = 0. These are the inconsistencies that the multi-level morality people are trying to reconcile when they still wish to claim that they prefer a dying worm to a dying chicken. Suffice to say that I don't think an ideal rational agent can reconcile them, but other point was that our actual selves aren't required to (but that we should acknowledge this).
Said Achmiz
I see. I confess that I don't find your "preferred ethical self" concept to be very compelling (and am highly skeptical about your claim that this is "what rationality is"), but I'm willing to hear arguments. I suspect that those would be longer than should be posted deep in a tangential comment thread. You shouldn't take me to have any kind of "theory that takes M = 0"; that is, IMO, a misleading way to talk about this. Setting M = 0 is merely the (apparently, at-first-glance) best resolution of a particular problem that arises when one starts with a certain set of moral intuitions and attempts to resolve them with a certain moral system (total utilitarianism). Does this resolution cause further issues? Maybe; it depends on other moral intuitions that we might have. Can we resolve them? Maybe; perhaps with a multi-tier valuation system, perhaps with something else. My primary point, way back at the beginning of this comment thread, is that something has to give. I personally think that giving up nonzero valuation of chickens is the least problematic on its own, as it resolves the issue at hand, most closely accords with my other moral intuitions, and does not seem, at least at first glance, to create any new major issues. Then again, I happen to think that we have other reasons to seriously consider giving up additive aggregation, especially over the real numbers. By the time we're done resolving all of our difficulties, we might end up with something that barely resembles the simple, straightforward total utilitarianism with real-number valuation that we started with, and that final system might not need to assign the real number 0 to the value of a chicken. Or it still might. I don't know. (For what it's worth, I am indifferent between the worm and the chicken, but I would greatly prefer a Mac SE/30 to either of them.)
Yeah probably. To be honest I'm still rather new to the rodeo here, so I'm not amazing at formalizing and communicating intuitions, which might just be boilerplate for that you shouldn't listen to me :) I'm sure it's been hammered to death elsewhere, but my best prediction for what side I would fall on if I had all the arguments laid out would be the hard-line CS theoretical approach, as I often do. It's probably not obvious why there would be problems with every proposed difficulty for additive aggregation. I would probably annoyingly often fall back on the claim that any particular case doesn't satisfy the criteria but that additive value still holds. I don't think it'd be a lengthy list of criteria though. All you need is causal independence. The kind of independence that makes counterfactual (or probabilistic) worlds independent enough to be separable. You disvalue a situation where grandma dies with certainty equivalently with a situation where all of your 4 grandmas (they got all real busy after the legalization of gay marriage in their country) are subjected to 25% likelihood of death. You do this because you value the possible worlds equally according to their likelihood, and you sum the values. My intuition is that refusing to also sum the values in analogous non-probabilistic circumstances would cause inconsistencies down the line, but I'm not sure.
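The separability claim above reduces to an expected-value calculation, sketched here with the comment's own numbers:

```python
# Valuing possible worlds in proportion to their probability and
# summing makes "one grandma dies for certain" and "each of four
# grandmas faces a 25% risk" equally bad in expectation.
certain_case = 1 * 1.0   # one grandma, probability 1
risky_case = 4 * 0.25    # four grandmas, 25% risk each
assert certain_case == risky_case  # expected deaths: 1.0 in both
```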
Suppose you're walking down the street when you see a chicken trapped under a large rock. You can save it or not. If you save it, it costs you nothing except for your time. Would you save it?
Said Achmiz
Maybe. Realistically, it would depend on my mood, and any number of other factors. Why?
If you would save the chicken, then you think its life is worth 10 seconds of your life, which means you value its life as about 1/200,000,000th of your life as a lower bound.
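A back-of-envelope check of the ~1/200,000,000 figure (the assumed remaining lifespan here is mine, for illustration, not from the comment):

```python
# Ten seconds spent on the rescue, as a fraction of a remaining life.
SECONDS_PER_YEAR = 365.25 * 24 * 3600     # about 3.16e7
remaining_life_s = 65 * SECONDS_PER_YEAR  # assumed remaining lifespan
fraction = 10 / remaining_life_s
assert 1.5e8 < 1 / fraction < 2.5e8       # on the order of 1/200,000,000
```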
Said Achmiz
In your view, how much do I think the chicken's life is worth if I would either save it or not save it, depending on factors I can't reliably predict or control? If I would save it one day, but not save it the next? If I would save a chicken now, and eat a chicken later? I don't take such tendencies to be "revealed preferences" in any strong sense if they are not stable under reflective equilibrium. And I don't have any belief that I should save the chicken. Edit: Removed some stuff about tendencies, because it was actually tangential to the point.
It is problematic once you start fine-graining, exactly like in the dust specks/torture debate, where killing a chicken ~ dust speck and killing your grandma ~ torture. There is almost certainly an unbroken chain of comparables between the two extremes.
Said Achmiz
For what it's worth, I also choose specks in specks/torture, and find the "chain of comparables" argument unconvincing. (I'd be happy to discuss this, but this is probably not the thread for it.) That, however, is not all that relevant in practice: the human/nonhuman divide is wide (unless we decide to start uplifting nonhuman species, which I don't think we should); the smartest nonhuman animals (probably dolphins) might qualify for moral consideration, but we don't factory-farm dolphins (and I don't think we should), and chickens and cows certainly don't qualify; the question of which humans do or don't qualify is tricky, but that's why I think we shouldn't actually kill/torture them with total impunity (cf. bright lines, Schelling fences, etc.). In short, we do not actually have to do any fine-graining. In the case where we are deciding whether to torture, kill, and eat chickens — that is, the actual, real-world case — my reasoning does not encounter any problems.
Do you assign any negative weight to a suffering chicken? For example, is it OK to simply rip off a leg from a live one and make a dinner, while the injured bird is writhing on the ground slowly bleeding to death?
Said Achmiz
Sure. However, you raise what is in principle a very solid objection, and so I would like to address it. Let's say that I would, all else being equal, prefer that a dog not be tortured. Perhaps I am even willing to take certain actions to prevent a dog from being tortured. Perhaps I also think that two dogs being tortured is worse than one dog being tortured, etc. However, I am willing to let that dog, or a million dogs, or any number of dogs, be tortured to save my grandmother from the same fate. What are we to make of this? In that case, some component of our utilitarianism might have to be re-examined. Perhaps dogs have a nonzero value, and a lot of dogs have more value than only a few dogs, but no quantity of dogs adds up to one grandmother; but on the other hand, some things are worth more than one grandmother (two grandmothers? all of humanity?). Real numbers do not behave this way. Perhaps they are not a sufficient number system for utilitarian calculations. (Of course, it's possible to suppose that we could, if we chose, construct various hypotheticals (perhaps involving some complex series of bets) which would tease out some inconsistency in that set of valuations. That may be the case here, but nothing obvious jumps out at me.)
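A hedged sketch of the preference pattern described above: more tortured dogs is worse, but no number of dogs outweighs grandma. A single real number under additive aggregation can't express this, whereas lexicographically compared tuples can; the function name and tiers below are mine for illustration.

```python
def disvalue(grandmas_tortured, dogs_tortured):
    # Python compares tuples element by element, left to right,
    # i.e. lexicographically: the "dog tier" only breaks ties
    # within a fixed "grandma tier".
    return (grandmas_tortured, dogs_tortured)

# Two tortured dogs are worse than one...
assert disvalue(0, 2) > disvalue(0, 1)
# ...but any number of tortured dogs is preferable to grandma
# being tortured, even a googol of googols of them.
assert disvalue(1, 0) > disvalue(0, 10**100)
```

This is essentially the observation that lexicographic orderings are total orders that cannot be embedded in the reals, which is one way of cashing out "real numbers do not behave this way."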
This sounds a bit like the dust specks vs. torture argument, where some claim that no number of dust specks could ever outweigh torture. I think that there we are dealing with scope insensitivity. On utilitarian aggregation, I recommend section V of the following paper; it shows why the alternatives are absurd.
Said Achmiz:
By the way, the dogs vs. grandma case differs in an important way from specks vs. torture: The specks are happening to humans. It is not actually inconsistent to choose TORTURE in specks/torture while choosing GRANDMA in dogs/grandma. All you have to do is value humans (and humans' utility) while not valuing dogs (or placing dogs on a "lower moral tier" than your grandmother/humans in general). In other words, "do many specks add up to torture" and "do many dogs add up to grandma" are not the same question.
That seems a little ad hoc to me. Either you care about dogs (and then even the tiniest non-zero amount of caring should be enough for the argument) or you don't. People often come up with lexical constructs when they feel uncomfortable with the anticipation of having to change their behaviour. As a consequentialist, I figured out that I care a bit about dog welfare, and being aware of my scope insensitivity, I can see why some people dislike biting the bullet that results from simple additive reasoning. An option would be, though, to say that one's brain (and therefore one's moral framework) is only capable of a certain amount of caring for dogs, and that this variable is independent of the number of dogs. That wouldn't work for me, though, for I care about the content of sentient experience in an additive way. But for the sake of the argument: if a hyperintelligent alien (e.g. an AI) came to Earth, how would you propose the alien figure out which mechanisms in the universe should be of moral concern? What would you think of the agent's morality if it discounted your welfare lexically?
Said Achmiz:
Eliezer handled this sort of objection in Newcomb's Problem and Regret of Rationality: What you are doing here is insisting that I conform to your ritual of cognition (i.e. total utilitarianism with real-number valuation and additive aggregation). I see no reason to accede to such a demand. The following are facts about what I do and don't care about: 1) All else being equal, I prefer that a dog not be tortured. 2) All else being equal, I prefer that my grandmother not be tortured. 3) I prefer any number of dogs being tortured to my grandmother being tortured. 4 through ∞) Some other stuff about my preferences, skipped for brevity. #s 2 and 3 are very strong preferences. #1 is less so. Now I want to find a moral calculus that captures those facts. You, on the other hand, are telling me that, first, I must accept your moral calculus, and then, that if I do so, I must toss out one of the aforementioned preferences. I decline to do either of those things. (As Eliezer says in the above link: The utility function is not up for grabs.) I don't know. This is the kind of thing that demonstrates why we need FAI theory and CEV. I would think that its morality is different from mine. Also, I would be sad, because presumably such a morality on the AI's part would result in bad things for me. Your point?
Ok, let's do some basic Friendly AI theory: would a friendly AI lexically discount the welfare of "weaker" beings such as you and me (compared to this hyper-agent)? Could that possibly be an FAI? If not, then I think we should also rethink our moral behaviour towards weaker beings in our game here, for our decisions can correspondingly result in bad things for them. My bad about the ritual; thanks. Out of interest about your preferences: imagine the grandmother and the dog next to each other. A perfect scientist starts to exchange pairs of atoms (let's assume here that both individuals contain the same number of atoms) so that the grandmother gradually transforms into the dog (of course there will be several weird intermediary stages). Because the scientist knows his experiment very well, neither of the subjects will die; in the end it will look as if the two have changed places. At which point does the grandmother stop counting lexically more than the dog? Sometimes continuity arguments can be defeated by saying: "No, I don't draw an arbitrary line; I adjust gradually, caring a lot about the grandmother in the beginning and just very little about the remaining dog at the end." But I think that argument doesn't work here, for we are dealing with a lexical prioritization. How would you act in such a scenario?
You can ask the same question with the grandmother turning into a tree instead of into a dog.
Identity isn't in specific atoms. The effect of swapping a carbon atom in the grandma with a carbon atom in the dog is none at all.
Said Achmiz:
Jiro's response shows one good reason why I don't find that thought experiment very interesting. Another obvious reason is its extreme implausibility and, I strongly suspect, actual incoherence (given what we know about physics and biology). I think I can safely say "I have no idea what I would prefer", much like Eliezer finds no reason to answer how he would explain his arm being turned into a blue tentacle, and not have that be counted against me. On to FAI theory: By definition, it would not, because if it did, then it would be an Unfriendly AI. How do you get from facts about the behavior of an FAI to claims about how we should act? I spy one of those pesky "is-ought" transitions that bedeviled Hume! Corollary: why should we care that our behavior results in bad things for animals? Isn't that the question in the first place, and doesn't your statement beg said question?
Said Achmiz:
As I've said elsewhere in this thread, I also choose SPECKS in specks/torture. As for the paper, I will read it when I have time, and try to get back to you with my thoughts. Edit: And see this thread for a discussion of whether scope neglect applies to my views.
Hence this recent post on surreal utilities.
My suspicion is that what has to give is the assumption of unlimited transitivity in VNM, but I never bothered to flesh out the details.
Actually, I believe it's the continuity axiom that rules out lexicographic preferences.
I examined that one too, but the continuity axiom intuitively makes sense for comparables, except possibly in cases of extreme risk aversion. I am leaning more toward abandoning the transitivity chain when the options are too far apart: something like A > B carrying an uncertainty that increases with the chain length between A and B, or with some other quantifiable value.
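For what it's worth, the continuity failure can be checked mechanically. Assuming the lexicographic preference is extended to lotteries by comparing expected-harm vectors lexicographically (an assumption on my part; other extensions are possible), no mixing probability makes the best/worst gamble indifferent to the middle outcome:

```python
# Sketch: lexicographic lotteries violate the VNM continuity axiom.
# Outcomes are scored as (grandma-harm, dog-harm) vectors; lexicographically
# smaller is better. Continuity would require some p in (0, 1) making
#   p * NOTHING + (1 - p) * GRANDMA_TORTURED
# exactly as good as DOG_TORTURED for sure. No such p exists.

NOTHING = (0.0, 0.0)
DOG = (0.0, 1.0)      # dog tortured: middle outcome
GRANDMA = (1.0, 0.0)  # grandma tortured: worst outcome

def mix(p, a, b):
    """Expected harm vector of the lottery p*a + (1-p)*b."""
    return tuple(p * x + (1 - p) * y for x, y in zip(a, b))

def lex_compare(a, b):
    """-1 if a is better (lexicographically less harm), 0 if indifferent, 1 if worse."""
    return (a > b) - (a < b)

# Scan a fine grid of interior probabilities for an indifference point:
indifference_points = [
    p / 1000 for p in range(1, 1000)
    if lex_compare(mix(p / 1000, NOTHING, GRANDMA), DOG) == 0
]
assert indifference_points == []  # continuity fails: no interior p works
```

Any p < 1 leaves positive expected grandma-harm, which dominates the comparison, so the mixed lottery is always strictly worse than the dog-torture outcome.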
Said Achmiz:
This point is of course true, hence my "all else being equal" clause. I do not actually spend any time helping dogs, for pretty much exactly the reasons you list: there are matters of human benefit to attend to, and dogs are strictly less important. Your last paragraph is mostly moot, since the behavior you allude to is not at all my actual behavior, but I would like to hear a bit more about the behavior model you refer to. (A link would suffice.) I'm not entirely sure what the relevance of the speed limit example is.
The problem with throwing out #3 is you also have to throw out: (4) How we value a being's moral worth is a function of their abilities (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above) Which is a rather nice proposition. Edit: As Said points out, this should be: (4) How we value a being's pain is a function of their ability to feel pain (or other faculties that in some way relate to pain, e.g. the abilities A-G listed above)
Said Achmiz:
You don't, actually. For example, the following is a function: Let a be a variable representing the abilities of a being. Let E(a) be the ethical value of a being with abilities a. The domain is the nonnegative reals, the range is the reals. Let H be some level of abilities that we have chosen to identify as "human-level abilities". We define E(a) thus: for a < H, E(a) = 0; for a ≥ H, E(a) = f(a), where f(x) is some other function of our choice.
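As a sketch, that definition translates directly into code; the values of H and f below are placeholders, as in the comment:

```python
# Sketch of the step-function valuation E(a) described above.
# H and f are placeholders; any nonnegative threshold and any
# real-valued function of our choice would do.

H = 1.0  # hypothetical "human-level abilities" threshold

def f(a):
    return a  # placeholder inner function, used for a >= H

def E(a):
    """Ethical value of a being with ability level a >= 0."""
    if a < 0:
        raise ValueError("abilities are nonnegative by assumption")
    return 0.0 if a < H else f(a)

assert E(0.5) == 0.0  # sub-threshold beings get zero weight
assert E(2.0) == 2.0  # at or above threshold, value is f(a)
```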
Fair enough. I've updated my statement: Otherwise we could let H be "maleness" and justify sexism, etc.
Said Achmiz:
Uh, would you mind editing your statement back, or adding a note about what it said before? Otherwise I am left looking like a crazy person, spouting non sequiturs ;) Edit: Thanks! Anyway, your updated statement is no longer vulnerable to my objection, but neither is it particularly "nice" anymore (that is, I don't endorse it, and I don't think most people here who take the "speciesist" position do either). (By the way, letting H be "maleness" doesn't make a whole lot of sense. It would be very awkward, to say the least, to represent "maleness" as some nonnegative real number; and it assumes that the abilities captured by a are somehow parallel to the gender spectrum; and it would make it so that we value male chickens but not human women; and calling "maleness" a "level of abilities" is pretty weird.)
Haha, sure, updated. But why don't you think it's "nice" to require abilities to be relevant? If you feel pain more strongly than others do, then I care more about when you're in pain than when others are in pain.
Said Achmiz:
I probably[1] do as well... ... provided that you meet my criteria for caring about your pain in the first place — which criteria do not, themselves, have anything directly to do with pain. (See this post). [1] Well, at first glance. Actually, I'm not so sure; I don't seem to have any clear intuitions about this in the human case — but I definitely do in the sub-human case, and that's what matters.
Well, if you follow that post far enough you'll see that the author thinks animals feel something that's morally equivalent to pain, s/he just doesn't like calling it "pain". But assuming you genuinely don't think animals feel something morally equivalent to pain, why? That post gives some high level ideas, but doesn't list any supporting evidence.
Said Achmiz:
I had a longer response typed out, about what properties make me assign moral worth to entities, before I realized that you were asking me to clarify a position that I never took. I didn't say anything about animals not feeling pain (what does "morally equivalent to pain" mean?). I said I don't care about animal pain. ... the more I write this response, the more I want to ask you to just reread my comment. I suspect this means that I am misunderstanding you, or in any case that we're talking past each other.
I apologize for the confusion. Let me attempt to summarize your position: 1. It is possible for subjectively bad things to happen to animals 2. Despite this fact, it is not possible for objectively bad things to happen to animals Is that correct? If so, could you explain what "subjective" and "objective" mean here - usually, "objective" just means something like "the sum of subjective", in which case #2 trivially follows from #1, which was the source of my confusion.
Said Achmiz:
I don't know what "subjective" and "objective" mean here, because I am not the one using that wording. What do you mean by "subjectively bad things"?
My intuition here is solid to a hilariously unjustified degree on "10^20".

None of the above criteria except (in some empirical cases) H imply that human infants or late stage demented people should be given more ethical consideration than cows, pigs or chickens.

This strikes me as a very impatient assessment. The human infant will turn into an adult human, and the piglet will turn into a pig, and so down the road criteria A through E will suggest treating them differently.

Similarly, the demented can be given the reverse treatment (though it works differently); they once deserved moral standing, and thus are extended moral standing because the extender can expect that when their time comes, they will be treated by society in about the same way as society treated its elders when they were young. (This mostly falls under B, except the reciprocation is not direct.)

(Looking at the comments, Manfred makes a similar argument more vividly over here.)

If we use cognitive enhancements on animals, we can turn them into highly intelligent, self-aware beings as well. And the argument from potentiality would also prohibit abortion or experimentation on embryos. I was thinking about including the argument from potentiality, but then I didn't because the post is already long and because I didn't want to make it look like I was just "knocking down a very weak argument or two". I should have used a qualifier though in the sentence you quoted, to leave room for things I hadn't considered.

And then arguments A through E will not argue for treating the enhanced animals differently from humans. It would also make the difference between abortion and infanticide small; it does seem to me that the arguments for allowing abortion but not allowing infanticide are weak, and that the most convincing one hinges on legal convenience. I think this is a hazard for any "arguments against X" post: the reason X is controversial is generally that there are many arguments on both sides, and an argument that seems strong to one person seems weak to another.
What level of "potential" is required here? A human baby has a certain amount of potential to reach whatever threshold you're comparing it against - if it's fed, kept warm, not killed, etc. A pig also has a certain level of potential - if we tweak its genetics. If we develop AI, then any given pile of sand has just as much potential to reach "human level" as an infant. I would be amused if improved engineering knowledge gave beaches moral weight (though not completely opposed to the idea). Your proposed category - "can develop to contain morally relevant quantity X" - tends to fail along similar edge cases as whatever morally relevant quality it's replacing.
I have given a gradualist answer to every question related to this topic, and unsurprisingly I will not veer from that here. The value of the potential is proportional to the difficulty involved in realizing that potential, as the value of oil in the ground depends on what lies between you and it.

Vanvier, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves? Young human children with genetic disorders are given love, care and respect - even if the nature of their illness means they will never live to see their third birthday. We don't hold their lack of "potential" against them. Likewise, pigs are never going to acquire generative syntax or do calculus. But their lack of cognitive sophistication doesn't make them any less sentient.

My intuitions say the former. I would not be averse to a quick end for young human children who are not going to live to see their third birthday. Agreed, mostly. (I think it might be meaningful to refer to syntax or math as 'senses' in the context of subjective experience and I suspect that abstract reasoning and subjective sensation of all emotions, including pain, are negatively correlated. The first weakly points towards valuing their experience less, but the second strongly points towards valuing their experience more.)
Vanvier, you say that you wouldn't be averse to a quick end for young human children who are not going to live to see their third birthday. What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass a typical human toddler or mature pig?
I'm not sure what this would look like, actually. The first thing that comes to mind is Down's Syndrome, but my impression is that that's a much smaller reduction in cognitive capacity than the one you're describing. The last time I considered the issue, I favored abortion in the presence of a positive amniocentesis test for Down's, and I suspect that the more extreme the reduction, the easier it would be to come to that conclusion. I hope you don't mind that this answers a different question than the one you asked - I think there are significant (practical, if not also moral) differences between gamete selection, embryo selection, abortion, infanticide, and execution of adults (sorted from easiest to justify to most difficult to justify). I don't think execution of cognitively impaired adults would be justifiable under modern American economic constraints on grounds other than danger posed to others.
Historically, we have dismissed very obviously sapient people as lacking moral worth (people with various mental illnesses and disabilities, and even the freaking Deaf). Since babies are going to have whatever-makes-them-people at some point, it may be more likely that they already have it and we don't notice, rather than they haven't yet. That's why I'm a lot iffier about killing babies and mentally disabled humans than pigs.
Speaking as a vegetarian for ethical reasons ... yes. That's not to say they don't deserve some moral consideration based on raw brainpower/sentience and even a degree of sentimentality, of course, but still.
My sperm has the potential to become human. When I realized almost all of them were dying because of my continued existence, I decided that I will have to kill myself. It was the only rational thing to do.
It seems to me there is a significant difference between requiring an oocyte to become a person and requiring sustenance to become a person. I think about half of zygotes survive the pregnancy process, but almost all sperm don't turn into people.
Would this difference disappear if we developed the technology to turn millions of sperm cells into babies?
Doesn't our current cloning technology allow us to turn any ordinary cell into a baby, albeit one with aging-related diseases?
Probably, but in such a world, I don't think human life would be scarce, and I think that the value of human life would plummet accordingly. They would still represent a significant time and capital investment, and so be more valuable than the em case, but I think that people would be seen as much more replaceable. It is possible that human reproduction is horrible by many moral standards which seem reasonable. I think it's more convenient to jettison those moral standards than reshape reproduction, but one could imagine a world where people were castrated / had oophorectomies to prevent gamete production, with reproduction done digitally from sequenced genomes. It does not seem obviously worse than our world, except that it seems like a lot of work for minimal benefit.
Is it possible to create some rule like this? Yeah, sure. The problem is that you have to explain why that rule is valid. If two babies are being tortured and one will die tomorrow but the other grows into an adult, your rule would claim that we should only stop one torture, and it's not clear why since their phenomenal pain is identical.
It comes from valuing future world trajectories, rather than just valuing the present. I see a small difference between killing a fetus before delivery and an infant after delivery, and the difference I see is roughly proportional to the amount of time between the two (and the probability that the fetus will survive to become the infant). These sorts of gradual rules seem to me far more defensible than sharp gradations, because the sharpness in the rule rarely corresponds to a sharpness in reality.
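A minimal sketch of such a gradual rule, with the functional form and constant as purely illustrative assumptions:

```python
# Sketch: the extra wrongness of killing the infant over aborting the
# fetus, modeled as roughly proportional to the time between the two
# acts and the probability that the fetus survives to become the infant.
# The proportionality constant K and the units are illustrative only.

K = 1.0  # arbitrary proportionality constant

def wrongness_gap(weeks_apart, p_fetus_survives):
    """Difference in wrongness between the later and earlier killing."""
    return K * weeks_apart * p_fetus_survives

# Hours apart (delivery itself): the gap is tiny...
assert wrongness_gap(0.01, 0.99) < 0.01
# ...and it grows smoothly with elapsed time, with no sharp step at birth.
assert wrongness_gap(2, 0.95) < wrongness_gap(40, 0.95)
```

The point of the sketch is only that the rule is continuous: there is no input at which the output jumps, unlike a bright-line rule keyed to the moment of delivery.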
What about a similar gradual rule for varying sentience levels of animal?
A quantitative measure of sentience seems much more reasonable than a binary measure. I'm not a biologist, though, and so don't have a good sense of how sharp the gradations of sentience in animals are; I would naively expect basically every level of sentience from 'doesn't have a central nervous system' to 'beyond humans' to be possible, but don't know if there are bands that aren't occupied for various practical reasons.
I don't think anyone is advocating a binary system. No one is supporting voting rights for pigs, for example.
While sliding scales may more accurately represent reality, sharp gradations are the only way we can come up with a consistent policy. Abortion especially is a case where we need a bright line. The fact that we have two different words (abortion and infanticide) for what amounts to a difference of a couple of hours is very significant. We don't want to let absolutely everyone use their own discretion in difficult situations. Most policy arguments are about where to draw the bright line, not about whether we should adopt a sliding scale instead, and I think that's actually a good idea. Admitting that most moral questions fall under a gray area is more likely to give your opponent ammunition to twist your moral views than it is to make your own judgment more accurate.
Some people value the future-potential of things and even give them moral value in cases when the present-time precursor or cause clearly has no moral status of its own. This corresponds to many people's moral intuitions, and so they don't need to explain why this is valid.
If you believe the sole justification for a moral proposition is that you think it's intuitively correct, then no one is ever wrong, and these types of articles are rather pointless, no?
I'm a moral anti-realist. I don't think there's a "true objective" ethics out there written into the fabric of the Universe for us to discover. That doesn't mean there is no such thing as morals, or that debating them is pointless. Morals are part of what we are, and we perceive them as moral intuitions. Because we (humans) are very similar to one another, our moral intuitions are also fairly similar, and so it makes sense to discuss morals, because we can influence one another, change our minds, better understand each other, and come to agreement or trade values. Nobody is ever "right" or "wrong" about morals. You can only be right or wrong about questions of fact, and the only factual, empirical thing about morals is what moral intuitions some particular person has at a point in time.
If we can only stop one, sure. If we could stop both, why not do so?
If Alice bets $10,000 against $1 on heads and Bob bets $10,000 against $1 on tails, they're both idiots, even though only one of them will lose.
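The point can be made with a one-line expected-value calculation: with a fair coin, both bets are identically bad ex ante, even though exactly one bettor will win.

```python
# Sketch: both bettors have the same negative expected value before the
# flip, which is why both are "idiots" even though only one loses.

p_heads = 0.5
ev_alice = p_heads * 1 + (1 - p_heads) * (-10_000)  # Alice bets on heads
ev_bob = (1 - p_heads) * 1 + p_heads * (-10_000)    # Bob bets on tails

assert ev_alice == ev_bob == -4999.5  # identical, and badly negative
```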

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd!

That's a common fallacy. Let me illustrate:

The notions of hot and cold water are nonsensical. The water temperature is continuous from 0C to 100C. How would you divide this into distinct areas? You would have to draw a line between neighboring values different by tiny fractions of a degree, but that seems absurd!

I'm not the one arguing for dividing this up into distinct areas; my whole point was to just look at the relevant criteria and nothing else. If the relevant criterion is temperature, you get a gradual scale for your example. If it is sentience, you have to look at each individual animal separately and ignore species boundaries.
Right, you're the one arguing for complete continuity in species space and a lack of boundaries between species, similar to the lack of a boundary between cold and hot water. I'm confused: you seem to think it's useful to sit by an anthill and test each individual ant for sentience?
I think "animal" was used in the sense of "kind of animal" here.
For a morally relevant example, it is quite absurd to suppose that humans aged 18 years and 0 days are mature enough to vote, whereas humans aged 17 years and 364 days are not mature enough. So voting ages are morally unacceptable? Ditto: ages for drinking alcohol, sexual consent, marriage, joining the armed services etc.
Actually, there is a case to be made that they are. Discrimination by category membership, instead of on a spectrum, means that candidates with more merit are passed over in favor of ones with less merit - particularly problematic in the case of species. The right of a person to be judged on their merits, if asked about in the abstract, would be accepted. The only counter-case I can think of is to say that society simply does not have the resources to discriminate (since discrimination it is) more precisely. However, even this does not entirely work out, as within limits society could easily improve its classification methods to better allow for unusual cases.
The main advantage of simple discrimination rules is that they are less subject to Goodhart's law.
If you're going to say that "hot" and "cold" are absolute things rather than points on a continuous spectrum, yes. Similarly, it is absurd to say that species is an absolute thing rather than an arbitrary system of classification imposed on various organisms which fit into types broadly at best.
The usual solution involving water temperature is to have levels of suitability. I want to shower in hot water, not cold water. Absurd? Not really - just simplified. In fact, the joy I gain from a shower is a continuous function of water temperature with a peak somewhere near 45C; the first formulation just approximated this with a piecewise linear function for convenience. Carrying the analogy back, we can propose that the moral weight of suffering is proportional to the sentience of the sufferer. Estimating degrees of sentience now becomes important. ISTR that research review boards have stricter standards for primates than for rodents, and for rodents than for insects, so apparently this isn't a completely strange idea.
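A minimal sketch of that approximation; the curve shape, peak, and cutoff below are my own illustrative assumptions:

```python
# Sketch: "hot vs. cold" as a crude binary approximation of a continuous
# shower-joy curve peaking near 45C. Shapes and numbers are illustrative.

PEAK_C = 45.0

def joy_continuous(temp_c):
    """Smooth preference: maximal at 45C, falling off linearly on both sides."""
    return max(0.0, 1.0 - abs(temp_c - PEAK_C) / 40.0)

def joy_binary(temp_c):
    """The 'hot water good, cold water bad' simplification."""
    return 1.0 if temp_c >= 35.0 else 0.0  # assumed cutoff

# The binary rule agrees with the continuous curve in the easy cases...
assert joy_binary(45.0) == 1.0 and joy_continuous(45.0) == 1.0
assert joy_binary(5.0) == 0.0 and joy_continuous(5.0) == 0.0
# ...but flattens real differences the continuous curve still registers:
assert joy_binary(36.0) == joy_binary(80.0)
assert joy_continuous(36.0) > joy_continuous(80.0)
```

Carrying the analogy back: replace temperature with an estimated degree of sentience and the same contrast holds between a graded moral weight and a binary human/nonhuman rule.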

"If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist."

David Pearce sums up antispeciesism excellently saying:

"The antispeciesist claims that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect."

If one takes "other things being equal" very seriously that could be quite vacuous, since there are so many differences in other areas, e.g. impact on society and flow-through effects, responsiveness of behavior to expected treatment, reciprocity, past agreements, social connectedness, preferences, objective list welfare, even species itself...

The substance of the claim has to be about exactly which things need to be held equal, and which can freely vary without affecting desert.

Any speciesist is happy to agree with that. She simply thinks that species is one of the things that has to be equal.
Larks, all humans, even anencephalic babies, are more sentient than all Anopheles mosquitoes. So when human interests conflict irreconcilably with the interests of Anopheles mosquitoes, there is no need to conduct a careful case-by-case study of their comparative sentience: identifying species membership alone is enough. By contrast, most pigs are more sentient than some humans. Unlike the antispeciesist, the speciesist claims that the interests of the human take precedence over the interests of the pig simply in virtue of species membership (cf.: heart-warming, yes, but irrational altruism - by antispeciesist criteria at any rate). I try to say a bit more (without citing the Daily Mail) here:
I don't see how this is relevant to my argument. I'm just pointing out that your definition doesn't track the concept you (probably) have in mind; I wasn't saying anything empirical* at all. *other than about the topology of concept-space.
Larks, by analogy, could a racist acknowledge that, other things being equal, conscious beings of equivalent sentience deserve equal care and respect, but race is one of the things that has to be equal? If you think the "other things being equal" caveat dilutes the definition of speciesism so it's worthless, perhaps drop it - I was just trying to spike some guns.
If we drop the caveat, anti-speciesism is obviously false. For example, moral, successful people deserve more respect than immoral unsuccessful people, even if both are of equal sentience.
There are plenty of people who would disagree with that. But what do you mean by "respect", and on what grounds do you give it or withhold it?
Said Achmiz:
By the way... what the heck is "equivalent sentience", exactly?
Surely the antispeciesist claims that nothing else needs to be equal?

A fine piece. I hope it triggers a high-quality, non-mindkilled debate about these important issues. Discussion about the ethical status of non-human animals has generally been quite heated in the past, though happily this trend seems to have reversed recently (see posts by Peter Hurford and Jeff Kaufman).

Also, the standard argument against a short, reasonable-looking list of ethical criteria: no such list will capture the complexity of value. Such lists constitute fake utility functions.

My utility function feels quite real to me, and I prefer simplicity and elegance over complexity. Besides, I think you can still have lots of terminal values and not discriminate against animals (in terms of suffering); I don't think those are mutually exclusive.

Some may be tempted to think about the concept of "species" as if it were a fundamental concept, a Platonic form.

The biggest improvement to this post I would like to see is engagement with opposing arguments more realistic than "humans are a Platonic form." Currently you just knock down a very weak argument or two and then rush to a conclusion.

EDIT: whoops, I missed the point, which is to argue only against speciesism. My bad. Edited out a misplaced "argument from future potential," which is what Jabberslythe replied to...

The section you quote from is quite obvious, and I could probably have cut it down to a minimum given that this is LW. You make a good point: one could, for instance, have a utility function that includes a gradual continuum downwards in evolutionary relatedness or relevant capabilities, and so on. This would be consistent and not speciesist. But there would be infinitely many ways of defining how steeply moral relevance declines, or whether the decline is linear or not. I guess I could argue: "if you're going for that amount of arbitrariness anyway, why even bother?" The function would not just depend on outward criteria like the capacity for suffering, but also on personal reasons for our judgments, which is very similar to what I have summarized under H.
Yes, value is complex. So what? The utility function is not up for grabs.
I think the relevant point is the part about racism, sexism, etc. If we allow moral value to depend on things other than the beings' relevant attributes, then sure, we can be speciesist. But we can also be racist, sexist, ...
Those two babies differ in that they have different futures, so it would not be wrong to treat them differently such that suffering is minimized (and you should). And it would not be speciesist to do so, because there is that difference.

DISCLAIMER: the following does not necessarily represent my own opinions or beliefs, but is offered more in the spirit of steelmanning:

There seem to be a number of signs that the deciding factor might be the ability to form long-term memories, especially if we go into very near mode.

  • It seems that if we extrapolate volition for an individual that is made to suffer with or without memory blocking in various sequences, and allow it to choose tradeoffs, it'll repeatedly observe clicking a button labelled "suffer horrific torture with suppressed memory" follo

...
Without also functioning as pain control, or in addition to that role? In either case, I'd be interested to know which anaesthetics these are; it seems like there might be interesting literature on them. (For instance, I'm curious to know whether they are first-line choices, or just used when there is no viable alternative.)
Yes, very interesting:
I don't know; if you find out, please tell me.

While I was writing this comment, CarlShulman posted his, which makes essentially the same point. But since I had already written a longer comment, I'm posting mine too. (Writing quickly is hard!)

In practice we must have a quantitative model of how much "moral value" to assign an animal (or human). I think your position that:

x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads.

is wrong, and the reasons for that fall out of your own arguments.

As you point out, ...

By this I meant literally the same amount (and intensity!) of suffering. So I agree with the point you and Carl Shulman make: if it is the case that some animals can only experience so much suffering, then it makes sense to value them accordingly. I'm arguing for 1), but I would only do it by species in order to save time on calculations. If I had infinite computing power, I would do the calculation for each individual separately, according to indicators of what constitutes capacity for suffering and its intensity. Incidentally, I would also assign at least a 20% chance that brain size doesn't matter; some people in fact hold this view.

By "utilitarianism" I meant hedonistic utilitarianism in general, not your personal utility function that (in this scenario) differentiates between sapience and mere sentience. I added this qualifier because "you'd have to be okay with torturing babies" is not a reductio, since utilitarians would have to bite this bullet anyway if they could thereby prevent an even greater amount of suffering in the future.

I only have my first-person evidence to go on. This bothers me a lot, but I'm assuming that some day we will have solved all the problems in philosophy of mind and can then map out what we mean precisely by "sentience", having it correspond to specific implemented algorithms or brain states.

I agree, those are simply the two premises on which the conclusion that we should value all suffering equally is based. You end up with a coherent position by rejecting one or both of them.
What evidence do you have for thinking that your first-person intuitions about sentience "cut reality at its joints"? Maybe if you analyze what goes through your head when you think "sentience", and then try to apply that to other animals (never mind AIs or aliens), you'll just end up measuring how different those animals are from humans in a completely arbitrary and morally-unimportant implementation feature. If after solving all the problems of philosophy you found out something like this, would you accept it, or would you say that "sentience" was no longer the basis of your morals? In other words, why might you prefer this particular intuition to other intuitions that judge how similar something is to a human?
If I understand it correctly, this is the position endorsed here. I don't think realizing that this view is right would change much for me; I would still try to generalize criteria for why I care about a particular experience and then care about all instances of the same thing. However, I realize that this would make it much more difficult to convince others to draw the same lines. If the question of whether a given being is sentient translates into whether I have reasons to care about that being, then one part of my argument would fall away. This issue doesn't seem to be unique to the treatment of non-human animals, though; you'd have it with any kind of utility function that values well-being.

What properties do human beings possess that makes us think that it is wrong to torture them?

Does it have to be the case that "the properties that X possesses" is the only relevant input? It seems to me that the properties possessed by the would-be torturer or killer are also relevant.

For instance, if I came across a kid torturing a mouse (even a fly) I would be horrified, but I would respond differently to a cat torturing a mouse (or a fly).

What if it is done by a baby or a kid with mental impairments so she cannot follow moral/social norms? I see no reason to treat the situation differently in such a case. (Except that one might want to talk to the parents of the kid in order to have them consider a psychological check-up for their child.)
Differently from a normal kid, or differently from a cat? (I share Morendil's moral intuitions regarding his example.)
From the cat. I would in fact press a magic button that turns all carnivores into vegans. The cat (or the kid) doesn't know what it is doing and cannot be meaningfully blamed, but I still consider this to be a harmful action and I would want to prevent it. Who commits the act makes no difference to me (or only for indirect reasons).
Why? It seems to me like the only (consequentialist) justification is that they will then go on to torture others who have the ability to feel pain, and so it's still only the victims' properties which are relevant.
The more I perceive the torturer to be "like me", the more seeing this undermines my confidence in my own moral intuitions - my sense of a shared identity. The fly case is particularly puzzling, as I regard flies as not morally relevant.
I'd regard a kid pulling wings off a fly as worrying not because I particularly care about flies, but more because it indicates a propensity to do similar things to morally relevant agents. Not much chance of that becoming a problem for a cat.

If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as though I would be less afraid of the torture or care less about averting it!

People get anaesthesia before undergoing surgery and get drunk before risking social embarrassment all the time.

Animals are not walking around anaesthetized, and I don't think the primary reason why alcohol helps with pain is that it makes you dumber (I might be wrong about this).
Anaesthesia reduces pain, which is the primary reason people take it. Getting drunk reduces inhibitions (which is good if you're trying to do something despite embarrassment), plus you tend not to remember the events afterwards. EDIT: Just trying to clarify ice9's point here, to be clear.

However, such factors can't apply for ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

Why should there be a "correct" solution for ethical reasoning? Is there a normative level regarding which color is the best? People function based on heuristics, which are calibrated on general cases, not on marginal cases. While I'm all for showing inconsistencies in one's statements, there is no inconsistenc...

If you want your choices to be consistent over time, you still need a meta-rule for choosing and modifying your rules. How do you know what exceptions to make? Personally, I don't think my choices (as a human) can be consistent in this sense, and I'm pretty resigned to following my inconsistent moral intuitions. Others disagree with me on this.
Your choices won't be consistent over time anyway, because you won't be consistent over time. For your centenarian self, the current you is but a distant memory.
That my desires won't be consistent over very long periods of time is no reason to make my choices inconsistent over short periods of time when my desires don't change much.
Well, obviously this wouldn't hold for, say, paperclippers ... but while I suspect you may disagree, most people seem to think human ethics are not mutually contradictory and are, in fact, part of the psychological unity of humankind (most include caveats for psychopaths, political enemies, and those possessed by demons.) Such a (highly complex) rule is known as a "program".
As a bonus, the exception class of "enemies" and "immoral monsters" tends to be contrived to include anyone who has a sufficient degree of difference in ethical preferences. All True humans are ethically united...
I'm torn between grinning at how marvelously well-contrived it is on evolution's part and frustrated that, y'know, I have to live here, and I keep stepping in the mindkill. Of course, I'll note they're usually wrong. Except about some of the psychopaths, I suppose, though even they seem to contain bits of it if I understand correctly.
In context here, a "rule" is shorthand for a general rule, not for any sort of algorithm whatsoever. A rule that describes a specific case by name is not a general rule. Thought experiment: Go up to a random person and find out how they avoid the Repugnant Conclusion. Repeat with some other famous ethical paradoxes. Even if some of those have solutions, you can bet the average person 1) won't have thought about them, and 2) won't be able to come up with a solution that holds up to examination. Most people have not thought about enough marginal cases involving human ethics to be able to determine whether human ethics is mutually contradictory.
That was mostly a joke :) (My point, if you could call it such, was that morality need only be consistent, not simple - although most special cases turn out to be caused by bias, rather than actual special cases, so it was a rather weak point. And, apparently, a rather weak joke.) And yet, funnily enough, most people agree on most things, and the marginal cases are not unique for every person. Ethics, as far as I can tell, is a part of the psychological unity of mankind. That said, there is the much more worrying prospect that these common values could be internally incoherent, but we seem to have intuitions for resolving conflicts between lower-level intuitions and I think - hope - it all works out in the end. (Kawoomba has stated that he considers it ethical for a parent to destroy the earth rather than risk their family, though, so perhaps I'm being overly generous in this regard. pulls face)

I've read the first part of the post ("What is Speciesism?"), and have a question.

Does your argument have any answer to applying modus tollens to the argument from marginal cases?

In other words, if I say: "Actually, I think it's ok to kill/torture human newborns/infants; I don't consider them to be morally relevant[1]" (likewise severely mentally disabled adults, likewise (some? most?) other marginal cases) — do you still expect your argument to sway me in any way? Or no?

[1] Note that I can still be in favor of laws that prohibit infant...

The post you link to makes five points. 1) and 2) don't concern the arguments I'm making, because I left out empirical issues on purpose. 3) is also an empirical issue that can be applied to some humans as well. 4) is the most interesting one. I sort of addressed this here. I must say I'm not very familiar with this position, so I might be bad at steelmanning it, but so far I simply don't see why intelligence has anything to do with the badness of suffering. As for 5), this is certainly a valid thing to point out when people are estimating whether a given being is sentient or not. Regarding the normative part of this argument: if there were cute robots that I had empathy for but was sure weren't sentient, I genuinely wouldn't argue for giving them moral consideration.
Huh, a mainstream term for what LWers call a Schelling fence!
No, this is indeed a common feature of coherentist reasoning, you can make it go both ways. I cannot logically show that you are making a mistake here. I may however appeal to shared intuitions or bring further arguments that could encourage you to reflect on your views. And note that I was silent on the topic of killing, the point I made later in the article was only focused on caring about suffering. And there I think I can make a strong case that suffering is bad independently of where it happens.
Said Achmiz
I would very much like to see that case made!
It's in the article. If you're not impressed by it then I'm indeed out of arguments. There's also a hyperlink in the first paragraph referring to section 6 of the linked paper.
Said Achmiz
Ok. Yeah, I don't find any of those to be strong arguments. Again, I would like to urge you to consider and address the points brought up in this post.
I think the relevant response would be torturing human infants, and other marginal cases.
Said Achmiz
Yep, fair enough. I've changed my post to include this.
No, that would be when we fetch the pitchforks. The only time I heard such an argument, it wasn't their true rejection, and they invented several other such false rejections during the course of our discussion. So that would be my response on hearing someone actually make the unaddressed argument you outlined. Do such game-theoretic reasons actually hold together, by the way? It seems unlikely, unless you suddenly start caring about children somewhere during their first few years.
Said Achmiz
No, this is definitely my true rejection. To expand a bit, take the infanticide case as an example: I think infanticide should be illegal, but I don't think it should be considered murder or anything close to it, nor punished nearly as severely.

Basically, there's no "real" line between sapience and non-sapience, and humans, in the course of their development, start out as cognitively inert matter and end up as sapient beings. But since we don't think evaluating every single case is feasible, or reliable in the "border region" cases, or likely to lead to consistently (morally) good outcomes in practice (due to assorted cognitive and institutional limitations), we want to draw the line way back in the development process, where we're sure there's no sapience and killing the developing human is morally ok. Where specifically? Well, since this is a pragmatic and not a moral consideration, there is no unique morally ordained line placement, but there is a natural "bright line": birth. Birth is more or less in the desired region of time, so that's where we draw it.

Now, since we drew the line for pragmatic reasons, we are perfectly aware that the person who commits infanticide has not really done anything morally wrong. But on the other hand, we want to discourage people from redrawing the line on an individual basis, from "taking line placement into their own hands", so to speak, because then we're back to the "evaluating every case is not a good idea" issue. But on the third hand, such discouragement should not take the form of putting the poor person in jail for murder! The problem is not that important; the well-being and happiness of an adult human for a large chunk of their life is worth more than the (nonzero, but small) chance that line degradation will lead to bad outcomes! Make it a lesser offense, and you've more or less got the best of both worlds. (Equivalent to assault, perhaps? I don't know, this is a practical question, and best settled with the h

Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others?

Typically human xenophobia doesn't single out one attribute. The similar are treated preferentially, the different are exiled, shunned, excluded or slaughtered. Nature builds organisms like that: to favour kin and creatures similar, and to give out-group members a very wide berth. So: it's no surprise to find that humans are often racist and speciesist.

“Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others?”

I am not sure if this is an accurate answer, but I feel like bringing this up: in some cases we should single out some properties over others based on their function with regard to our interests. Obvious example: separating men and women in combat sports.

Another important detail is that in an ideal world we could evaluate everything on a case-by-case basis instead of generalizing. So in general it wouldn’t be fair to let men and women ...

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.

In the past, the arguments against sexism and racism were things like "they're human too", "they can write poetry too", "God made all men equal" and "look how good they are at being governesses". None of these apply t...

Where was the argument for that? Non-humans attaining rights by a different path does not erase all other paths.

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real-life.

This objection doesn't work if you rigidify over the beings you feel sympathy toward in the actual world, given your present mental capacities. And ...

I think this is runaway philosophizing where our desire to believe something coherent trumps what types of beliefs we have been selected for, and the types of beliefs that will continue to keep us alive.

Why should there be a normative ethics at all? What part of rationality requires normative ethics?

I, like you and everyone else, have a monkey-sphere. I only care about the monkeys in my tribe that are closest to me, and I might as well admit it because it's there. So, nevermind cows and pigs, if push came to shove I'll protect my friends and family in pre...

Why should I believe what humans have been selected for? Why would I want to keep "us" alive? I think those two questions are at least as question-begging as the reasons for my view, if not more so. What I know for sure is that I dislike my own suffering, not because I'm sapient and have it happening to me, but because it is suffering. And I want to do something in life that is about more than just me. Ultimately, this might not be a "more true" reason than "what I have been selected for", but it does appeal to me more than anything else.

All rationality requires is a goal. You may not share the same goals I have. I have noticed, however, that some people haven't thought through all the implications of their stated goals. Especially on LW, people are very quick to declare something to be of terminal value to them, which unfortunately serves as a self-fulfilling prophecy. I discovered that intuitions are easy to change. People definitely have stronger emotional reactions to things happening to those that are close, but do they really, on an abstract level, care less about those that are distant? Do they want to care less about those that are distant, or would they take a pill that turned them into universal altruists?

And how do you do that? If a situation arises where you can benefit your self-interest by defecting, the rational thing to do is to defect. Don't tell yourself that you're being a decent person only because of pure self-interest; you'd be deceiving yourself. Yes, if everyone followed some moral code written for societal interaction among moral agents, then everyone would be doing well (but not perfectly well). However, given that you cannot expect others to follow through, your decision not to "break the rules" is an altruistic decision for (at least) all the cases where you are unlikely enough to get caught. You may also ask yourself whether you would press a button that inflicts suffering on a child (or a cow) far away, gives you ten dollars, and makes you
I don't know, and I feel it's important that I admit that. My code of conduct is incomplete. It's better that it be clearly incomplete than have the illusion of completeness created by me deciding what a hypothetical me in a hypothetical situation ought to want. It does seem to me the payoff for pushing the button should be equal to how much it would take to bribe you not to make all your purchasing decisions contingent on a thorough investigation of the human/animal rights practices of every company you buy from and all their upstream suppliers. Those who don't currently do this (me included) are apparently already being compensated sufficiently, however much that is.
I appreciate the honest reply! Perhaps you are setting the demands too high. I think the button scenario is relevantly different in the amount of sacrifice/inconvenience it requires. Making all-things-considered ethical purchases is a lot more difficult than resisting the temptation of ten dollars (although the difference does become smaller the more often you press it in a given timescale). Maybe this is something you view as "cheating" or a rationalization of cognitive dissonance, as you explain in the other comment, but I genuinely think that a highly altruistic life may still involve making lots of imperfect choices. The amount of money one donates, for instance, and where to, is probably more important in terms of suffering prevented than the effects of personal consumption.

Being an altruist makes you your own most important resource. Preventing loss of motivation or burnout is then a legitimate concern that warrants keeping a suitable amount of self-interested comfort. And it is also worth noting that people differ individually in how easily altruism comes to them. Some may simply enjoy doing it or may enjoy the signalling aspects, while others might have trouble motivating themselves or even be uncomfortable talking to others about ethics. One's social circle is also a huge influence. These are all things to take into account; it would be unreasonable to compare yourself to a utility-maximizing robot.

Obviously this needn't be an all-or-nothing kind of thing. Pushing the button just once a week is already much better than never pushing it.
That's a testable assertion. How confident are you that you would follow the path of self-consistency if, upon being tested, the assertion turned out to be false? Someone who chooses pragmatism only needs to fight their own ignorance to be self-consistent, while someone who does not has to fight both their own ignorance and, all too often, their own pragmatism in order to be self-consistent.
Yes, it's testable, and the estimates so far strongly support my claim. (I'm constantly on the lookout for data of this kind to improve my effectiveness.) I wouldn't have trouble adjusting, because I'm already trying to reduce my unethical consumption through habit formation (which basically comes down to being vegan and avoiding expensive stuff). Even if it's not very effective compared to other things, as long as I don't have opportunity costs, it is still something positive. I'm just saying that even people who won't, for whatever reasons, make changes to the kind of stuff they buy could still reduce a lot of suffering by donating to the most effective cause.
I wonder if pragmatists are less likely to reject information they don't want to hear since their self interest is their terminal goal, so for example entertaining the possibility that Malthus can be right in some instances does not imply that they must unilaterally sacrifice themselves. Perhaps the reason so many transhumanists are peak oil deniers and global warming deniers is that both of these are Malthusian scenarios that would put the immediate needs of those less fortunate in direct and obvious opposition to the costly, delayed-payoff projects we advocate.
Experience and observation of others has taught me that when one tries to derive a normative code of behavior from the top-down, they often end up with something that is in subtle ways incompatible with selfish drives. They will therefore be tempted to cheat on their high-minded morals, and react to this cognitive dissonance either by coming up with reasons why it's not really cheating or working ever harder to suppress their temptations. I've been down the egalitarian altruist route, it came crashing down (several times) until I finally learned to admit that I'm a bastard. Now instead of agonizing whether my right to FOO outweighs Bob's right to BAR, I have the simpler problem of optimizing my long-term FOO and trusting Bob to optimize his own BAR. I still cheat, but I don't waste time on moral posturing. I try to treat it as a sign that perhaps I still don't fully understand my own utility function. Imagine how far off the mark I'd be if I was simultaneously trying to optimize Bob's!
Nonhuman animals are integrated with human "monkey-spheres" - e.g. people live with their pets, bond with them and give them names. A second mistake is that you decry normative ethics, only to implicitly establish a norm in the next paragraph as if it were a fact: Obviously, there are people whose preferences include the welfare of cows and pigs, hence this discussion and the well-funded existence of PETA etc. By positing a monkey-sphere that "everyone" has and that doesn't include nonhuman animals, you are effectively telling us what we should care about, not what we actually care about. Even if you don't care about animal welfare, the fact that others do has an influence on your "monkey-sphere", even if it's weak. Btw, aren't humans apes rather than monkeys?
The term "monkeysphere", which is a nickname for Dunbar's Number, originates from this article. The term relates not only to the studies done on monkeys (and apes), but also the idea of there existing a limit on the number of named, cutely dressed monkeys about which a hypothetical person could really care.
Yes, precisely. Thanks for finding the link. Although I think of mine as a density function rather than a fixed number. Everyone has a little bit of my monkey-sphere associated with them. hug
Oh yeah, absolutely. I trust my friend's judgment about how much members of her monkeysphere are worth to her, and utility to my friend is weighed against utility to others in my monkeysphere in proportion to how close they are to me. My monkeysphere has long tails, extending by default to all members of my species whose interests are not at odds with my own or those closer to me in the monkeysphere. Since I would be willing to use force against a human to defend myself or others at the core of my monkeysphere, it seems that I should be even more willing to use force against such a human and save the lives of several cattle in the process. Cults are well-funded too. I don't dispute that people care about both them and animal rights. What I dispute is whether supporting either of them offers enough benefits to the supporter that I would consider it a rational choice to make.
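The density-function picture of a monkeysphere can be sketched as follows (my own toy construction with made-up parameters, not anything from the thread): an exponential core plus a small constant floor, which gives the "long tails" extending weakly to everyone.

```python
import math

# Toy "monkeysphere" as a density over social distance rather than a
# hard cutoff at Dunbar's number. The parameters are invented:
# scale sets how fast caring decays from the core; floor is the
# residual concern for distant strangers (the "long tails").
def care_weight(social_distance, scale=5.0, floor=0.001):
    """Weight given to someone at a given social distance."""
    return floor + (1.0 - floor) * math.exp(-social_distance / scale)

family = care_weight(0)     # core of the sphere
friend = care_weight(3)     # close, but discounted
stranger = care_weight(100) # near the floor, but never exactly zero
```

The design choice worth noting: because of the floor term, no member of the species ever drops to literally zero weight, matching the "long tails" description, while the exponential core still concentrates almost all the caring on those closest.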

For selfish reasons, if I had a say in policy I would want to influence the world greatly against this. Whether true or not, I could easily get a disease in the future or go senile (actually quite likely) to such an extent that my moral worth in this system is reduced greatly. Since I still want to be looked after when that happens, I would never support this.

This doesn't refute any of the arguments, but for those who have some percentage chance of losing a lot of brain capacity in the future without outright dying (i.e. probably most of us), it may be a reason to argue against this idea anyway.

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it.

If there were no intrinsic reasons for a feather to fall slower than a rock, then in a vacuum a feather would fall just as fast as a rock, as long as there's no air. But you don't neglect air resistance when designing a parachute.

Here's an argument for something that might be called speciesism, though it isn't strictly speciesism because moral consideration could be extended to hypothetical non-human beings (though no currently known ones) and not quite to all humans: contractarianism. We have reason to restrict ourselves in our dealings with a being when it fulfills three criteria: it can harm us, it can choose not to harm us, and it can agree not to harm us in exchange for us not harming it. When these criteria are fulfilled, a being has rights and should not be harmed; otherwise, we have no reason to restrict ourselves in our dealings with it.

Indeed, consistently applied, this view would deny rights to both non-human animals and some human individuals, so it wouldn't be speciesist. There is however another problem with contractarianism: I think the way it is usually presented is blatantly not thought through, a non sequitur. What do you mean by "we have reason"? If you mean that it would be in our rational self-interest to grant rights to all such beings, then that does not follow. Just because a being could reciprocate doesn't mean it will, so granting rights to all such beings might well, in some empirical circumstances, go against your rational self-interest. So there seems to be a (crucial!) step missing here. And if all one is arguing for is "do whatever is in your rational self-interest", why give it a misleading name like contractarianism?

There is always the option to say "I don't care about others". Apart from the ingenious argument about personal identity which implies that your own future selves should also count among "others", there is not much one can say to such a person. Such a person would refuse to act along with the outcome specified by the axiom of impartiality/altruism in the ethics game. You may play the ethics game intellectually and come to the conclusion that systematized altruism implies some variety of utilitarianism (and then define more terms and hash out details), but you can still choose to implement another utility function in your own actions. The two dimensions are separate, I think.
True, but it would be in their rational self-interest to retaliate if their rights aren't being respected, to create a credible threat so that their rights would be respected. It's not a misleading name; it means that morality is based on contracts. It's more specific than "do whatever is in your rational self-interest", as it suggests something that someone who is following their self-interest should do. Also, not everyone who advocates following one's rational self-interest is a contractarian.
You'd need something like timeless decision theory here, and I feel like it is somehow cheating to bring in TDT/UDT when it comes to moral reasoning at the normative level... But I see what you mean. I am however not sure whether the view you defend here would on its own terms imply that humans have "rights".

There are two plausible cases I can see here: 1) The suggestion collides with "do whatever is in your rational self-interest", in which case it was misleading. 2) The suggestion deductively follows from "do whatever is in your rational self-interest", in which case it is uninteresting (and misleading because it dresses up as some fancy claim). You seem to mean: 3) The suggestion adds something of interest to "do whatever is in your rational self-interest"; here I don't see where this further claim would/could come from.

What do you mean by "morality"? Unless you rigorously define such controversial and differently used terms at every step, you're likely to get caught up in equivocations. Here are two plausible interpretations of "morality" in the partial sentence I quoted: 1) people's desire to (sometimes) care about the interests of others / them following that desire; 2) people's (system two) reasoning for why they end up doing nice/fair things to others. Both these claims are descriptive. It would be like justifying deontology by citing the findings from trolleyology, which would beg the question as to whether humans have "moral biases", e.g. whether they are rationalising over inconsistencies in their positions, or defending positions they would not defend given more information and rationality. In addition, even if the above sometimes applies, it would of course be overgeneralising to classify all of "morality" according to the above.

So likely you meant something else. There is a third plausible interpretation of your claim, namely something resembling what you wrote earlier: Perhaps you are claiming that people are s
This makes specific part of what "acting in your rational self-interest" means. To use an admittedly imperfect analogy, the connection between egoism and contractarianism is a bit like the connection between utilitarianism and giving to charity (conditional on it being effective). The former implies the latter, but it takes some thinking to determine what it actually entails. Also, not all egoists are contractarians, and it's adding the claim that if you've decided to follow your rational self-interest, this is how you should act.

What do I mean by "morality"? What one should do. I realize that this may be an imprecise definition, but it gets at what utilitarians, Kantians, Divine Command Theorists, and ethical egoists have in common with each other that they don't have in common with moral non-realists, such as nihilists. Of course, all the ethical theories disagree about the content of morality, but they agree that there is such a thing - it's sort of like agreeing that the moon exists, even if they don't agree what it's made of. Morality is not synonymous with "caring about the interests of others", nor does it even necessarily imply that (in the ethical-theory-neutral view I'm taking in this paragraph). Morality is what you should do, even if you think you should do something else.

As for your second-to-last paragraph (the one not in parentheses): being an ethical egoist, I do think that people are irrational if they don't act in their self-interest. I agree that we can't have irrational goals, but we aren't free to set whatever goals we want - due to the nature of subjective experience and self-interest, rational self-interest is the only rational goal. What rational self-interest entails varies from person to person, but it's still the only rational goal. I can go into it more, but I think it's outside the scope of this thread.

If some means could be found to estimate phi for various species, a variable claimed by this paper to be a measure of "intensity of sentience", it would allow the relative value of the lives of different animals to be estimated and would help solve many moral dilemmas. Intensity of suffering as a result of a particular action would be expected to be proportionate to the intensity of sentience; however, whilst mammals and birds (the groups which possess a neocortex, the part of the brain where consciousness is believed to occur) can be assumed to experien...


Many arguments here seem to take the mindkilling form of "If we had to derive our entire system of moral value based on explicitly stated arguments, and follow those arguments ad absurdum, bad thing results."

"Since bad thing is bad, and you say it is in some situation justified, clearly you are wrong" - with the (reasonably explicit) accusation that if you use this line of reasoning you are (sexist! racist! in favor of killing babies! in favor of genocide! or worse, not being properly rational!)

That's common practice in ethics. You need something to work with, otherwise ethical reasoning couldn't get off the ground. But it doesn't necessarily imply that people are not being properly rational ("irrational" would have to be defined relative to a goal, and ethics is about goals).
One: do you believe that those five links also take a similarly mindkilling form, and that mindkilling is justified because it is standard practice in ethics? If so, does the fact that it is standard practice justify it, and if so, what determines what is and isn't justified by an appeal to standard practice?

Refuting counter-argument X by saying that if X were your full set of ethical principles you would reach repugnant conclusion Y is, at its strongest, an argument that X is not a complete and fully satisfactory set of ethical principles. I fail to see how it can be a strong argument that X is invalid as a subset of ethical principles, which is how it appears to have been used above.

In addition, when we use an argument of the form "X leads to some conclusion Y, where Y can be considered a subset of Z, and all Z are bad", we imply that any principle set P which under some circumstance leads to an action in some such set Z is wrong. I would claim that if you include all your examples of such Z, it is fairly easy to construct situations in which the sets Z contain all possible actions, and thus rule out every ethical system P, which would imply that no internally consistent ethical system can exist. If you well-define all your terms, I would be happy to attempt to construct such a scenario.
I don't think this form of argument is mindkilling. "Bad thing" needs to refer to something the person whose position you're criticizing considers unacceptable too. You'd be working with their own intuitions and assumptions. So I'm not advocating begging the question by postulating that some things are bad tout court (that would be mindkilling indeed).

The first link is just a description of the most common ethical methodology. The other papers I'm linking to are excellent, with the exception of the third one, which I consider rather weak; but all of them use the procedure I quoted from you.

This doesn't necessarily follow, but if I discover that the set of principles I endorse leads to conclusions I definitely do not endorse, then I have reason to fundamentally question some of the original principles. I could also go for modifications that leave the overall construct intact, but that usually comes with problems as well.

I'm not sure whether I understand your last paragraph. It seems like you're talking about impossibility theorems. This has indeed been done, for instance for population ethics (the second paper I linked to above). There are two ways to react to this: 1) giving up, or 2) reconsidering which conclusions go under Z. Personally I think the second option makes more sense.

The claim is that there is no way to block this conclusion without:

  1. using reasoning that could analogically be used to justify racism or sexism or
  2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies in cases where utilitarian calculations prohibit it.

But, on the other side, there's no way to reinforce the argument to prevent it from going to the other extreme: what negates the interpretation of an amoeba retracting from a probe as "pain"? It is just the anatomical qual...

I edited the very end of my post to account for this. I think the question of whether a given organism is sentient is an empirical one, i.e. one that we can unambiguously figure out with enough knowledge and computing power. Some people do disagree with that, and in this case things would become more complicated.

Hmm, maybe I didn't read the argument carefully enough, but it seems that the argument from marginal cases proves too much: non-US citizens should be allowed to serve in the army, some people without medical licenses should be allowed to practice as surgeons, and many more things.

This would be mixing up the normative level with the empirical level. The argument from marginal cases seeks to establish that we have reasons against treating beings of different species differently, all else being equal. Under consequentialism, the best path of action (including motives, laws, societal norms to promote and so on) would already be specified. It would be misleading to apply the same basic moral reasoning again on the empirical level, where we have institutions like the US army or the establishment of surgeons. Institutions like the US army are (for most people anyway, and outside of political philosophy) not terminal values. Whether it increases overall utility if we enforce "non-discrimination" radically in all domains is an empirical question determined by the higher-order goal of achieving as much utility as possible. And whenever this is not the case (which it may well be, since there is no reason to assume that the empirical level perfectly mirrors the normative one), then "all else" is not equal.

Because it might not be overall beneficial for society / in terms of your terminal values, it could be a bad idea to allow an otherwise well-qualified person without a medical license to practice as a surgeon. There might be negative side-effects of such a practice.

A practical example of this would be animal testing. If enough people were consequentialists and unbiased, we could experiment on humans and thereby accelerate scientific progress. However, if you try to do this in the real world, there is the danger that it will go wrong because people lose track of altruistic goals and replace them with other things (although this argument applies almost as much to animal testing), and there is a big likelihood of starting a civil war or worse if someone actually started experimenting on humans (this one doesn't). So even though experimenting on animals is intrinsically on par with experimenting on humans with similar cognitive capacities, on
Thank you for the response, I think I get the argument now. I don't have a good answer for why we allow animal testing but not human testing. If one is fine with animal experimentation then there doesn't seem to be any way to object to engineering human babies that would have human physiology but animal level cognition and conduct tests on them. While the idea does make me uncomfortable I think I would bite that bullet.
The problem is that it makes the Schelling points more awkward.
The argument from marginal cases may well prove too much, but this strikes me as a failed counter-example. Using non-citizens as part of a military force is a reasonably standard practice. Depending on the circumstances it can be the smart thing to do. (Conscripting citizens as cannon fodder tends to promote civil unrest.)
Sure, I shouldn't have used the US military as an example - I retract it. Trying again: the argument from marginal cases proves that some 12-year-olds should be allowed to vote.
This slippery slope really isn't sounding all that bad...
... what makes you think that's wrong? I remember being twelve, seems to me basing that sort of thing on numerical age is fairly daft, albeit relatively simple.
Indeed, I wouldn't object to this directly. One could however argue that it is bad for indirect reasons. It would acquire huge administrative efforts to test teens for their competence at voting, and the money and resources might be better spent on education or the US army (jk). In order to save administrative costs, using a Schelling point at the age of, say, 18, makes perfect sense, even though there certainly is no magical change taking place in people's brains the night of their 18th birthday.
(You meant require, not acquire.) It would also require huge administrative effort to test 18-year-olds for competence. So we simply don't, and let them vote anyway. It's not clear to me that letting all 12-year-olds vote is so much terribly worse. They mostly differ from adults on age-relevant issues: they would probably vote to give school children more rights. It may or may not be somewhat worse than the status quo, but (for comparison) we don't take away the vote from all convicted criminals, or all demented people, or all people with IQ below 60... Not giving teenagers civil rights is just a historical fact, like sexism and racism. It doesn't have a moral rationale, only rationalizations.
A randomly chosen 18-year-old is more likely than a randomly chosen 12-year-old to be ready to vote -- though I agree that age isn't necessarily the best cheap proxy for that. (What about possession of a high-school diploma?) Many would argue we should.
That's the same problem under a different name. What does "ready to vote" mean? A diploma requirement excludes some people of all ages, but it still also excludes all people younger than 16-17 or so. You get a high school diploma more for X years of attendance than for any particular exam scores. There's no way for HJPEV to get one until he's old enough to have spent enough time in a high school.

We should be clear on what we're trying to optimize. If it's "voting for the right people", then it would be best to restrict voting rights to a very few people who know who would be right - myself and enough friends whom I trust to introduce the necessary diversity and make sure we don't overlook anything. If on the other hand it's the moral ideal of letting everyone who is ruled by a government give their consent to that government - then we should give the vote to anyone capable of informed consent, which surely includes people much younger than 18.
Yes, that would probably have better results, but mine is a better Schelling point, and hence more likely to be achieved in practice, short of a coup d'état. :-)
I think it works out better if you ignore your own political affiliations, which makes sense because mindkilling.
Even ignoring affiliations, if I really believe I can make better voting choices than the average vote of minority X, then optimizing purely for voting outcomes means not giving the vote to minority X. And there are in fact minorities where almost all of the majority believes this, such as, indeed, children. (I do not believe this with respect to children, but I believe that most other adults do.)
Ah, but everyone thinks they know better ... or something ... I dunno :p
That's just like saying "never act on your beliefs because you might be wrong".
To be fair, that's truer in politics than, say, physics.
Well, you want larger margins of error when setting up a near-singleton than while using it, because if you set it up correctly then it'll hopefully catch your errors when attempting to use it. Case in point: FAI. EDIT: If someone is downvoting this whole discussion, could they comment with the issue? Because I really have no idea why so I can't adjust my behaviour.
12 year olds are also highly influenced by their parents. It's easy for a parent to threaten a kid to make him vote one way, or bribe him, or just force him to stay in the house on election day if he ever lets his political views slip out. (In theory, a kid could lie in the first two scenarios, since voting is done in secret, but I would bet that a statistically significant portion of kids will be unable to lie well enough to pull it off.) Also, 12 year olds are less mature than 18 year olds. It may be that the level of immaturity in voters you'll get from adding people ages 12-17 is just too large to be acceptable. (Exercise for the reader: why is 'well, some 18 year olds are immature anyway' not a good response?) And taking away the vote from demented people and people with low IQ has the problem that the tests may not be perfect. Imagine a test that is slightly biased and unfairly tests black people at 5 points lower IQ. So white people get to vote down to IQ 60 but black people get to vote down to IQ 65. Even though each individual black person of IQ 65 is still pretty stupid, allowing a greater proportion of stupid people from one race than another to vote is bad.
"Maturity" isn't obviously a desirable thing. What people tend to describe as 'maturity' seems to be a developed ability to signal conformity, and is if anything a negative causal influence on the application of reasoned judgement. People learn that it is 'mature' not to ask (or even think to ask) questions about why the cherished beliefs are obviously self-contradicting nonsense, for example. I do not expect a country that allows 12-17 year olds to vote to have worse outcomes than a country that does not. Particularly given that it would almost certainly result in more voting-relevant education being given to children, and so slightly less ignorance even among adults.
I might be a little more generous than that. The term casts a pretty broad net, but it also includes some factors I'd consider instrumentally advantageous, like self-control and emotional resilience. I'm not sure how relevant those are in this context, though.
I certainly recommend maturity. I also note that the aforementioned signalling skill is also significantly instrumentally advantageous. I just don't expect the immaturity of younger voters to result in significantly worse voting outcomes.
"Maturity" is pretty much a stand-in for "desirable characteristics that adults usually have and children usually don't," so it's almost by definition an argument in favor of adults. But to be fair, characteristics like the willingness to sit through/read boring informational pieces in order to be a more educated voter, the ability to accurately detect deception and false promises, and the ability to use past evidence to determine what is likely to actually happen (as opposed to what people say will happen) are useful traits and are much more common in 18-year-olds than 12-year-olds.
Interesting argument, I had never thought of that. I'm still sceptical about what the quality of such voting-relevant education would be, on timescales much longer than politicians usually think about.
In my experience "voting-relevant education" tends to mean indoctrination, so no.
Or sometimes "economics" and "critical thinking".
That's a trick statement, because the biggest reason that a country that allows 12-17 year olds to vote won't have worse outcomes is that the number of such people voting isn't enough to have much of an influence on the outcome at all. I don't expect a country that adds a few hundred votes chosen by throwing darts at ballots to have worse outcomes, either. The proper question is whether you expect a country that allows them to vote to have worse outcomes to the extent that letting them vote affects the outcome at all.
In the US there are about 25m 12-17-year-olds. In the last (2012) presidential election the popular vote gap between the two candidates was 5m people.
There is no trick. For it to be a trick of the kind you suggest would require that the meaning people take from it be different from the meaning I intend to convey. I do not limit the claim to "statistically insignificant worse outcomes because the 25 million people added are somehow negligible". I mean it like it sounds. I have no particular expectation that the marginal change to the system will be in the negative direction.
And 75-year-olds are highly influenced by their children. (And 22-year-olds are highly influenced by their friends, for that matter.) (I'm not saying we should allow 12-year-olds to vote, but just that I don't find that particular argument convincing.)
I don't find arguments against letting children vote very convincing either, except the argument that 18 is a defensible Schelling point and it would become way too vulnerable to abuse if we changed it to a more complicated criterion like "anyone who can give informed consent, as measured by X." After all, if we accept the argument that 12-17 year olds should vote (and I'm not saying it's a bad argument), then the simplest and most effective way to enforce that is to draw another arbitrary line based on age, at some lower age. Anything more complex would again be politicized and gamed. But I think you're misrepresenting the "influenced by parents" argument. 22-year-olds are influenced by their friends, yes, but they influence their friends to roughly the same degree. Their friends do not have total power over their life, from basic survival to sources of information. A physical/emotional threat from a friend is a lot less credible than a threat from your parents, especially considering most people have more than one circle of friends. The same goes for the 75-year-old - they may be frail and physically dependent on their children, but society doesn't condone a live-in grandparent being bossed around and controlled the way a live-in child is, so that is not as big a concern.
Indeed, we outsource the job to nursing homes instead.
You know, I can think of a worse test than that ... eh, I'm not even going to bother working out a complex "age test" metaphor, I'm just gonna say it: age is a worse criterion than that test.
You might be able to argue that since people of different races don't live to the exact same age, an age test is still biased, but I'd like to see some calculations to show just how bad it is. Also, even though an age test may be racially biased, there aren't really better and worse age tests--it's easy to get (either by negligence or by malice) an IQ test which is biased by multiple times the amount of a similar but better IQ test, but pretty much impossible to get that for age. There's also the historical record to consider. It's particularly bad for IQ tests.
No, sorry, I mean it's worse overall, not worse because racist.
It's not hard to come up with a scenario where having all voters be incompetents who choose the candidate at random is better for the population at large than just holding a racially biased election. For instance, consider 100 people, 90 white and 10 black; candidate A is best for 46 whites and 0 blacks, while candidate B is best for 44 whites and 10 blacks. For the population as a whole, B is the best and A is the worst. If the blacks are excluded from the franchise and the whites vote their own interests, the worst candidate (A) is always elected, while if everyone is incompetent and votes at random, there's only a 50% chance of the worst candidate being elected.
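The arithmetic in this scenario can be checked with a quick simulation. The sketch below is purely illustrative (the function names and the use of a coin-flip tiebreaker are my own assumptions, not part of the comment); it uses the numbers given above: 90 white voters (46 preferring A, 44 preferring B) and 10 black voters (all preferring B).

```python
import random

def restricted_election():
    """Only the 90 white voters vote, each for the candidate best for them.

    46 whites prefer A, 44 prefer B; the 10 blacks (all preferring B)
    are excluded from the franchise.
    """
    votes_a, votes_b = 46, 44
    return "A" if votes_a > votes_b else "B"

def random_election(rng):
    """All 100 voters vote uniformly at random; a tie is broken by coin flip."""
    votes_a = sum(rng.random() < 0.5 for _ in range(100))
    if votes_a == 50:
        return rng.choice("AB")
    return "A" if votes_a > 50 else "B"

rng = random.Random(0)

# Excluding the minority: the worst candidate for the whole population
# wins every single time.
assert restricted_election() == "A"

# Everyone voting at random: the worst candidate wins only about half
# the time, by symmetry.
trials = 100_000
wins_a = sum(random_election(rng) == "A" for _ in range(trials))
print(f"P(worst candidate wins under random voting) ~ {wins_a / trials:.2f}")
```

The point the simulation makes concrete is that the biased election is deterministic (A always wins), while random voting leaves a roughly even chance, so "incompetent" can beat "biased" in expected welfare.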
You realize there's more to politics than race, right? That said, you would definitely have to be careful to ensure the test was as good as possible.
Although there's more to politics than race, race is an important part of it, and we're obligated to treat other people fairly with respect to race. The argument that it doesn't matter how racially biased a test is because it's good in other ways isn't something I am inclined to accept.
I assume this is hyperbole, since obviously a truly perfect test could draw from any subset of the population, as long as it was large enough to contain near-perfect individuals. With that said, I agree, we should attempt to avoid any bias in such a test, including that of race (I would not, however, single this possibility out). That is what I meant by saying you would have to be careful to ensure the test was as good as possible. However, beyond a certain level of conscientiousness, demanding perfectly unbiased tests becomes counterproductive, especially when one focuses on one possible bias to the exclusion of others. In truth, even age is a racially biased criterion.
Do you define racial bias by how the test works or by which outcomes it produces?
In context, MugaSofer had claimed that if a test that allows young people to vote based on IQ tests black people of equal intelligence as 5 points lower IQ, that's okay because an age test is worse than that. I was, therefore, referring to that kind of bias. I'm not sure whether you would call "gives a number 5 points lower for black people of equal intelligence" 'how the test works' or 'which outcomes it produces'.
In this context, MugaSofer's test is clearly "how it works" because the test explicitly looks at the color of skin and subtracts 5 from the score if the skin is dark enough. On the other hand, "which outcomes it produces" is the more or less standard racial bias test applied by government agencies to all kinds of businesses and organizations.
I didn't describe a test which looks at the color of skin and subtracts 5; I described a test which produces results 5 points lower for people with a certain color of skin. Whether it does that by looking at the color of skin explicitly, or by being an imperfect measure of intelligence where the imperfection is correlated to skin color, I didn't specify, and I was in fact thinking of the latter case.
These are two rather different things. I am not sure how the latter case works -- if the test is blinded to the skin color but you believe it discriminates against blacks, (1) How do you know the "true" IQ which the test understates; and (2) what is it, then, that the test picks up as a proxy or correlate to the skin color? Standard IQ tests show dependency on race -- generally the mean IQ of blacks is about one standard deviation below the mean IQ of whites.
In my experience, if someone is claiming that a test is racially biased, they are claiming that properly understanding the question requires cultural context which is more or less common in one race than another. An example I found here is a multiple-choice question which asks the student to select the pair of words with a relationship similar to the relationship between a runner and a marathon. The correct answer there was "oarsman" and "regatta". Clearly, there was a cultural context required to correctly answer this question; examining the correlations between socioeconomic status and race, I would expect to find that the cultural context is more common among rich caucasians.
In my experience if someone is claiming that a test is racially biased, they just don't like the test results. Not always, of course, but often enough. Then the fact that East Asian people show mean IQ noticeably higher than that of caucasians would be a bit inconvenient, wouldn't it?
I'd like to quote you: you said that people claiming a test is racially biased often "just don't like the test results", and that this happens "often enough". What exactly do you mean by "often enough"? Do you mean to say that there is such a large number of false positives in claims of racial bias that none of them should be investigated? I am confused by your dismissal of this phenomenon.

Regarding the fact that East Asians tend to score higher than Caucasians on IQ tests (I am familiar with this difference in the US; I do not know if it applies to comparisons between East Asian and majority-Caucasian countries), I would attribute it to culture and self-selection. In the case of the United States, it is my understanding that immigration from Europe dominated immigration to the US during the Industrial Revolution - when the US was looking for, and presumably attracting, manual laborers - while recently, immigrants from Asia have made up a far larger share of the total immigrants to the US. I would guess that relative to European-Americans*, Asian-Americans' immigrant ancestors are more likely to have self-selected for the ability to compete in an intelligence-based trade. This selection bias, propagating through to descendants (intelligent people tend to have intelligent children), would seem to at least partially explain why Asian-Americans score higher. I do not have any information on Caucasians in their ancestral homelands vs. East Asians in their ancestral homelands.

*Based on recollection of stories told to me and verified only by a quick check online, so if others could chime in with supporting/opposing evidence, that would be appreciated.
I mean that a large number of different studies over several decades, using different methodologies in various countries, came up with the same results: the average IQ of people belonging to different gene pools (some of which match the usual idea of race and some do not) is not the same. That finding happens to be ideologically or morally unacceptable to a large number of people. Normally they just ignore it, but when they have to confront it the typical reaction -- one that happens "often enough" -- is denial: the test is racially biased and so invalid. Example: you.

I do not believe I have said anything even remotely resembling this.

Yes, it does apply. Before you commit to defending a position, it's useful to do a quick check to see whether it's defensible. You think no one ran any IQ studies in China?
Thank you for clarifying your points. I mistakenly interpreted "often enough" as indicating some threshold of frequency of false positives beyond which it would not be appropriate to take the problem seriously. I apologize for arguing a straw man. I was considering mostly the difference among people of different races in the United States, as I assumed that would minimize the effects of cultural difference (though not eliminate it) on the intelligence of the participants and their test results. I would anticipate that cultural influences do affect a person's intelligence - the hypothetical quality which we imperfectly measure, not the impact that quality leaves on a test - as it can motivate certain avenues of self-improvement through its values, or simply allow access to different resources. I am not surprised that there are IQ differences among racial groups. In fact, I would be shocked to learn that every culture and every natural environment and every historical happening in the entirety of human civilization happened to produce the exact same level of average intelligence. I would be surprised, but not shocked, to learn that there existed a strong, direct causation between race (as a genetic difference rather than a social phenomenon) and intelligence. I did not mean to imply that because a test outputs different results for different racial groups, that it must be biased. I merely meant to say that bias can exist, though I am not certain whether or not it does, or to what degree. All in all, I seem to have made rather a fool of myself, jumping at shadows, and for that I am sorry.
I've never seen any question resembling this on any IQ test I've ever taken. Have you? (Note that your link refers to the SAT I, which is not an IQ test.) Is anyone claiming that the WAIS, for instance, is culturally biased in a similar way?
What's your counter-argument?
It's not an argument, it's a premise. Feel free to propose that in fact it doesn't matter how racially biased a test is because it's good in other ways. I don't know how many people will agree with you, though.
You said you weren't willing to accept the argument. Do you have any better reason than "I don't feel like it"?