There have been some posts about animals lately, for instance here and here. While normative assumptions about the treatment of nonhumans played an important role in the articles and were debated at length in the comment sections, I was missing a concise summary of these arguments. This post from over a year ago comes closest to what I have in mind, but I want to focus on some of the issues in more detail.

A while back, I read the following comment in a LessWrong discussion on uploads:

I do not at all understand this PETA-like obsession with ethical treatment of bits.

Aside from (carbon-based) humans, which other beings deserve moral consideration? Nonhuman animals? Intelligent aliens? Uploads? Nothing else?

This article is intended to shed light on these questions; it is, however, not the intent of this post to advocate a specific ethical framework. Instead, I'll try to show that some ethical principles held by a lot of people are inconsistent with some of their other attitudes -- an argument that doesn't rely on ethics being universal or objective.

More precisely, I will develop the arguments behind anti-speciesism (and the rejection of analogous forms of discrimination, such as discrimination against uploads) to point out common inconsistencies in some people's values. This will also provide an illustrative example of how coherentist ethical reasoning can be applied to shared intuitions. If there are no shared intuitions, ethical discourse will likely be unfruitful, so it is likely that not everyone will draw the same conclusions from the arguments here. 

 

What Is Speciesism?

Speciesism, a term popularized (but not coined) by the philosopher Peter Singer, is meant to be analogous to sexism or racism. It refers to a discriminatory attitude against a being whereby less ethical consideration (i.e., caring less about a being's welfare or interests) is given solely because of the "wrong" species membership. The "solely" here is crucial, and it is misunderstood often enough to warrant the redundant emphasis.

For instance, it is not speciesist to deny pigs the right to vote, just like it is not sexist to deny men the right to have an abortion performed on their body. Treating beings of different species differently is not speciesist if there are relevant criteria for doing so. 

Singer summarized his case against speciesism in this essay. The argument that does most of the work is often referred to as the argument from marginal cases. A perhaps less anthropocentric, more fitting name would be argument from species overlap, as some philosophers (e.g. Oscar Horta) have pointed out. 

The argument boils down to the question of choosing relevant criteria for moral concern. What properties do human beings possess that make us think that it is wrong to torture them? Or to kill them? (Note that these are two different questions.) The argument from species overlap points out that all the typical or plausible suggestions for relevant criteria apply equally to dogs, pigs or chickens as they do to human infants or late-stage Alzheimer patients. Therefore, giving less ethical consideration to the former would be based merely on species membership, which is just as arbitrary as choosing race or sex as a relevant criterion (further justification for that claim follows below).

Here are some examples of commonly suggested criteria. Readers may want to pause at this point and think about the criteria they consult for whether it is wrong to inflict suffering on a being (and, separately, those that are relevant to the wrongness of killing).

 

The suggestions are:

A: Capacity for moral reasoning

B: Being able to reciprocate

C: (Human-like) intelligence

D: Self-awareness

E: Future-related preferences; future plans

E': Preferences / interests (in general)

F: Sentience (capacity for suffering and happiness)

G: Life / biological complexity

H: What I care about / feel sympathy or loyalty towards

 

The argument from species overlap points out that not all humans are equal. The sentiment behind "all humans are equal" is not that they are literally equal, but that equal interests/capacities deserve equal consideration. None of the above criteria except (in some empirical cases) H imply that human infants or late-stage demented people should be given more ethical consideration than cows, pigs or chickens.

While H is an unlikely criterion for direct ethical consideration (it could justify genocide in specific circumstances!), it is an important indirect factor. Most humans have much more empathy for fellow humans than for nonhuman animals. While this is not a criterion for giving humans more ethical consideration per se, it is nevertheless a factor that strongly influences ethical decision-making in real life.

However, such factors can't apply to ethical reasoning at a theoretical/normative level, where all the relevant variables are looked at in isolation in order to come up with a consistent ethical framework that covers all possible cases.

If there were no intrinsic reasons for giving moral consideration to babies, then a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it. If we consider this implication to be unacceptable, then the same must apply for the situations nonhuman animals find themselves in on farms.

Side note: The question of whether killing a given being is wrong, and if so, "why" and "how wrong exactly", is complex and outside the scope of this article. The focus will be on suffering rather than killing, and by suffering I mean something like wanting to get out of one's current conscious state, or wanting to change some aspect of it. The empirical issue of which beings are capable of suffering is a different matter that I will (only briefly) discuss below. So in this context, giving a being moral consideration means that we don't want it to suffer, leaving open the question of whether killing it painlessly is bad/neutral/good or prohibited/permissible/obligatory.

The main conclusion so far is that if we care about all the suffering of members of the human species, and if we reject question-begging reasoning that could also be used to justify racism or other forms of discrimination, then we must also care fully about suffering happening in nonhuman animals. This would imply that x amount of suffering is just as bad, i.e. that we care about it just as much, in nonhuman animals as in humans, or in aliens or in uploads. (Though admittedly the latter wouldn't be anti-speciesist but rather anti-"substratist", or anti-"fleshist".)

The claim is that there is no way to block this conclusion without:

1. using reasoning that could analogically be used to justify racism or sexism
or
2. using reasoning that allows for hypothetical circumstances where it would be okay (or even called for) to torture babies, even in cases where utilitarian calculations would prohibit it.

I've tried and have asked others to try -- without success. 

 

Caring about suffering

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past. 

Some might be willing to bite the bullet at this point, following some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above) to the conclusion that humans who lack certain cognitive capacities are excluded from moral concern. One could point out that people's empathy, along with indirect considerations about human rights, societal stability and so on, will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point, after all, to care about all humans (or at least all humans outside their mother's womb). However, I don't see why absurd conclusions that will likely remain hypothetical would be significantly less bad than other absurd conclusions. Their mere possibility undermines the whole foundation one's decision algorithm is grounded in. (Compare hypothetical problems for specific decision theories.)

Furthermore, while D and E seem plausible candidates for reasons against killing a being with these properties (E is in fact Peter Singer's view on the matter), none of the criteria from A to E seem relevant to suffering, to whether a being can be harmed or benefitted. The case for these being bottom-up morally relevant criteria for the relevance of suffering (or happiness) is very weak, to say the least. 

Maybe that's the speciesist's central confusion: the idea that the rationality/sapience of a being is somehow relevant to whether its suffering matters morally. Clearly, for us ourselves, this does not seem to be the case. If I were told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not as though I would be less afraid of the torture, or care less about averting it!

Those who do consider biting the bullet should ask themselves whether they would have defended that view in all contexts, or whether they might be driven towards such a conclusion by a self-serving bias. There seems to be a strange and sudden increase in the frequency of people who are willing to claim that there is nothing intrinsically wrong with torturing babies when the subject is animal rights, or more specifically, the steak they intend to have for dinner.

It is an entirely different matter if people genuinely think that animals or human infants or late-stage demented people are not sentient. To be clear about what is meant by sentience: 

A sentient being is one for whom "it feels like something to be that being". 

I find it highly implausible that only self-aware or "sapient" beings are sentient, but if true, this would constitute a compelling reason against caring for at least most nonhuman animals, for the same reason that it would be pointless to care about pebbles for the pebbles' sake. If all nonhumans truly weren't sentient, then obviously singling out humans for the sphere of moral concern would not be speciesist.

What irritates me, however, is that anyone advocating such a view should, it seems to me, still have to factor in a significant probability of being wrong, given that both philosophy of mind and the neuroscience that goes with it are hard and, as far as I'm aware, not quite settled yet. The issue matters because of the huge numbers of nonhuman animals at stake and because of the terrible conditions these beings live in. 

I rarely see this uncertainty acknowledged. If we imagine the torture-scenario outlined above, how confident would we really be that the torture "won't matter" if our own advanced cognitive capacities are temporarily suspended? 

 

Why species membership really is an absurd criterion

At the beginning of the article, I wrote that I'd get back to this for those not convinced. Some readers may still feel that there is something special about being a member of the human species. Some may be tempted to think about the concept of "species" as if it were a fundamental concept, a Platonic form.

The following likely isn't news to most of the LW audience, but it is worth spelling it out anyway: There exists a continuum of "species" in thing-space as well as in the actual evolutionary timescale. The species boundaries seem obvious just because the intermediates kept evolving or went extinct. And even if that were not the case, we could imagine it. The theoretical possibility is enough to make the philosophical case, even though psychologically, actualities are more convincing.

We can imagine a continuous line-up of ancestors, always daughter and mother, from modern humans back to the common ancestor of humans and, say, cows, and then forward in time again to modern cows. How would we then divide this line up into distinct species? Morally significant lines would have to be drawn between mother and daughter, but that seems absurd! There are several different definitions of "species" used in biology. A common criterion -- for sexually reproducing organisms anyway -- is whether groups of beings (of different sex) can have fertile offspring together. If so, they belong to the same species. 

That is a rather odd way of determining whether one cares about the suffering of some hominid creature in the line-up of ancestors -- why should the capacity to produce fertile offspring, for instance, be relevant to whether some instance of suffering matters to us?

Moreover, is that really the terminal value of people who claim they only care about humans, or could it be that they would, upon reflection, revoke such statements?

And what about transhumanism? I remember that a couple of years ago, I thought I had found a decisive argument against human enhancement. I thought it would likely lead to speciation, and somehow the thought of that directly implied that posthumans would treat the remaining humans badly, and so the whole thing became immoral in my mind. Obviously this is absurd; there is nothing wrong with speciation per se, and if posthumans are anti-speciesist, then the remaining humans would have nothing to fear! But given the speciesism in today's society, it is all too understandable that people would be concerned about this. If we imagine the huge extent to which a posthuman, not to mention a strong AI, would be superior to current humans, isn't that a bit like comparing chickens to us?

A last possible objection I can think of: suppose one held the belief that group averages are what matter -- that all members of the human species deserve equal protection because of the group's average on some criterion that is considered relevant, even though that criterion would, without the group-average rule, deny moral consideration to some sentient humans.

This defense doesn't work either. Aside from seeming suspiciously arbitrary, such a view would imply absurd conclusions. A thought experiment for illustration: A pig with a macro-mutation is born; she develops child-like intelligence and the ability to speak. Do we refuse to allow her to live unharmed -- or even let her go to school -- simply because she belongs to a group (defined presumably by snout shape, or DNA, or whatever the criteria for "pigness" are) with an average that is too low?

Or imagine you are the head of an architecture firm looking to hire a new aspiring architect. Is tossing out an application written by a brilliant woman going to increase the expected success of your firm, assuming that women are, on average, less skilled at spatial imagination than men? Surely not!

Moreover, taking group averages as our ethical criterion requires us to first define the relevant groups. Why even take species-groups instead of groups defined by skin color, weight or height? Why single out one property and not others? 

 

Summary

Our speciesism is an anthropocentric bias without any reasonable foundation. It would be completely arbitrary to give special consideration to a being simply because of its species membership. Doing so would lead to a number of implications that most people clearly reject. A strong case can be made that suffering is bad in virtue of being suffering, regardless of where it happens. If the suffering or deaths of nonhuman animals deserve no ethical consideration, then human beings with the same relevant properties (of which all plausible ones seem to come down to having similar levels of awareness) deserve no intrinsic ethical consideration either, barring speciesism. 

Assuming that we would feel uncomfortable giving justifications or criteria for our scope of ethical concern that can analogously be used to defend racism or sexism, those not willing to bite the bullet about torturing babies are forced by considerations of consistency to care about animal suffering just as much as they care about human suffering. 

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments. 

Edit: As Carl Shulman has pointed out, discounting may also apply to "intensity of sentience", because it seems at least plausible that shrimps (for instance), if they are sentient, can experience less suffering than e.g. a whale.

Arguments Against Speciesism
476 comments

I agree that species membership as such is irrelevant, although it is in practice an extremely powerful summary piece of information about a creature's capabilities, psychology, relationship with moral agents, ability to contribute to society, responsiveness in productivity to expected future conditions, etc.

Animal happiness is good, and animal pain is bad. However, the word anti-speciesism, and some of your discussion, suggests treating experience as binary and ignoring quantitative differences, e.g. here:

Such a view leaves room for probabilistic discounting in cases where we are empirically uncertain whether beings are capable of suffering, but we should be on the lookout for biases in our assessments.

This leaves out the idea of the quantity of experience. In human split-brain patients, the hemispheres can experience and act quite independently without common knowledge or communication. Unless you think that the quantity of happiness or suffering doubles when the corpus callosum is cut, then happiness and pain can occur in substructures of brains, not just whole brains. And if intensive communication and coordination were enough to diminish moral value, why does this not apply...

I fully agree with this point; I should have mentioned it. I think "probabilistic discounting" should refer to both "probability of being sentient" and "intensity of experiences given sentience". I'm not convinced that (relative) brain size makes a difference in this regard, but I certainly wouldn't rule it out, so this indeed factors in probabilistically and I don't consider this to be speciesist.

Xodarap
Note that by this measure, ants are six times more important than humans. But to address your question: "speciesism" is not a label that's slapped on people who disagree with you. It's merely a shorthand way of saying "many people have a cognitive bias that humans are more 'special' than they actually are, and this bias prevents them from updating their beliefs in light of new evidence." Brain-to-body quotient is one type of evidence we should consider, but it's not a great one. The encephalization quotient improves on it slightly by considering the non-linearity of body size, but there are many other metrics which are probably more relevant.
CarlShulman
You linked to a page comparing brain-to-body-weight ratios, rather than any absolute features of the brain, and referring not to ants in general but to unusually miniaturized ants in which the rest of the body is shrunken. That seems pretty irrelevant. I was using total brain mass and neuron count, not brain-to-body-mass. I agree these are relevant evidence about quality of experience, and whether to attribute experience at all. But I would say that quality and quantity of experience are distinguishable (although the absence of experience implies quantity 0).
Lumifer
This statement implies that humans can be more or less special "actually", as if it were a matter of fact, of objective reality. That is not true, however. Humans are special in the same way a roast is tasty or a host charming. It is entirely in the eye of the beholder, it's a subjective opinion and as such there is no "actually" about it. Your point is equivalent to saying "many people have a cognitive bias that roses are more 'pretty' than they actually are".
Xodarap
As mentioned in the original post, the same can be said of race: I may subjectively prefer white people. You might bite the bullet here and say that yes, in fact, racism, sexism etc. are morally acceptable, but I think most people would agree that these -isms are wrong, and so speciesism must also be wrong.
Lumifer
Yes. That's perfectly fine. In fact, if you examine the revealed preferences (e.g. whom people prefer to have as their neighbours or whom they prefer to marry) you will see that most people in reality do prefer others of their own race. And, of course, the same can be said of sex, too. Unless you are an evenhanded bi, you're most certainly guilty of preferring some specific sex (or maybe gender, it varies).

"Morally acceptable" is a judgement; it is conditional on which morality you're using as your standard. Different moralities will produce different moral acceptability for the same actions. Perhaps you wanted to say "socially acceptable"? In particular, "socially acceptable in contemporary US"? That, of course, is a very different thing.

Sigh. This is a rationality forum, no? And you're using emotionally charged guilt-by-association arguments? (It's actually designed guilt-by-association, since the word "speciesism" was explicitly coined to resemble "racism", etc.) Warning: HERE BE MIND-KILLERS!
davidpearce
Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters? (If you were being ironic, apologies for being so literal-minded.)
NotInventedHere
I'm fairly sure it's for the examples referencing the politically charged issues of racism and sexism.
wedrifid
It can be levelled at most people who employ either of those terms.
Xodarap
I apologize for presenting the argument in a way that's difficult to understand. Here are the facts:

1. If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that sexism, racism, etc. are acceptable
2. We* don't believe that sexism, racism, etc. are acceptable
3. Therefore, we cannot accept arguments based on subjective opinions

Is there a better way to phrase this? (* "We" here means the broader LW community. I realize that you disagree, but I didn't know that at the time of writing.)
Said Achmiz
Y'got some... logical problems going on, there. Firstly, your (1), while true, is misleading; it should read "If you believe that subjective opinions which are not based on evidence are morally acceptable, then you must believe that [long, LONG, probably literally infinite list of possible views, of which sexism and racism may be members but which contains innumerably more other stuff] are morally acceptable". Sure, accepting beliefs without evidence may lead us to sexism and/or racism, but that's hardly our biggest problem at that point. Secondly, you presuppose that sexism and racism are necessarily not based on evidence. Of course, you may say that sexism and racism are by definition not based on evidence, because if there's evidence, then it's not sexist/racist, but that would be one of those "37 Ways That Bad Stuff Can Happen" or what have you; most people, after all, do not use your definition of "sexist" or "racist"; the common definition takes no notice of whether there's evidence or not. Thirdly, for every modus ponens there is a modus tollens — and, as in this case, vice versa: we could decide that "subjective" opinions not based on evidence are morally acceptable (after all, we're not talking about empirical matters, right? These are moral positions). This, by your (1) and modus ponens, would lead us to accept sexism and racism. Intended? Or no? Finally — and this is the big one — it strikes me as fundamentally backwards to start from broad moral positions, and reason from them to a decision about whether we need evidence for our moral positions.
Jiro
There's a bigger logical flaw: "belief that subjective opinions not based on evidence are acceptable" is an ambiguous English phrase. It can mean belief that:

1) if X is a subjective opinion, then X is acceptable
2) there exists at least one X such that X is a subjective opinion and is acceptable

Needless to say, the argument depends on it being #1, while most people who would say such a thing would mean #2. I believe that hairdryers are for sale at Wal-Mart. That doesn't mean that every hairdryer in existence is for sale at Wal-Mart.
Said Achmiz
Yes, good point — the "some" vs. "all" distinction is being ignored.
Xodarap
Good point, thank you. I have tried again here.
Xodarap
Thank you Said for your helpful comments. How is this:

1. Suppose we are considering whether being A is more morally valuable than being B. If we don't require evidence when making that decision, then lots of ridiculous conclusions are possible, including racism and sexism.
2. We don't want these ridiculous conclusions.
3. Therefore, when judging the moral worth of beings, the differentiation must be based on evidence.

Regarding your "Finally" point - I was responding to Lumifer's statement: I agree that most people wouldn't take this position, so my argument is usually more confusing than helpful. But in this case it seemed relevant.
Jiro
This has the same flaw as before, just phrased a little differently. "Suppose I am ordering a pizza. If we don't require it to be square, then all sorts of ridiculous possibilities are possible, such as a pizza a half inch wide and 20 feet long. We don't want these ridiculous possibilities, so we better make sure to always order square pizzas."

"If we don't require evidence, then ridiculous conclusions are possible" can be interpreted in English to mean:

1) In any case where we don't require evidence, ridiculous conclusions are possible.
2) In at least one case where we don't require evidence, ridiculous conclusions are possible.

Most people who think that the statement is true would be agreeing with it in sense #2, just like with the pizzas. And your argument depends on sense #1. In other words, you're assuming that if evidence isn't used to rule out racism, then nothing else can rule out racism either.
Xodarap
Fair enough. What if we replace (1) with:

1. If we allow subjective opinions, then ridiculous conclusions are possible.

Keep in mind that I was responding to Lumifer's comment: This is not intended to be a grand, sweeping axiom of ethics. I was just pointing out that allowing these subjective opinions proves more than we probably want.
Jiro
That still has the same flaw. If we allow any and all subjective opinions, then ridiculous conclusions are possible. But it doesn't follow that if we allow some subjective opinions, ridiculous conclusions are possible. And nobody's claiming the former.
Lumifer
The issue isn't whether you require evidence. The issue is solely which moral yardstick you are using. The "evidence" is the application of that particular moral metric to beings A and B, but it seems to me you should be more concerned with the metric itself. To give a crude and trivial example, if the metric is "Long noses are better than short noses", then the evidence is the length of the noses of A and B, and on the basis of this evidence we declare the long-nosed being A to be more valuable (conditional on this metric, of course) than the short-nosed being B. I don't think you'll be happy with this outcome :-) Oh, and you are still starting with the predefined conclusion and then looking for ways to support it.
solipsist
By the way, thank you for spelling out your position with a clear, valid argument that keeps the conversation moving forward. In the heat of argument we often forget to express our appreciation of well-posed comments.
Vaniver
This is not a core belief of the broader LW community. An actual core belief of the LW community:
wedrifid
I'm not sure that is quite true. It is controversial and many are not comfortable with it without caveats.
Lumifer
You keep using that word. I do not think it means what you think it means.

That's curious. My and your ideas of morality are radically different. There's even not that much of a common base. Let me start by re-expressing in my words how I read your position (so that you can fix my misinterpretations).

First, you're using "morally acceptable" without any qualifiers or conditionals. This means that you believe there is One True Morality, the Correct One, on the basis of which we can and should judge actions and opinions. Given your emphasis on "evidence", you also seem to believe that this One True Morality is objective, that is, can be derived from actual reality and proven by facts.

Second, you divide subjective opinions into two classes: "not based on evidence" and, presumably, "based on evidence". Note that this is not at all the same thing as "falsifiable" vs. "non-falsifiable". For example, let's say I try two kinds of wine and declare that I like the second wine better. Is such a subjective opinion "based on evidence"?

You also have major logic problems here (starting with the all/some issue), but it's a mess and I think other comments have addressed it.

To contrast, I'll give a brief outline of how I view morality. I think of morality as a more or less coherent set of values at the core of which is a subset of moral axioms. These moral axioms are certainly not arbitrary -- many factors influence them, the three biggest ones probably being biology, societal/cultural influence, and individual upbringing and history -- but they are not falsifiable. You cannot prove them right or wrong. Evidence certainly matters, but it matters mostly at the interface of moral values and actions: evidence tells you whether the actual outcomes of your actions match your intent and your values. It is, of course, often the case that they do not. However, evidence cannot tell you what you should want or what you should value.

Heh. I neither believe you have the power to spea
wedrifid
This does not follow. (It can be repaired by adding an "all" to the antecedent, but then the conclusion in '3' would not follow from 1 and 2.) Basically, no. Your argument is irredeemably flawed.
wedrifid
This does not follow.
Vaniver
The local explanation of this concept is the 2-place word, which I rather like.
MugaSofer
Well yes, yes it does. Even if "specialness" is defined purely within human neurology, that doesn't mean you can't apply its criteria to parts of reality and be objectively right or wrong about the result - just like, say, numbers. Now, you could argue that humans vary with regard to how "special" humanity is to them, I suppose ... but in practice we seem to have a common cause, generally. Alternately, you could complain that paperclippers disagree about our "specialness" (or rather mean something different by the term, since their specialness algorithm returns high values for paperclips and low ones for humans and rocks), and it is therefore insufficiently objective, but ...
Lumifer
I disagree. Here is the relevant difference: if you're using "special" unconditionally, you're only expressing a fuzzy opinion which is just that, an opinion. To get to the level of facts you need to make your "special" conditional on some specific standard or metric and thus convert it into a measurement. It's still the same as saying that prettiness of roses is objective. Unconditionally, it's not. But if you want to, you can define 'prettiness' sufficiently precisely to make it a measurement and then you can objectively talk about prettiness of roses.
MugaSofer
Indeed. The difference being that humans don't all have the same prettiness-metrics, which is why the comparison fails.
Lumifer
Humans all have the same specialness metrics?? I don't think so.
MugaSofer
Well, obviously some of them are biased in different directions ... but yeah, it looks to me like CEV coheres. EDIT: Unless I've completely misunderstood you somehow. Far from impossible.
Armok_GoB
Brain size or number of neurons might work within a general group such as "mammals"; however, birds, for example, seem to be significantly smarter in some sense than mammals with equivalently sized brains, probably owing to some difference in underlying architecture.
2Douglas_Knight
Do you have a specific bird and mammal in mind? Brain mass grows with body mass. It's so noisy that people can't decide whether it is the 2/3 or 3/4 power of body mass.* It is said that a mouse is as smart as a cow. What the cow is doing with all that gray matter, I don't know. Smart animals, like apes, dolphins, and ravens, have bigger brains than the trend line, but the deviation is small, so they have smaller brains than larger animals. From this point of view, saying that birds are smart for their brain size is just saying that they are small.

* Probably the right answer is 3/4, and 2/3 is just promoted by people who found 3/4 inexplicable, but Geoffrey West says that denominators of 4 are OK.
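The "deviation from the trend line" point can be made concrete with the encephalization quotient: actual brain mass divided by the mass the allometric trend line predicts. A minimal sketch, using Jerison's classic constant 0.12 and the 2/3 exponent; the constant, the exponent, and the mouse/cow masses below are approximate textbook values, not data from this thread:

```python
# Encephalization quotient (EQ): how far a brain sits above or below the
# allometric trend line brain_g = constant * body_g ** exponent.
# Constant 0.12 and exponent 2/3 follow Jerison; both are assumptions here.

def encephalization_quotient(brain_g, body_g, exponent=2/3, constant=0.12):
    """Ratio of actual brain mass to the mass predicted by the trend line."""
    return brain_g / (constant * body_g ** exponent)

# Rough masses: mouse ~0.4 g brain / 20 g body; cow ~450 g brain / 750 kg body.
mouse = encephalization_quotient(brain_g=0.4, body_g=20)
cow = encephalization_quotient(brain_g=450, body_g=750_000)

# Despite a ~1000x difference in absolute brain mass, both sit at a similar
# distance below the trend line - one way of cashing out
# "a mouse is as smart as a cow".
print(round(mouse, 2), round(cow, 2))  # → 0.45 0.45
```

With these numbers, the mouse and the cow land almost exactly at the same EQ, which is the sense in which absolute brain size and "smart for its size" come apart.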
0Armok_GoB
Well yeah. Although I guess mammals tend to have bigger brains relative to their bodies, so you'd still expect the opposite?
2CarlShulman
Some of the relevant differences to look at are energy consumption, synapses, relative emphasis on different brain regions, selective pressure on different functions, sensory vs cognitive processing, neuron and nerve size (which affects speed and energy use), speed/firing rates. I'm just introducing the basic point here. Also see my other point about the distinction between intelligence and experience.
0A1987dM
I think there's a link not showing due to broken formatting.
0[anonymous]
Fixed.
0jefftk
How small a subsystem can experience pleasure or pain? If we developed configurations specifically for this purpose and sacrificed all the other things you normally want out of a brain we could likely get far more sentience per gram of neurons than you get with any existing brain. If someone built a "happy neuron farm" of these, would that be a good thing? Would a "sad neuron farm" be bad? EDIT: expanded this into a top level post.
3CarlShulman
I don't think that we should be confident that such things are all that matter (indeed, I think that's not true), or that the value is independent of features like complexity (a thermostat program vs an autonomous social robot). I would answer "yes" and "yes," especially in expected value terms.
0DanArmak
Isn't it better to consider brain-to-body mass ratios? A lion isn't 1.5 orders of magnitude smarter than a housecat. I wouldn't assume that quantity of experience is linear in the number of neurons.
3CarlShulman
Computer performance in chess (among many other things) scales logarithmically or worse with computer speeds/hardware. Humans with more time and larger collaborating groups also show diminishing returns. But if we're talking about reinforcement learning and sensory experience in themselves, we're not interested in the (sublinear) usefulness of scaling for intelligence, but the number of subsystems undergoing the morally relevant processes. Neurons are still a rough proxy for that (details of the balance of nervous system tissue between functions, energy supply, firing rates, and other issues would matter substantially), but should be far closer to linear.
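The contrast drawn here - performance scaling roughly logarithmically with resources, while a count of subsystems scales linearly - can be sketched in a toy model. All of the numbers below (base rating, Elo per doubling, neurons per subsystem) are illustrative assumptions, not claims from the comment:

```python
import math

def toy_chess_rating(hardware_units, base=2000, gain_per_doubling=60):
    """Assumed model: rating grows by a constant amount per hardware doubling."""
    return base + gain_per_doubling * math.log2(hardware_units)

def subsystem_count(neurons, neurons_per_subsystem=1000):
    """Linear proxy: the number of subsystems grows in proportion to neurons."""
    return neurons / neurons_per_subsystem

# ~1000x more hardware buys only ten doublings' worth of rating ...
print(toy_chess_rating(1), toy_chess_rating(1024))      # → 2000.0 2600.0
# ... while 1000x more neurons means ~1000x more subsystems.
print(subsystem_count(10**6), subsystem_count(10**9))   # → 1000.0 1000000.0
```

The point of the sketch is only the shape of the two curves: intelligence-like outputs flatten out as resources grow, whereas a tally of "processes undergoing the morally relevant thing" does not.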
[-]jefftk 300

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above), to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on, will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point after all to care about all humans (or at least all humans outside their mother's womb).

This is pretty much my view. You dismiss it as unacceptable and absurd, but I would be interested in more detail on why you think that.

a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it

This definitely hits the absurdity heuristic, but I think it is fine. The problem with the Babyeaters in Three Worlds Collide is not that they eat their young but that "the alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering i...

Your view seems consistent. All I can say is that I don't understand why intelligence is relevant for whether you care about suffering. (I'm assuming that you think human infants can suffer, or at least don't rule it out completely, otherwise we would only have an empirical disagreement.)

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Me too. But we can control for memories by comparing the scenario I outlined with a scenario where you are first tortured (in your normal mental state) and then have the memory erased.

Speciesism has always seemed like a straw-man to me. How could someone with a reductionist worldview think that species classification matters morally?

You're right, it's not a big deal once you point it out. The interesting thing is that even a lot of secular people will at first (and sometimes even afterwards) bring arguments against the view that animals matter that don't stand the test of the argument of species overlap. It seems like they simply aren't thinking through all the implications of what they are saying, as if it isn't their true rejection. Having said that, there is always the option of biting the bullet, but many people who argue against caring about nonhumans don't actually want to do that.

8jefftk
I definitely think human infants can suffer, but I think their suffering is different from that of adult humans in an important way. See my response to Xodarap.
4atucker
Intelligence is relevant for the extent to which I expect alleviating suffering to have secondary positive effects. Since I expect most of the value of suffering alleviation to come through secondary effects on the far future, I care much more about human suffering than animal suffering. As far as I can tell, animal suffering and human suffering are comparably important from a utility-function standpoint, but the difference in EV between alleviating human and animal suffering is huge -- the difference in potential impact on the future between a suffering human vs a non-suffering human is massive compared to that between a suffering animal and a non-suffering animal. Basically, it seems like alleviating one human's suffering has more potential to help the far future than alleviating one animal's suffering. A human who might be incapacitated to say, deal with x-risk might become helpful, while an animal is still not going to be consequential on that front. So my opinion winds up being something like "We should help the animals, but not now, or even soon, because other issues are more important and more pressing".
1threewestwinds
I agree with this point entirely - but at the same time, becoming vegetarian is such a cheap change in lifestyle (given an industrialized society) that you can have your cake and eat it too. Action - such as devoting time / money to animal rights groups - has to be balanced against other action - helping humans - but that doesn't apply very strongly to inaction - not eating meat. You can come up with costs - social, personal, etc. - to being vegetarian, but remember to weigh those costs on the right scale. And most of those costs disappear if you merely reduce meat consumption, rather than eliminate it outright.
3Jiro
By saying this, you're trying to gloss over the very reason why becoming vegetarian is not a cheap change. Human beings are wired so as not to be able to ignore having to make many minor decisions or face many minor changes, and the fact that such things cannot be ignored means that being vegetarian actually has a high cost which involves being mentally nickel-and-dimed over and over again. It's a cheap change in the sense that you can do it without paying lots of money or spending lots of time, but that isn't sufficient to make the choice cheap in all meaningful senses. Or to put it another way, being a vegetarian "just to try it" is like running a shareware program that pops up a nag screen every five minutes and occasionally forces you to type a random phrase in order to continue to run. Sure, it's light on your pocketbook, doesn't take much time, and reading the nag screens and typing the phrases isn't difficult, but that's beside the point.
0threewestwinds
As has been mentioned elsewhere in this conversation, that's a fully general argument - it can be applied to every change one might possibly make in one's behavior. Let's enumerate the costs, rather than just saying "there are costs."

* Money-wise, you save or break even.
* It has no time cost in much of the US (most restaurants have vegetarian options).
* The social cost depends on your situation - if you have people who cook for you, then you have to explain the change to them (in Washington state, this cost is tiny - people are understanding. In Texas, it is expensive).
* The mental cost is difficult to discuss in a universal way. I found it to be rather small in my own case. Other people claim it to be quite large.

But "I don't want to change my behavior because changing behavior is hard" is not terribly convincing. Your discounting of non-human life has to be rather extreme for "I will have to remind myself to change my behavior" to outweigh an immediate, direct, and calculable reduction in world suffering.
3Said Achmiz
This is false. Unless you eat steak or other expensive meats on a regular basis, meat is quite cheap. For example, my meat consumption is mostly chicken, assorted processed meats (salamis, frankfurters, and other sorts of sausages, mainly, but also things like pelmeni), fish (not the expensive kind), and the occasional pork (canned) and beef (cheap cuts). None of these things are pricy; I am getting a lot of protein (and fat and other good/necessary stuff) for my money. Do you eat at restaurants all the time? Learning how to cook the new things you're now eating instead of meat is a time cost. Also, there are costs you don't mention: for instance, a sudden, radical change in diet may have unforeseen health consequences. If the transition causes me to feel hungry all the time, that would be disastrous; hunger has an extreme negative effect on my mental performance, and as a software engineer, that is not the slightest bit acceptable. Furthermore, for someone with food allergies, like me, trying new foods is not without risk.
3Jiro
And it would be correct to deny that a change that would possibly be made to one's behavior is "such a cheap change" that we don't need to weigh the cost of the change very much. That only applies to someone who already agrees with you about animal suffering to a sufficient degree that he should just become a vegetarian immediately anyway. Otherwise it's not all that calculable.
6Xodarap
I wasn't able to glean this from your other article either, so I apologize if you've said it before: do you think non-human animals don't suffer? Or do you believe they suffer, but you just don't care about their suffering? (And in either case, why?)
1jefftk
I think suffering is qualitatively different when it's accompanied by some combination, which I don't fully understand, of intelligence, self-awareness, preferences, etc. So yes, humans are not the only animals that can suffer, but they're the only animals whose suffering is morally relevant.

jkaufman, the dimmer-switch metaphor of consciousness is intuitively appealing. But consider some of the most intense experiences that humans can undergo, e.g. orgasm, raw agony, or blind panic. Such intense experiences are characterised by a breakdown of any capacity for abstract rational thought or reflective self-awareness. Neuroscanning evidence, too, suggests that much of our higher brain function effectively shuts down during the experience of panic or orgasm. Contrast this intensity of feeling with the subtle and rarefied phenomenology involved in e.g. language production, solving mathematical equations, introspecting one's thought-episodes, etc - all those cognitive capacities that make mature members of our species distinctively human. For sure, this evidence is suggestive, not conclusive. But the supportive evidence converges with e.g. microelectrode studies using awake human subjects. Such studies suggest the limbic brain structures that generate our most intense experiences are evolutionarily very ancient. Also, the same genes, same neurotransmitter pathways and same responses to noxious stimuli are found in our fellow vertebrates. In view of how humans treat nonhumans, I think we ought to be worried that humans could be catastrophically mistaken about nonhuman animal sentience.

0Kawoomba
"Accompanied" can also mean "reflected upon after the fact". I agree with your last sentence though.

How certain are you that there is such a qualitative difference, and that you want to care about it? If there is some empirical (or perhaps also normative) uncertainty, shouldn't you at least attribute some amount of concern for sentient beings that lack self-awareness?

2thebestwecan
I second this. Really not sure what justifies such confidence.
1Xodarap
It strikes me that the only "disagreement" you have with the OP is that your reasoning isn't completely spelled out. If you said, for example, "I don't believe pigs' suffering matters as much because they don't show long-term behavior modifications as a result of painful stimuli" that wouldn't be a speciesist remark. (It might be factually wrong, though.)
0Emile
There's something missing at the end, like "... is morally relevant", right?
0jefftk
Fixed; thanks!
4Estarlio
How do you avoid it being kosher to kill you when you're asleep - and thus unable to perform at your usual level of consciousness - if you don't endorse some version of the potential principle? If you were to sleep and never wake, then it wouldn't necessarily seem wrong, even from my perspective, to kill you. It seems like it's your potential for waking up that makes it wrong.
6jefftk
Killing me when I'm asleep is wrong for the same reason as killing me instantly and painlessly when I'm awake is wrong. Both ways I don't get to continue living this life that I enjoy. (I'm not as anti-death as some people here.)
0Estarlio
So, presumably, if you were destined for a life of horrifying squicky pain some time in the next couple of weeks, you'd approve of me just killing you. I mean, ideally you'd probably like to be killed as close to the point of HSP as possible, but still, the future seems pretty important when determining whether you want to persist - it's even in the text you linked. So, bearing in mind that you don't always seem to be performing at your normal level of thought - e.g. when you're asleep - how do you bind that principle so that it applies to you and not infants?
0jefftk
I don't think you should kill infants either, again for the "effect it has on those that remain and because it removes the possibility for future joy on the part of the deceased" logic.
0Estarlio
How do you reconcile that with:
2jefftk
The "as long as the people are ok with it" deals with the "effect it has on those that remain". The "removes the possibility for future joy on the part of the deceased" remains, but depending on what benefits the society was getting out of consuming their young it might still come out ahead. The future experiences of the babies are one consideration, but not the only one.
0Estarlio
Granted, but do you really think that they're going to be so incredibly tasty that the value people gain from eating babies over not eating babies outweighs the loss of all the future experiences of the babies? To link that back to the marginal cases argument, which I believe - correct me if I'm wrong - you were responding to: Do you think that meat diets are just that much more tasty than vegetarian diets that the utility gained for human society outweighs the suffering and death of the animals? (Which may not be the only consideration, but I think at this point - may be wrong - you'd admit isn't nothing.) If so, have you made an honest attempt to test this assumption for yourself by, for instance, getting a bunch of highly rated veg recipes and trying to be vegetarian for a month or so?
3jefftk
The value a society might get from it isn't limited to taste. They could have some sort of complex and fulfilling system set up around it. But I think you're right, that any world I can think of where people are eating (some of) their babies would be improved by them switching to stop doing that. The "loss of all the future experiences of the babies" bit doesn't apply here. Animals stay creatures without moral worth through their whole lives, and so the "suffering and death of the animals" here has no moral value.
0Estarlio
Pigs can meaningfully play computer games. Dolphins can communicate with people. Wolves have complex social structures and hunting patterns. I take all of these to be evidence of intelligence beyond the battery-farmed-infant level. They're not as smart as humans, but it's not like they've got 0 potential for developing intelligence. Since birth seems to deprive you of a clear point in this regard - what's your criterion for being smart enough to be morally considerable, and why?
1rocurley
If you're considering opening a baby farm, not opening the baby farm doesn't mean the babies get to live fulfilling lives: it means they don't get to exist, so that point is moot.
0Estarlio
If you view human potential as valuable, then you end up saying something like: people should maximise it via breeding, up to whatever the resource boundary is for meaningful human life. Unless that is implicitly bound - which I think is a reasonable assumption to make for most people's likely worldviews.
4Jabberslythe
Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination? What if you were killed immediately afterwards, so long term memories wouldn't come into play?
4jefftk
If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn't matter morally and because I wouldn't be "me" anymore in any meaningful sense. If you offered me the choice between:

A) 50% chance you are tortured and then released, 50% chance you are killed immediately
B) 50% chance you are tortured and then killed, 50% chance you are released immediately

I would strongly prefer B. Is that what you're asking?
0Jabberslythe
If not morally, do the two situations not seem equivalent in terms of your non-moral preference for either? In other words, would you prefer one over the other in purely self interested terms? I was just making the point that if your only reason for thinking that it would be worse for you to be tortured now was that you would suffer more overall through long term memories we could just stipulate that you would be killed after in both situations so long term memories wouldn't be a factor.
1jefftk
I'm sorry, I'm confused. Which two situations? I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I objected to being tortured, but it's not the only way.
1Jabberslythe
A) Being tortured as you are now
B) Having your IQ and cognitive abilities lowered, then being tortured.

EDIT: I am asking because it is useful to consider pure self-interest: it seems like a failure of a moral theory if it suggests people act against their self-interest without some compensating goodness. If I want to eat an apple but my moral theory says I shouldn't, even though doing so wouldn't harm anyone else, that seems like a point against that moral theory. Different cognitive abilities would matter in some ways for how much suffering is actually experienced, but not as much as most people think. There are also situations where it seems like lowered cognitive ability could increase the amount an animal suffers by: while a chicken is being tortured, it would not really be able to hope that the situation will change.
0jefftk
Strong preference for (B), having my cognitive abilities lowered to the point that there's no longer anyone there to experience the torture.
1MugaSofer
Those are not the same thing. They're not even remotely similar beyond both involving brain surgery. Me too, but I never could persuade the people arguing for it of this fact :(
1jefftk
Agreed. I was attempting to give an example of other ways in which I might find torture more palatable if I were modified first. Right, which is why this argument isn't actually a straw-man and why ice9's post is useful.
0MugaSofer
Ah, OK. Hah, yes. Sorry, I thought you were complaining it was actually a strawman :/ Whoops.

I strongly object to the term "speciesism" for this position. I think it promotes a mindkilled attitude to this subject ("Oh, you don't want to be speciesist, do you? Are you also a sexist? You pig?").

You pig?

Speciesist language, not cool!

Haha! Anyway, I agree that it promotes a mindkilled attitude (I'm often reading terrible arguments by animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness. And the parallels to racism or sexism are valid, I think.

[-]Zvi 140

Haha only serious. My brain reacts with terror to that reply, with good reason: It has been trained to. You're implicitly threatening those who make counter-arguments with charges of every ism in the book. The number of things I've had to erase because one "can't" say them without at least ending any productive debate, is large.

Haha! Anyway, I agree that it promotes a mindkilled attitude (I'm often reading terrible arguments by animal rights people), but on the other hand, for those who agree with the arguments, it is a good way to raise awareness.

I don't think that's a "but on the other hand"; I think that's an "it is a good way to raise awareness because it promotes a mindkilled attitude."

0Said Achmiz
Actually, I think it's precisely the parallels to racism and sexism that are invalid. Perhaps ableism? That's closer, at any rate, if still not really the same thing.
8Zvi
It's not only the term. The post explicitly uses that exact argument: Since sexism and racism are wrong, and any theoretical argument that disagrees with me can be used to argue for sexism or racism, if you disagree with me you are a sexist, which is QED both because of course you aren't sexist/racist and because regardless, even if you are, you certainly can't say such a thing on a public forum!
9Lukas_Gloor
No no no. I'm not saying "since sexism and racism are wrong" - I'm saying that those who don't want their arguments to be of a sort that could analogously justify racism or sexism (even if they are neither racist nor sexist) would also need to reject speciesism.
0Zvi
Mindkilling-related issues aside, I am going to do my best to un-mindkill at least one aspect of this question, which is why the frame change. Is this similar to arguing that if the bloody knife was the subject of an illegal search, which we can't allow because allowing that would lead to other bad things, and therefore is not admissible in trial, then you must not only find the defendant not guilty but actually believe that the defendant did not commit the crime and should be welcome back to polite society?
3Lukas_Gloor
No, what makes the difference is that you'd be mixing up the normative level with the empirical one, as I explained here (parent of the linked post also relevant).
0Zvi
In that post, you seem to be making the opposite case: that you should not reject X (animal testing) simply because your argument could be used to support repugnant proposal Y (unwilling human testing). You say that the indirect consequences of Y would be very bad (as they obviously would), but then you don't argue that one must reject X; instead, you say that you should support X but reject Y for unrelated reasons, and that you are not required to disregard argument Q, which supports both X and Y, and thereby reject X (assuming X was in fact utility-increasing). Or: the fact that a given argument can be used to support a repugnant conclusion (sexism or racism) should not be a justification for not using that argument.

In addition, the argument that brain complexity scales moral value, which you now accept as an edit, is obviously usable to support sexism and racism, in exactly the same way that you are using as a counterargument: for any given characteristic, different people will have different amounts of that characteristic, and for any two groups (male/female, black/white, young/old, whatever) there will be a statistical difference in that measurement (because this isn't physics, and equality has probability epsilon, however small the difference). So if you tie any continuous measurement to your moral valuation, or any measurement that could ever not fully apply to anything human, you're racist and sexist.
1Lukas_Gloor
Exactly. This is because the overall goal is increasing utility, and not a societal norm of non-discrimination. (This is of course assuming that we are consequentialists.) My arguments against discrimination/speciesism apply at the normative level, when we are trying to come up with a definition of utility. I wouldn't classify this as sexism/racism. If there are sound reasons for considering the properties in question relevant, then treating beings of different species differently because of a correlation between species, and not because of the species difference itself, is in my view not a form of discrimination. As I wrote:
3Xodarap
It's not sexist to say that women are more likely to get breast cancer. This is a differentiation based on sex, but it's empirically founded, so not sexist. Similarly, we could say that ants' behavior doesn't appear to be affected by narcotics, so we should discount the possibility of their suffering. This is a judgement based on species, but is empirically founded, so not speciesist. Things only become ___ist if you say "I have no evidence to support my view, but consider X to be less worthy solely because they aren't in my race/class/sex/species." I genuinely don't think anyone on LW thinks speciesism is OK.
8Said Achmiz
You evade the issue, I think. Is it sexist (or _ist) if you say "I consider X to be less worthy because they aren't in my race/class/sex/species, and I do have evidence to support my view"? Sure, saying women are more likely to get breast cancer isn't sexist; but this is a safe example. What if we had hard evidence that women are less intelligent? Would it be sexist to say that, then? (Any objection that contains the words "on average" must contend with the fact that any particular woman may have a breast cancer risk that falls anywhere on the distribution, which may well be below the male average.) No one is saying "I think pigs are less worthy than humans, and this view is based on no empirical data whatever; heck, I've never even seen a pig. Is that something you eat?" We have tons of empirical data about differences between the species. The argument is about exactly which of the differences matter, and that is unlikely to be settled by passing the buck to empiricism.
3MugaSofer
Upvoted just for this.
2A1987dM
I wouldn't say it is, but other people would use the word “sexist” with a broader sense than mine (assuming that each person defines “sexism” and “racism” in analogous ways).
1Xodarap
No. Because your statement "X is less worthy because they aren't of my gender" in that case is synonymous with "X is less worthy because they lack attribute Y", and so gender has left the picture. Hence it can't be sexist.
4Said Achmiz
Ok, but if you construe it that way, then "X is less worthy just because of their gender" is a complete strawman. No one says that. What people instead say is "people of type T are inferior in way W, and since X is a T, s/he is inferior in way W". Examples: "women are less rational than men, which is why they are inferior, not 'just' because they're women"; "black people are less intelligent than white people, which is why they are inferior, not 'just' ..."; etc. By your construal, are these things not sexist/racist? But then neither is this speciesist: "nonhumans are not self-aware, unlike humans, which is why they are inferior, not 'just' because they're nonhumans".
3Xodarap
I think we are getting into a discussion about definitions, which I'm sure you would agree is not very productive. But I would absolutely agree that your statement "nonhumans are not self-aware, unlike humans, which is why they are inferior, not 'just' because they're nonhumans" is not speciesist. (It is empirically unlikely though.)
0Said Achmiz
Agreed entirely, let's not argue about definitions. Do we disagree on questions of fact? On rereading this thread, I suspect not. Your thoughts?
1Xodarap
I think so? You seem to have indicated in a few comments that you don't believe nonhuman animals are "self-aware" or "conscious" which strikes me as an empirical statement? If this is true (and I give at least 30% credence that I've just been misunderstanding you), I'd be interested to hear why you think this. We may not end up drawing the moral line at the same place, but I think consciousness is a slippery enough subject that I at least would learn something from the conversation.
-1Said Achmiz
Ok. Yes, I think that nonhuman animals are not self-aware. (Dolphins might be an exception. This is a particularly interesting recent study.) Dolphins aside, we have no reaso