This post: https://slatestarcodex.com/2019/03/26/cortical-neuron-number-matches-intuitive-perceptions-of-moral-value-across-animals/

begs a very important question as part of its central premise. Even given the idea that animals have moral weight, it does not follow that there exists some number of them that is of equal moral weight to a human (or to another kind of animal, etc.). There exist infinite series that sum to finite values.
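
For instance (a toy illustration, not something the linked post commits to): if the n-th additional animal contributes a marginal moral weight of $c \cdot r^{n-1}$ for some constant $c > 0$ and discount factor $0 < r < 1$, then the total weight of arbitrarily many of them is bounded:

$$\sum_{n=1}^{\infty} c\,r^{n-1} = \frac{c}{1-r},$$

which can be smaller than the weight of one human, in which case no number of such animals, however large, ever outweighs a human.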

It seems pretty clear to me that moral weight is not linearly additive. This is why we consider it worse when a species goes from 1,000 members to 0 members than when it goes from 1,001,000 members to 1,000,000 members, for instance.

For anyone who does think that both 1) chickens have non-zero moral value, and 2) moral value is linearly additive: are you willing to bite the bullet that there exists a number of chickens such that it would be better to cause that many chickens to continue to exist at the expense of wiping out all other sentient life forever? This seems so obviously false, and also so obviously the first thing to think of when considering 1 and 2, that I am confused that there exist folks who accept 1 and 2.

My position is "chickens have non-zero moral value, and moral value is not linearly additive." That is, any additional chicken suffering is bad, any additional chicken having a pleasant life is good, and the total moral value of all chickens, as the number of chickens approaches infinity, is something like 1/3rd of a human.
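
(To make that concrete with made-up constants, purely for illustration: if the k-th chicken contributes $\tfrac{1}{6}\cdot\left(\tfrac{1}{2}\right)^{k-1}$ human-equivalents of value, then the total is $\sum_{k=1}^{\infty} \tfrac{1}{6}\left(\tfrac{1}{2}\right)^{k-1} = \tfrac{1/6}{1-1/2} = \tfrac{1}{3}$ of a human, exactly the kind of asymptote described above.)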

I think 1, and 2 only in a limited sense. I suspect that moral weight is very close to linear for small changes (the kind that 99% of historical humanity, and maybe 98% of future humanity, experience), but diverges greatly when talking about extremes. So "shut up and multiply" works just fine for individual human-scale decisions, and linear calculations do very well in daily life. But I don't accept any craziness from bizarre thought experiments.
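
One family of functions with this shape (purely illustrative; nothing here depends on the exact form) is a value curve that is nearly linear for small counts but saturates at a cap $W$:

$$V(n) = W\left(1 - e^{-n/W}\right), \qquad V(n) \approx n \ \text{when } n \ll W,$$

so "shut up and multiply" is an excellent approximation for human-scale decisions, while the extremes of bizarre thought experiments are capped.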

A thought experiment that others have proposed to me when I brought up intuitions like this:

If it is worse for a species to go from 1000 members to 0 members than for it to go from 1,001,000 members to 1,000,000 members, then you start behaving very differently based on slightly different interpretations of physics. Compare your actions in the following scenarios:

  • There exists a planet just outside of humanity's light-cone that has one million elephants on it; nothing you do can ever interact with them. How much would you pay to ensure that the last one thousand elephants still living on earth survive?
  • There does not exist another elephant anywhere else in the universe. How much would you be willing to pay to save the last one thousand elephants on earth?
  • The universe is infinite and even in areas without life every 10^10^10 light years an elephant randomly assembles due to quantum fluctuations. How much would you be willing to pay for the last 1000 elephants on earth?
  • The universe is finite but the quantum multiverse contains many worlds in which elephants exist, though you can never experience them. How much do you pay for the last 1000 elephants on earth?

These arguments do not show definitively that you can't have this kind of diminishing returns, but they do mean that your morality starts depending on very nuanced facts about cosmology.

Personally, I would modus tollens this and take it as an example of why it's absurd to morally value things in other universes or outside my light cone.

Do you bite the bullet that this means the set of things you morally value changes discontinuously and predictably as things move out of your light cone? (Or is there some way you value things less as they are less "in" your light cone, in some nonbinary way?)

It seems natural to weight things according to how much I expect to be able to interact with them.

Obviously that means my weightings can change if I unexpectedly gain or lose the ability to interact with things, but I can't immediately think of any major problems with that.

Yeah, that feels like one of the hypotheses I am attracted to, though it feels wrong for a variety of other reasons.

Yeah, that is also roughly my response, though the other thought experiment I suggested in another comment feels very similar to me, and that reduction doesn't really work in that case (though similar reductions kind of do). Interested in your response to that question.

My answers (contingent on the view OP describes, which is close enough to my own view for the sake of this discussion) would, of course, be the same in each of these scenarios. Why would they be different?

Do you have an intuition that the last 1000 elephants (or humans, if you consider elephants to have weight zero or some such) are more valuable than 1000 elephants/humans out of an existing million?

(It's not obvious you should have such an intuition, just that it seemed like the conversation was predicated on that, and the most obvious reason it'd be the case is that one might care more about preserving a species' existence at all [for various possible reasons] than about it being slightly more numerous.)

That is the view the OP describes (with respect to elephants—it’s different for humans), and as I said, my own view is effectively the same, for the purpose of the current conversation.

of course

As someone who also roughly believes what is outlined in the OP, I don't think I would pay the same in each of these scenarios. I do have a model that I "should" pay the same in each scenario, but not in a way that I can easily externalize, and definitely not in a way that is obvious to me.

I don’t think I understand what you mean by these two statements, taken together:

I don’t think I would pay the same in each of these scenarios

I do have a model that I “should” pay the same in each scenario

So… you do or you don’t think that it’s right to pay the same in each given case…?

If you were to put a gun to my head and force me to give an answer in a minute, I think (though it might honestly depend on the day) I would probably pay the same in each scenario. Though I would assign significant probability to having made the "wrong" choice (and I wish I could give you a clear and precise definition of what I mean by "wrong" here, but my metaethics have not reached reflective equilibrium, so the closest thing I have is just "I would prefer to have made a different choice, given 10,000 more years to think, greater intelligence, all the world's knowledge at my fingertips, etc.").

Internally, this situation feels like I have something closer to an inside-view gears-like model that predicts that I should pay different amounts, combined with a "conservation of energy"/"dutch-book" like model that tells me that if I were to pay different amounts, I would have to be irrational in some way, even if I don't know how precisely yet.

In all but the second scenario, more than 1,000,000 elephants do indeed "exist" (though the point of the exercise is at least in part to poke at what it means for something to exist), and so, based on the argument made above, the marginal 1000 elephants should be worth less in the first scenario (where losing them moves the total number of elephants from 1,001,000 to 1,000,000) than in the second (where losing them moves it from 1,000 to 0).

Continuing in the tradition of socratic questioning, if you would respond with the same amount in all the scenarios above, would you also respond the same if there were 1 million elephants buried deep underground in a self-sustaining bunker on a different planet in our solar system, and you would never expect to interact with them further? Would your answer change if there was an easily available video-feed of the elephants that you could access from the internet?

would you also respond the same if there were 1 million elephants buried deep underground …

Yes.

Would your answer change if there was an easily available video-feed of the elephants that you could access from the internet?

No.

I do want to note, however, that you have transitioned from “slightly different interpretations of physics” and “very nuanced facts about cosmology” to “extremely improbable counterfactual scenarios”. Those are importantly different categories of hypothetical scenario.

That aside, however:

In all but the second scenario, more than 1,000,000 elephants do indeed "exist"

That is not the relevant consideration. From the OP:

This is why we consider it worse when a species goes from 1,000 members to 0 members than when it goes from 1,001,000 members to 1,000,000 members, for instance.

If all the elephants on Earth die, but elephants still exist in an alternate universe, it is not correct to say that “the elephant species yet survives”. Rather, the appropriate description would be “the elephant species has gone extinct; matters may, however (in this as in other things), be different in some alternate universe”.

Your 1st and 3rd scenario (i.e., the other ones where some extra-terrestrial elephants remain) similarly do not introduce any interesting facts about the elephant species.

Continuing more with the thought experiments, since I find your answers (as well as your confidence in them) surprising. I have a sense that you believe that the responses to these questions are obvious, and if that is true, I would be interested in whether you can generate an explanation that makes them obvious to me as well.

Let's imagine the inverse scenario, in which you travel in a spaceship away from earth to a new planet far away and never expect to come back. The new planet has a million elephants on it; old earth has only 1000 elephants left on it. Now imagine the alternative scenario in which you never leave earth, stay with the 1000 elephants, and never expect to leave for any other planet. Would you pay the same amount to save the 1000 elephants on earth in either case?

To make this concrete, the two compared scenarios are:

1. You are on earth, there are a million elephants on a faraway planet you never expect to see, and you are offered a trade to save the last 1000 elephants on earth

2. You are on a distant planet with a million elephants on it; far-away earth's last 1000 elephants are about to die, and you are offered a trade to save them

If so, how is this different from the scenario in which there are a million elephants in a bunker you will never visit? Also, does this mean that your moral evaluation of the same group of animals changes as you travel in a spaceship from one planet to another?

(Also, in considering these, try to control for as much of the secondary benefits of elephants as possible. I.e. maybe imagine that you are the last human and try to account for the potential technological, hedonic and cultural benefits of having elephants around)

If all the elephants on Earth die, but elephants still exist in an alternate universe, it is not correct to say that “the elephant species yet survives”. Rather, the appropriate description would be “the elephant species has gone extinct; matters may, however (in this as in other things), be different in some alternate universe”.

I don't think everyone who agrees with the OP would agree with this statement. At least I do not. Though this feels more like arguing definitions in a way that is less likely to result in much productive discourse.

Well, as far as the prevalence of my view goes, I can’t speak to that. But I do not think this is a matter of arguing definitions—rather, it’s a real difference in values. Lanrian’s comment elsethread mentions one important category of difference here.

Oh, sorry. I didn't mean to imply that there isn't a real difference here. I was just commenting on the specific statement "the appropriate description would be" which does seem primarily to be a statement about wording, and not about ethics.

My reply to all of those is "I do not believe you. This sounds like an attempt at something akin to Pascal's Mugging. I do not take your imaginary elephants into consideration for the same reason I do not apply moral weight to large numbers of fictional elephants in a novel."

[Note: this comment comes with small amounts of attempted mindreading, which I think one should be careful with when arguing online. If this doesn't feel like a fair stab at what you felt your underlying reasoning was, apologies]

If I (Raemon) had said the sentence you just said, my motivation would have been more likely to be a defense against clever, manipulative arguers (a quite valuable thing to have a defense against) than an attempt to have a robust moral framework that can handle new information I might learn about weird edge cases.

Say that rather than a person coming to you and giving you a hypothetical example, the person reflecting upon the hypothetical elephants is you, after having existed for long enough that you've achieved all your most pressing goals, and having actually studied cosmology yourself and come to the conclusion that the hypothetical elephants most likely exist.

I think it makes sense for people not to worry about moral quandaries that aren't relevant to them when they have more pressing things to worry about. I think it's important not to over-apply the results of thought experiments (i.e. in real life there's no way you could possibly know that pushing a fat man off a bridge will stop a trolley and save five lives).

But insofar as we're stepping into the domain of "figure out ethics for real, in a robust fashion", it seems useful to be able to seriously entertain thought experiments, so long as they come properly caveated with "As long as I'm running on human hardware I shouldn't make serious choices about hypothetical elephants."

I'd be somewhat surprised if the bgaesop-who's-studied-cosmology (having decided that ironing out their moral edge cases was their top priority, wanting to account for moral uncertainty, and so having actually done their own research to figure out whether elephants-outside-the-lightcone existed or mattered) would end up saying that the reason they don't matter is the same reason fictional elephants don't matter. (Although I can more easily imagine hypothetical future you deciding they didn't matter for other reasons.)

It seems to me that figuring out the answers to questions that will, and can, only be faced by me-who-has-studied-cosmology-for-a-century (or similar), can, and should, be left to me-who-has-studied-cosmology-for-a-century to figure out. Why should I, who exist now, need to have those answers?

I think that's totally fair, but in that case I think it makes more sense to say upfront "this conversation doesn't seem to be meaningful right now" or "for the time being I only base my moral system on things I'm quite confident of" or some such, rather that expressing particular opinions about the thought experiment.

Either you have opinions about the thought experiment, in which case you're making your best guess and/or preferred meta-strategy for reaching reflective equilibrium or some-such, or you're not, in which case why are you discussing it at all?

That sort of answer is indeed appropriate, but only contingent on this notion of “a version of me who has studied cosmology, etc., for a long time, and both has opinions on certain moral quandaries and also encounters such in practice”. If we set aside this notion, then I am free to have opinions about the thought experiment right now.

Sure, but bgaesop's "I don't believe" is disregarding the thought experiment, which is the part I'm responding to. (I'm somewhat confused right now how much you're speaking for yourself, and how much you're speaking on behalf of your model of bgaesop or people like him)

(I’m somewhat confused right now how much you’re speaking for yourself, and how much you’re speaking on behalf of your model of bgaesop or people like him)

The two are close enough for the present purposes.

Meanwhile, the point of the thought experiment is not for us to figure out the answer with any kind of definitiveness, but to tease out whether the thought experiment is exploring factors that should even be part of our model at all (the answer to which may well be no).

At the very least, you can have some sense of whether you value things that you are unlikely to directly interact with (and/or, how confused you are about that, or how confused you are about how reliably you can tell when you might interact with something)

I don't understand the Pascal's mugging objection. What is the mugging here? Why are they "my elephants"?

I am not trying to convince you of anything here, I feel honestly confused about this question, and this is a question that I have found useful to ask myself in order to clarify my thinking on this.

What would your response be to the other question I posed in the thread?

What is the mugging here?

I'm not sure what the other-galaxy-elephants mugging is, but my anti-Pascal's-mugging defenses are set to defend me against muggings I do not entirely understand. In real life, I think that the mugging is "and therefore it is immoral of you to eat chickens."

Why are they "my elephants"?

You're the one who made them up and/or is claiming they exist.

I am not claiming that they exist. I am asking you to consider what you would do in the hypothetical in which you are convinced that they exist.

Replace "you" with "the hypothetical you who is attempting to convince hypothetical me they exist", then

In the scenario I postulated elsethread, I specified that hypothetical you was exploring the problem by themselves, no external person involved, which I think most closely captures the thought experiment as it is intended.

When people consider it worse for a species to go from 1000 to 0 members, I think it's mostly due to aesthetic value (people value the existence of a species, independent of the individuals), and because of option value (we might eventually find a good reason to bring back the animals, or find the information in their genome important, and then it's important that a few remain). However, neither of these has much to do with the value of individual animals' experiences, which usually is what I think about when people talk about animals' "moral weight". People would probably also find it tragic for plants to go extinct (and do find languages going extinct tragic), despite these having no neurons at all. I think the distinction becomes more clear if we consider experiences instead of existence: to me, it's very counterintuitive to think that an elephant's suffering matters less if there are more elephants elsewhere in the world.

To be fair, scope insensitivity is a known bias (though you might dispute it being a bias, in these cases), so even if you account for aesthetic value and option value, you could probably get sublinear additivity out of most people's revealed preference. On reflection, I personally reject this for animals, though, for the same reasons that I reject it for humans.

When people consider it worse for a species to go from 1000 to 0 members, I think it's mostly due to aesthetic value (people value the existence of a species, independent of the individuals), and because of option value

Yes, these are among the reasons why moral value is not linearly additive. I agree.

People would probably also find it tragic for plants to go extinct (and do find languages going extinct tragic), despite these having no neurons at all.

Indeed, things other than neurons have value.

I personally reject this for animals, though, for the same reasons that I reject it for humans.

Really? You consider it to be equally bad for a plague to kill 100,000 humans in a world with a population of 100,000 as in a world with a population of 7,000,000,000?

Yes, these are among the reasons why moral value is not linearly additive. I agree.

I think the SSC post should only be construed as arguing about the value of individual animals' experiences, and that it intentionally ignores these other sources of values. I agree with the SSC post that it's useful to consider the value of individual animals' experiences (what I would call their 'moral weight') independently of the aesthetic value and the option value of the species that they belong to. Insofar as you agree that individual animals' experiences add up linearly, you don't disagree with the post. Insofar as you think that individual animals' experiences add up sub-linearly, I think you shouldn't use species' extinction as an example, since the aesthetic value and the option value are confounding factors.

Really? You consider it to be equally bad for a plague to kill 100,000 humans in a world with a population of 100,000 as in a world with a population of 7,000,000,000?

I consider it equally bad for the individual, dying humans, which is what I meant when I said that I reject scope insensitivity. However, the former plague will presumably eliminate the potential for humanity having a long future, and that will be the most relevant consideration in the scenario. (This will probably make the former scenario far worse, but you could add other details to the scenario that reversed that conclusion.)

Yes, these are among the reasons why moral value is not linearly additive.

The moral value can still be linearly additive, but additive over more variables than the ones you considered. For example, the existence of the species, and the existence of future members of the species.
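
Spelled out as a toy formalization (the particular terms here are just examples, not a claim about the full list of variables):

$$V_{\text{total}} = \sum_{i\,\in\,\text{current individuals}} v(i) \;+\; v_{\text{species}}\cdot\mathbb{1}[\text{species survives}] \;+\; \sum_{j\,\in\,\text{expected future individuals}} v(j),$$

which is still a linear sum; the 1,000-to-0 step just looks worse than the 1,001,000-to-1,000,000 step because it also zeroes out the last two terms.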

Yeah, I agree that at the very least this consideration shouldn't get swept under the rug.

[epistemic status: mulling over my moral intuitions in realtime, almost certainly will not endorse this upon further reflection]

When Many Worlds, Big Universe, or "Lots of Simulations" comes into the picture I become very confused about how to aggregate experiences.

But my current best guess intuitions are:

  • suffering adds up linearly (but if suffering exists in insects or bacteria or other Brian Tomasikian considerations like simple feedback loops, it doesn't make sense to deal with that until humanity has their collective shit in order, and can think real hard about it). It is always morally commendable to reduce unnecessary suffering, but not always morally obligatory.
  • positive experiences add up less linearly, and "variety of positive experiences" matters more.
    • I have some sense that the positive experiences of rats add up to a finite value, which I'd guess is greater than the value of a single human but less than some finite number of humans (whose positive experiences can converge on a higher finite number because there's a wider variety of experiences accessible to them)
    • By contrast, if bacteria had anything that I'd classify as a positive experience, I think it'd cap out at less valuable than one human.
  • there almost certainly exist beings whose capacity for both positive and negative experiences exceeds humanity's
  • some degree of contractualism matters, which includes acausal contracts I might wish I had made with simulators, in this universe or others. I want to have some kind of consistent contractual policy, in which simulators or aliens who are more morally valuable than me still have contractual motivation to treat me well (in return for which they might be treated better by other hypothetical simulators or aliens).
    • I think this implies making reasonable efforts to treat well beings that I think are less morally valuable than me (but where "reasonable" might mean "first get humanity to something approximating a post scarcity shit-together situation")

"Animal intelligence correlates with X"
But since what we care about is suffering and not intelligence, we're implying that more intelligence = more suffering, which if true would mean that hurting a child or a baby causes less suffering than hurting an adult.
Yet I would argue that the opposite seems to be true, given that intelligence allows us to cope with, contextualize, and understand our suffering better than a child can.

At best, there is no correlation between intelligence and suffering.
At worst, the opposite is true.

.... Is it possible to have nonlinear accumulation be associative?

Cause I believe morality would need to be associative to be sane, right?

Yeah, there are tons of associative functions. For example, f(x,y)=(x^k+y^k)^(1/k) is associative for any k ≠ 0 (on positive inputs), but linear only for k=1.
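
Checking that claim directly, for positive $x, y, z$ and any $k \neq 0$:

$$f(f(x,y),z) = \Big(\big((x^k+y^k)^{1/k}\big)^{k} + z^k\Big)^{1/k} = (x^k+y^k+z^k)^{1/k} = f(x,f(y,z)),$$

so the operation is associative (and commutative), but it coincides with ordinary addition only when $k=1$.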