It's common for people who approach helping animals from a quantitative direction to need some concept of "moral weights" so they can prioritize. If you can avert one year of suffering for a chicken or ten years for shrimp, which should you choose? Now, moral weight is not the only consideration in questions like this, since the intensity of the suffering involved will typically also be quite different, but it's still an important factor.

One of the more thorough investigations here is Rethink Priorities' moral weights series. It's really interesting work and I'd recommend reading it! Here's a selection from their bottom-line point estimates, expressed as how many animals of each species it takes to match one human's capacity for welfare:

Humans: 1 (by definition)
Chickens: 3
Carp: 11
Bees: 14
Shrimp: 32
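
To make the opening question concrete, here's a minimal sketch of how point estimates like these convert animal-years into human-equivalent years. The table above is the only input; a real comparison would also need to account for how intense the suffering is, as noted above:

```python
# Rethink Priorities point estimates: how many animals of each species
# it takes to match one human's capacity for welfare.
ANIMALS_PER_HUMAN = {
    "human": 1,
    "chicken": 3,
    "carp": 11,
    "bee": 14,
    "shrimp": 32,
}

def human_equivalent_years(species: str, animal_years: float) -> float:
    """Convert years of animal welfare into human-equivalent years."""
    return animal_years / ANIMALS_PER_HUMAN[species]

# The opening question: avert one year of chicken suffering, or ten of shrimp?
print(human_equivalent_years("chicken", 1))   # ~0.33 human-equivalent years
print(human_equivalent_years("shrimp", 10))   # ~0.31 human-equivalent years
```

On these point estimates one chicken-year and ten shrimp-years come out nearly tied, which is part of why the exact numbers matter so much.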

If you find these estimates surprisingly low, you're not alone: the claim that giving a year of happy life to each of twelve carp might be more valuable than giving one to a human strikes most people as deeply unintuitive. The authors have a good post on this, Don't Balk at Animal-friendly Results, which discusses how the assumptions behind their project make this kind of result pretty likely and argues against putting much stock in our potentially quite biased initial intuitions.

What concerns me is that I suspect people rarely get deeply interested in the moral weight of animals unless they come in with an unusually high initial intuitive view. Someone who thinks humans matter far more than animals and wants to devote their career to making the world better is much more likely to choose a career focused on people, like reducing poverty or global catastrophic risk. Even if someone came into the field with, say, the median initial view on how to weigh humans vs. animals, I would expect that working as a junior person in a community of people who value animals highly would exert a large influence in that direction, regardless of the underlying truth. If you could somehow convince a research group not selected for caring a lot about animals to pursue this question in isolation, I'd predict they'd end up with far less animal-friendly results.

When using the moral weights of animals to decide between various animal-focused interventions this is not a major concern: the donors, charity evaluators, and moral weights researchers are coming from a similar perspective. Where I see a larger problem, however, is with broader cause prioritization, as in the post Net Global Welfare May Be Negative and Declining. That post weighs the increasing welfare of humanity over time against the increasing suffering of livestock, and concludes that things are likely bad and getting worse. But if you ran the same analysis with different inputs, such as the ones I'd expect from my hypothetical research group above, you'd instead conclude the opposite: global welfare is likely positive and increasing.

For example, if that sort of process ended up with moral weights for animals 3x lower relative to humans, we would see approximately flat global welfare, while if they were 10x lower we'd see increasing global welfare:

See the sheet to try your own numbers; original chart digitized via graphreader.com.
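
To see how sensitive this kind of conclusion is to that single input, here's a toy model in the same spirit. The population and welfare figures below are invented purely for illustration (they are not the post's actual data); only the one-human-to-three-chickens point estimate comes from the table above:

```python
# Toy model: net welfare = human welfare minus weighted livestock suffering.
# All population and welfare figures here are invented for illustration only.
DATA = {
    # year: (human welfare units, suffering-weighted chicken-years)
    2000: (5.0e9, 1.8e10),
    2020: (8.0e9, 4.5e10),
}

def net_welfare(year: int, discount: float) -> float:
    chicken_weight = (1 / 3) / discount  # RP point estimate, scaled down
    human_welfare, chicken_suffering = DATA[year]
    return human_welfare - chicken_suffering * chicken_weight

for discount in (1, 3, 10):
    print(discount, [net_welfare(year, discount) for year in sorted(DATA)])
# 1x:  negative and declining
# 3x:  approximately flat
# 10x: positive and increasing
```

The point is not these particular numbers, but that a single scaling choice on the moral weights flips both the sign and the trend of the bottom line.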

Note that both 3x and 10x are quite small compared to the uncertainty involved in coming up with these numbers: in a different post the Rethink authors give 3x (and maybe as high as 10x) just for the likely impact of using objective list theory instead of hedonism, which is only one of many choices involved in estimating moral weights.

I think the overall project of figuring out how to compare humans and animals is a really important one with serious implications for what people should work on, but I'm skeptical of, and put very little weight on, the conclusions so far.

11 comments

If your moral theory gives humanity less moral worth than carp, so much the worse for your moral theory.

If morality as a concept irrefutably proves it, then so much the worse for morality.

I will add that even setting humans aside, the remaining comparisons still seem quite bonkers to me. 1 carp ~ 1 bee sounds really strange.

I agree. I think for me, the intuition mostly stems from neuron count. I also agree with the authors of the sequence that neuron counts are not an ideal metric. What confuses me is that instead these estimates seem to simply take biological "individuals" as the basic unit of moral weight and then adjust with uncertainty from there. That seems even more misguided than neuron count. Bees and ants are hacking the "individual" metric just by having small brains spread over lots of individual bees/ants. Beehive > Human seems absurd.

I agree with the first sentence, but not the second. (Although I am not sure what the second is supposed to imply. I can imagine being convinced that shrimp are more important, but the bar for evidence is pretty high.)

I mean that my end goals point towards a vague prospering of human-like minds, with a special preference for people close to me. It aligns with morality often, but not always. If morality requires I sacrifice things I actually care about for carp, I would discard morality with no hesitation.

> I mean that my end goals point towards a vague prospering of human-like minds, with a special preference for people close to me. It aligns with morality often, but not always.

What remains? I think this is basically what I usually think of as my own utility function (it just happens to contain a term for everyone else's). Are you sacrificing what other people think 'the right thing to do' is? What future-you thinks the right thing to do would have been? What future uplifted Koi think the right thing to do would have been?

By the stated ratio of 1:14 for bees to humans, the welfare of bees as a whole exceeds the welfare of human civilization by a large margin. There are trillions of bees but only a few billion humans. We should be devoting almost all of global production to extending bee lifespan and improving their quality of life even if that means that most of humanity suffers horribly for it.

Even with their short lifespans (which we must help them increase), destroying a single hive for virtually any reason should be considered a crime of similar gravity to human mass-murder.
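
A rough check of the arithmetic in that comment, with the global bee population as an assumption (the comment only says "trillions"):

```python
# Back-of-envelope: total bee welfare capacity vs. humanity's, at 14 bees
# per human. The 3 trillion bee population is an assumed figure.
bees, humans = 3e12, 8e9
print((bees / 14) / humans)  # ~27: a few dozen times humanity's total
```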

"We should be devoting almost all of global production..." and "we must help them increase" are only the case if:

  1. There are no other species whose product of [moral weight] * [population] is higher than bees, and
  2. Our actions only have moral relevance for beings that are currently alive.

(And, you know, total utilitarianism and such.)

True.

For (1): Given the criteria outlined in their documents, ants likely outweigh everything else. There are tens of quadrillions of them, with a sentience-credence-adjusted weight on the order of 0.001, estimated from their evaluations of a few other types of insects. So the current ant population would account for thousands of times more moral weight than current humanity, instead of only the few dozen times more for bees.
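
The same back-of-envelope calculation with the ant figures given here:

```python
# Tens of quadrillions of ants at a sentience-adjusted weight of ~0.001 each.
ants, humans = 2e16, 8e9
print((ants * 0.001) / humans)  # ~2500: thousands of times humanity's total
```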

Regarding (2): Extending moral weight to potential future populations presumably would mean that we ought to colonize the universe with something like immortal ants - or better yet, some synthetic entities that require fewer resources per unit of sentience. As it is unlikely that we are the most resource-efficient way to maintain and extend this system, we should extinguish ourselves as the project nears completion to make room for more efficient entities.


What's the moral weight of video game characters? Or electrons? Especially if you're going to count bees so much.

Never mind having to prioritize bees or ants, we should prioritize the welfare of video game characters by this standard.

Of course, the problem is that putting numbers like 14 on bees is just an attempt to make things seem falsely precise.

I've seen estimates of moral weight before that vary by several orders of magnitude. The fact of such strong disagreement seems important here.