Merged two comments into one:
This argument rests on foundations of moral realism, which I don't think is actually a coherent meta-ethical view.
Under an anti-realist worldview, it makes total sense that we would assign axiological value in a way which is centered around ourselves. We often choose to extend value to things which are similar to ourselves, based on notions of fairness, or notions that our axiological system should be simple or consistent. But even if we knew everything about human vs cow vs shrimp vs plant cognition, there's no way of doing that which is objectively "correct" or "incorrect", only different ways of assembling competing instincts about values into a final picture.
> Pain is bad because of how it feels. When I have a bad headache, and it feels bad, I don’t think “ah, this detracts from the welfare of a member of a sapient species.” No, I think it’s bad because it hurts.
I disagree with this point. If I actually focus on the sensation of severe pain, I notice that it's empty and has no inherent value. It's only when my brain relates the pain to other phenomena that it has some kind of value.
Secondly, even the claim that "pain" "feels" "like" "something" identifies the firing of neurons with the sensation of feeling in a way which is philosophically careless.
For an example which ties these points together, when you see something beautiful, it seems like the feeling of aesthetic appreciation is a primitive sensation, but this sensation and the associated value label that you give it only exist because of a bunch of other things.
A different example: currently my arms ache because I went to the gym yesterday, but this aching doesn't have any negative value to me, despite it "feeling" "bad".
Overall I don't think I can model your world-model very well. I think you believe in mind-stuff which obeys mental laws and is bound to physical objects by "psychophysical laws", which means that any physical object which trips some kind of brain-ish-ness threshold essentially gets ensouled by the psychophysical laws binding a bunch of mind-stuff to it, which also cause the atoms of that brain-thing to move around differently. Then the atoms can move in a certain way which causes the mind-stuff to experience qualia, which are kind of primitive in some sense and have inherent moral value.
I don't know what role you think the brain plays in all this. I assume it's some role, since the brain does a lot of work.
I think you think that the inherent moral value is in the mental laws, which means that any brain with mind-stuff attached has a kind of privileged access to moral reasoning, allowing it eventually to come to an objectively correct view on what is morally good vs bad. Or in other words, morality exists as a kind of convergent value system in all mind-stuff, which influences the brains that have mind-stuff bound to them to behave in a certain way.
Today I had a thought that fits something here (and the sort of 'extension' I thus propose in my other comment): we're always scared of paperclip maximizers, but if we look at the heaps of resources we throw at trying to increase our own welfare a tiny bit, compared to the scale of welfare improvements we could achieve in other humans or, arguably, in animals, we are already 99.9% clippy ourselves. I guess that's not an entirely new thought, of course, but I found it fits the main 'complaint' in the OP quite well.
> Now, the attitude that most people have is that pain doesn’t matter much unless it’s experienced by humans. When rats are poisoned to death, no one cares much. But this seems like a very untenable position.
I actually think very many people do not explicitly think exactly this. They'd say it's absolutely not okay to hurt a rat just for the sake of it. If they nevertheless almost perfectly ignore such pains, it is much as with the millions of humans starving or dying of cheaply curable diseases in Africa, many of whom could be saved for (by Western standards) trivial amounts of resources, and who tend to be mere "statistics" for us: animals, too, very quickly become pure statistics for us, or are simply, in a different way, not automatically at the fore of our minds. So while everything you write is imho more or less exactly true, it's not specifically a human/animal divide; we already fail to care in any meaningful way about suffering even when it's other humans. Yes, 'them' being animals is yet another factor that makes it easier for our minds to downgrade all others who are not direct kin, or cute eyes directly in front of us.
Crosspost of my blog article.
The Mormons say that God instructed Joseph Smith to have a bunch of hot, underage wives (they don’t usually phrase it that way). I’m skeptical. While it seems rather unlikely that God would be in favor of such an arrangement, it seems quite likely that Joseph Smith would be in favor of such an arrangement, and would wrongly attribute it to God. You should be suspicious when people’s judgments seem suspiciously convenient: when the judgments are a bit arbitrary, but it’s easy to see how they might have come to hold them mistakenly.
Similarly, if the Assyrians declare that they are God’s favorite people and that most of what God cares about is what happens to them, you should be a bit suspicious of that. It’s unlikely that God would pick a people and have it be the Assyrians, but it’s much more likely that the Assyrians would think they were God’s favorite group.
I have bad news: you people are like the Mormons and the hypothetical Assyrians.
There is a view that most modern people have which is suspiciously convenient in a similar way, and it is hugely relevant to how most people live. If people stopped believing it, their lives would be radically upended. The belief is that humans, morally, are the center of the universe. In other words, most of what matters morally is what happens to humans. Non-human animals, especially ones different from us biologically, barely matter at all.
That this belief is common should go without saying. People get outraged when you suggest that factory farming is much worse than human atrocities, even though it has about 8 million new victims every hour. Ordinary decision-making is driven entirely by the effects of our actions on other humans, and people get weirded out when you demonstrate any concern for wild animals.
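That hourly figure is easy to sanity-check. Here is a minimal back-of-the-envelope sketch, assuming the commonly cited estimate of roughly 70 billion farmed land animals slaughtered per year; the yearly total is an assumption, and counting farmed fish and shrimp would push it far higher:

```python
# Back-of-the-envelope check of the "8 million new victims every hour" figure.
# The ~70 billion/year number is an assumed, commonly cited estimate for
# farmed land animals; farmed fish and shrimp would add tens of billions more.
slaughtered_per_year = 70_000_000_000
hours_per_year = 365 * 24  # 8,760 hours
victims_per_hour = slaughtered_per_year / hours_per_year
print(f"~{victims_per_hour:,.0f} victims per hour")  # ~7,990,868, i.e. about 8 million
```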
People don’t even really care about slowly poisoning rats to death over the course of days, even though rats are quite smart. If a politician campaigned mainly on the welfare of non-human animals, no one would vote for them, even though that’s a lot more important than human welfare if we’re not the moral center of the universe.
But I think this belief is rather suspicious. It would be a bit too convenient if it turned out correct. It would be a surprise if humans really were nearly all of what mattered, but no surprise at all if humans falsely believed we were nearly all of what mattered.
Animals have been around for roughly 800 million years. On this view, for that entire period, nothing that happened mattered much. Despite animals suffering and dying in incomprehensible numbers—experiencing unfathomably massive quantities of aggregate suffering—nothing very important was happening. Things only started to matter when, about 300,000 years ago, humans hit the scene.
(Lo: a graph of the world’s moral value over time, flat for 800 million years and spiking only when humans arrive.)
That’s a very suspicious way for the graph to look. Nothing matters until we arrived? It’s not at all surprising that we think that. But it would be surprising if it were true. So there’s some reason for doubt.
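To put the two timescales side by side, a trivial sketch using only the figures quoted above:

```python
# What fraction of animal history includes humans, using the essay's figures.
animal_history_years = 800e6  # animals: roughly 800 million years
human_history_years = 300e3   # humans: about 300,000 years
fraction = human_history_years / animal_history_years
print(f"humans present for ~{fraction:.4%} of animal history")  # ~0.0375%
```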
Now, to be clear, I don’t think that it’s automatically suspicious that the universe’s value would go up a lot at some specific point. I think that too. Before there was conscious life, nothing very important happened in the universe. But that is because conscious life is a different kind of thing from non-conscious life. It’s quite a bit more suspicious if moral worth spikes around when we begin, despite our capacities being broadly continuous with other life.
This is especially true because humans have a long history of underestimating the moral importance of others. For much of history, people thought that only their own tribe mattered. Lots of people on Twitter with names like VitalismSigma69 still think that! Slavery and repression of women were historical universals. In just the last few hundred years, Americans shipped slaves from Africa to torment, enslave, and beat them. Our track record in taking seriously the moral importance of others leaves much to be desired.
All of this should make us very suspicious. Humans constantly underestimate the moral importance of those different from ourselves. And yet the common view is that this time, at last, we have drawn the circle of moral concern correctly, and it just so happens to enclose only us.
And there are many more biases that lead us to underestimate the moral importance of wild animals. One bias comes from the fact that there are a lot of them. Literally quintillions. Our brains are bad at grokking large numbers. People will pay the same amount to save 200,000 birds as 2,000. If you threatened to torture someone for a billion years, they’d be just as scared as if you threatened to torture them for a million years, even though the first is 1,000 times worse! So if our brains can’t even grasp the moral importance of 2,000 instances of something, what hope do we have for quintillions of instances?
All of this is to say that we should actively distrust our intuitions about the moral importance of animals. It would be very surprising if those intuitions were correct. Any argument for us being the moral center of the universe that depends on intuitions should be regarded as, to use the technical term, extremely sus. If your argument against animals mattering much depends on the idea that beings who aren’t that smart, for instance, can’t matter that much, then you should take very seriously the possibility that such a hypothesis is rationalized moral error.
You should be suspicious of beliefs if those beliefs are what a third party would expect you to believe even if they weren’t true. If you conclude that the morally best action is whichever action you independently wanted to take, that’s a bit suspicious. So we should all be suspicious of the judgment that we are the moral centers of the universe. It is exactly the sort of thing that we’d be expected to believe even if it was false. You should be suspicious when Big Tobacco tells you that smoking is great for your health. In this case, you are Big Tobacco.
Then there is the further problem that the arguments against us being the center of the moral universe are very strong. For example, here is a judgment I hold: suffering is a bad thing. It is bad to hurt. I believe this because I have hurt before, and when I did, it seemed obvious it was bad. I recently banged my knee and was given a new chance to test the “hurting is bad” hypothesis. And yep, it was.
But if hurting is bad, then we are not the center of the moral universe. Because there are hundreds of thousands of times more fish than us—and fish can hurt, plausibly very intensely. There are 10,000 times more amphibians than us. If hurting is among the things that matter, the endless cries of the amphibians certainly drown out our own. And don’t even get me started on the insects.
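For a sense of the orders of magnitude, here is a minimal sketch; the population figures are assumptions drawn from commonly cited ranges, and wild-animal counts are uncertain by factors of ten or more:

```python
# Rough orders of magnitude; every population figure below is an assumed estimate.
humans = 8e9        # ~8 billion people
wild_fish = 1e15    # estimates commonly span ~10^13 to ~10^15 individuals
amphibians = 8e13   # illustrative figure consistent with the ~10,000x claim
print(f"fish per human:       ~{wild_fish / humans:,.0f}")   # ~125,000
print(f"amphibians per human: ~{amphibians / humans:,.0f}")  # ~10,000
```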
Whenever I make the point that pain is a bad thing, lots of people get very confused and attribute to me claims I don’t hold.
My claim, then, is very modest. It’s simply that barring exceptional circumstances, hurting a lot is a bad thing. It is bad to feel bad. The experience you have when you have a really bad headache, are very hungry, or get punched in the face is intrinsically unfortunate. All else equal, the world is worse if there’s more of it.
The argument, in a nutshell, is this: suffering is bad; animals can suffer, and in aggregate they suffer vastly more than humans do; so, if suffering counts, most of what matters morally is not human.
Pain isn’t the only thing along these lines. I think pleasure is good too, and animals also experience very large quantities of pleasure. If any of this is correct then we are not the center of the moral universe.
Now, the attitude that most people have is that pain doesn’t matter much unless it’s experienced by humans. When rats are poisoned to death, no one cares much. But this seems like a very untenable position.
Pain is bad because of how it feels. When I have a bad headache, and it feels bad, I don’t think “ah, this detracts from the welfare of a member of a sapient species.” No, I think it’s bad because it hurts. So if animals can hurt too—if they, in the aggregate, hurt millions of times more than us—then they matter. And given that there is very compelling scientific evidence that animals feel pain and can hurt like us, what happens to them matters a great deal. Even simple creatures can plausibly feel intense pain.
There are further challenges for the idea that humans matter vastly more in the aggregate than non-humans. One such challenge is the argument from marginal cases. It’s very hard to see what it is about animals that’s supposed to mean they matter vastly less than people. Is it their intelligence? There are people as unintelligent as animals (babies and some severely mentally disabled people).
Is it being biologically human? Well, if we discovered that a certain class of beings believed to be people weren’t biologically human (e.g. Pacific islanders), they’d still matter. Is it being either biologically human or smart? Well this would imply that if we learned some mentally disabled people weren’t biologically human, they’d cease mattering at all. That’s ridiculous. Your moral worth isn’t determined by your biological make-up.
All of this is to say that I think it’s very hard to maintain that we are the center of the moral universe. The arguments in its favor are suspiciously convenient, and rather weak. The arguments against are simple and powerful, appealing to principles that would be taken as obvious in any other setting. When you have to deny that extreme pain is a bad thing, and your only argument is that its badness would imply that your group—the group you’re biologically primed to favor—is less important than you naturally think, you should seriously reconsider.
What should we take away from the falsity of the “center of the moral universe” hypothesis? Well, we should start caring a lot about what happens to non-human animals. We should care about the fish starving and dying. We should care about the insects and shrimp being eaten alive and dying of disease—both on farms and in the wild. And we should be concerned about making sure that the future isn’t just good for people but is good for all conscious creatures.
Right now, I’m not sure that we’re on track to do that. Most people would be happy spreading wild animal suffering to the stars. When people envision the future, few care about making it go well for wild animals, because we don’t count their interests. But almost every conscious creature is a wild animal; to ignore them is to ignore almost everyone.
And the future could have very large numbers of digital minds. Digital minds are potentially much easier to create than biological organisms (if, indeed, they are possible). For this reason, in expectation, almost every conscious being in world history will be digital. If we don’t count their interests, we would err terribly. We would fail to count most of those who matter.
The most important insight—an insight which would make me very optimistic about the future if everyone internalized it—is that every conscious being matters. If you can feel, if there’s something it’s like to be you, then it matters how well your life goes. The possible neglect of that insight could cause hideous suffering on an intergalactic scale. Our prejudice against the non-human could be our most consequential moral blunder.