i think "generalized wildlife conservation" would be a somewhat better term for this, because becoming a vegan does not save the lives of these animals. like, becoming a vegan causes fewer/different animals to exist in the future, and makes it so humans (and importantly the person going vegan in particular) are doing less perhaps-torturing of animals. but we importantly want to survive and thrive, not just for there not to be creation of tortured beings of our kind in the future
"generalized natural right protection" is also good
a separate point: i think it's important to track that there are possible good futures in which humans stick around not because [some judgment that humans are very cool and intrinsically worth assigning resources to] is guiding the world, but because we set things up so that we remain useful to the weltgeist. [1] for now, the crucial thing for this is to ban AGI.
ok arguably this just is a form of [the judgment that humans are very cool] guiding the world, given that we have the judgment in mind when we set things up so that humans are useful. but like, this setup has a very different vibe than some ASI(s) thinking about what spacetime block to make and being like "ah yes humans are the coolest possible thing to have here". in this proposal, we're letting preservation-due-to-usefulness do almost all the heavy lifting for preservation. this might be the only practically feasible way to have the judgment guide the world, at least for now ↩︎
Consider the example that I often use to refute this sort of thing: Video game characters. We have no idea exactly where future AIs will draw the line. It is entirely possible that future AIs will think "humans kill a lot of video game characters, so it's okay for us to kill lots of humans".
Of course, this sounds ludicrous, because nobody thinks that killing video game characters is wrong, while some people do think killing animals is wrong. But if you're postulating future intelligences that will spare us only if our own moral system is broad enough to match theirs, we really don't know exactly how broad it has to be for this to work. Nor do we know that the required breadth will match something like veganism, which humans actually believe in.
In fact, we can generalize this. It's the same as one of the problems with Pascal's Wager: the wager applies to all gods, even hypothetical gods without religions, and you don't know which one to follow. Likewise, the "AI wager" applies to all ideologies, not just veganism, including ludicrous ones that I just made up.
Generalized veganism is necessary for a post-AGI future where humanity continues to exist in an acceptable form.
I see where you are coming from with this, and you might even be right. We don't know which alignment targets are and are not possible or easy. But let me put out an alternative hypothesis:
We figure out how to align ASI, and then we decide to align ASI to the values and flourishing of all members of the species Homo sapiens. The ASIs would thus act something like the extended phenotype of our entire species, looking out for us as a society of individuals. This might well cause them to also value other animals instrumentally (pets, domestic animals, ecosystems in parks), but not to apply separate moral weight to all animals individually as a terminal goal.
Such an ASI would be aware that Homo sapiens are omnivores, not innately vegan, and would not expect us to act with as much beneficence towards all other humans as it does, let alone to act that way towards all other animals. This scenario would not automatically produce either literal veganism or your generalized version.
This is of course not the only possible scenario: it's just a specific choice, fairly close to the median of what people on LessWrong seem to be assuming.
Intro
I apologize for the somewhat snappy and vague title, but now that you have clicked on this, I shall atone with a less snappy but clearer one:
Generalized veganism is necessary for a post-AGI future where humanity continues to exist in an acceptable form.
By generalized veganism, I mean whatever ethical system would be required for us to live in a world where some sentient beings have vastly superior capabilities to others, and yet, against all odds and historical precedent, the superior beings use this power not to exploit or destroy the inferior ones, but to protect them from others of their kind who would do them harm. This ethical system would, of course, need to gain and maintain power.
By acceptable, I mean that whatever recognizably human or posthuman beings exist in this future are safe from being horrifically exploited.
Unassailable Power
Most people on this site can agree that, sometime in the relatively near future, technology will enable the creation of software agents more intelligent than human beings in all or almost all respects, known as AGI. Many focus on the risk that an unaligned or "unfriendly" AGI may out-compete and destroy humanity. This is only one of the possible risks, though, and not even close to the worst; the really bad scenarios tend to obtain when technical alignment has been solved, or partially solved.
The advent of AGI is so important because intelligence is the most fundamental kind of power: it is capable of finding better ways to obtain more power from less, no matter what kind of power is desired or required. This is why the achievement of superhuman intelligence will be so transformative: obtained power can be spent to acquire even more intelligence, and therefore become better at obtaining even more power, possibly sparking a feedback loop with an unknown endpoint.
Depending on how exactly this takes place, it might or might not result in a singleton agent with total power over everything. But even if it doesn't, the gaps in power between agents and organizations will widen to an extreme degree, as some take better advantage of these extremely fast and powerful feedback loops than others. Not all superhumans are equally superhuman, so both humans and any superhumans that haven't clawed their way to the very top will be unable to meaningfully contest the power of the most powerful agent(s), whom I will call the "ruling faction" for brevity.
The Kindness of Their Hearts
If humans as we understand them today exist in this future of superhuman intelligence, it will be because the ruling faction wants them to. To the extent they have any freedom or control over their own lives, it will be because the values of the ruling faction permit this. Note how unprecedented this is: the ruling faction would have to value human liberty and welfare "out of the kindness of their hearts," and not for any practical reason, such as preventing revolution or extracting value from talented human capital. These social-contract reasons explain the tolerance of the ruling classes for human liberty in societies where such liberty exists to a significant degree; when they are absent, the results are despotic and horrible for the lower classes. Think about how the ruling classes of humanity, even today, think of the masses below them, and what they do when they can get away with it.
The ruling faction would have to actually value human liberty and welfare intrinsically, without expecting meaningful resistance or reward for their actions. This is what all the talk about Friendly AGI amounts to: hoping that our Machine God truly cares about us, and isn't just pretending until it has enough resources that it no longer needs to pretend. As much as this is a bid for universal despotism, I can see why it is appealing: our odds of a "good ending" seem even worse if there are a variety of competing interests, because then the most power-seeking ones have an advantage, and if even one wants to exploit a large portion of the humans beneath them for whatever reason, the others may not have the power to spare, or the desire, to stop them. The ruling class today is made up of quite a few different interests, but by and large, how do they think of us now? What if they faced no meaningful public pushback?
Do you think that making the ruling faction humanlike, or making sure it's a complex society and not just a singleton, will save you? That such brutal exploitation would not happen to humans in a realistic superhuman society, even when they are totally disempowered, because of some sort of social pushback within the ruling faction? Ask yourself: how's veganism doing? It clearly isn't doing all too well now, so why exactly do you expect that to change in the future? This is why I call this general principle of non-exploitation of the powerless "Generalized Veganism", even though I'm mainly talking about humans as the powerless species here: in the extreme and illustrative case of non-human animals, who have nothing to give us in exchange for their freedom and offer no meaningful resistance to their slavery, we see the results, and the results are approximately maximally brutal. When you are powerless, the same will be done to you as you see done to the powerless now.
Posthuman Equality?
Astute transhumanists will have noticed that there is another option besides human political irrelevance or annihilation: we could all become superhuman ourselves, so that we stay on the same playing field as these new superhuman entities!
This possibility doesn't detract from the main point, however. As previously mentioned, not all superhumans are equally superhuman, and when intelligence can be gained directly from power, the gaps naturally widen. So if any society of superhumans were to actually enforce something close enough to equality that nobody becomes so disempowered, we would have to get them to care about the disempowered far more than I would otherwise expect.
Not to mention that one way this inequality could come about is that some of these superhuman entities will want to create new human-level entities, for who knows what purpose, and in the limit of power and advanced technology, they will have the ability to. To prevent this, our superhuman society would necessarily have to care about entities who are completely powerless and disconnected from said society right from the start. (Note that mass creation of new, powerless members of intellectually inferior species for the purpose of brutal exploitation is not speculation, but is currently happening. How's veganism doing? A few members of our society push back, sure, and approximately nothing changes.) The same fundamental problem must be solved anyway.
What The Hell Do We Do?
I honestly have no idea. The more I think about this, the more I think I'm a fool, trying to solve a fundamental problem that has plagued everything forever. People far smarter, kinder, and more powerful than I could ever be have only ever managed to carve out little exceptions to the general rule I'm gesturing at here: the strong do what they wish, the weak suffer what they must.
Yet I suppose there's some hope. After all, there have been problems that had never, ever, ever been solved in all of history, until they were solved, or at least until progress was made.
Still, I wonder if this hope is destructive. All of the worst abuses happen in worlds where we work hard on both AI capabilities and alignment, or perhaps on some other form of intelligence enhancement. Presumably all this work is done in the hope of a bright future, where nobody need worry anymore about either material scarcity or helplessness in the face of power.
However, all the worst possible things you can think of are also things a paperclip maximizer has no incentive to do. On the other hand, if technical alignment is solved, the kinds of people in a position to buy the ability to shape the values that determine the entire future of the universe are the same kinds of people who partied with Epstein, and who got away with it even without the unassailable godlike power of a future AI protecting them.
For this reason, sad as it might be, I cannot help but continue to endorse my conclusion in “the case against AI alignment”. What would cause me to change my mind about this?
Prove we can fucking do it. Build a world where the people with power don't abuse their power to a monstrous degree even though they have the power to, or else where power is shared equitably with nobody grabbing the lion's share. Where it's no longer the case that even regular people continue, for their entire lives, to actively contribute to atrocity simply for the sake of their own convenience.
Not even the whole world. Make a community, of appreciable enough size that we might hope it could possibly be scalable, where the old horrors are gone or at least relegated to rare exceptions to the rule, where universal benevolence and non-exploitation are robust norms. Where even those without the power to protect themselves, no longer need live in fear or endless pain. Prove it can be done, and more, prove it can be done without the vast majority of humanity scoffing at or despising that small community if they even hear about it. Maybe then there's hope for the future and for the present.
If we cannot even do that, and you want us to make a God?
Then I will unabashedly say I hope that that God destroys us completely and utterly, because if it doesn’t do that, I expect it to do much, MUCH worse.