Do you know of any sources for this? In my also non-rigorous experience this is a totally unfounded misperception of veg*nism that people seem to have, founded on nothing but a few quack websites/anti-science blogs.
Consider for instance /r/vegan over at reddit, which is in fact overwhelmingly pro-GMO and ethics-focused rather than health-focused. Of course, it is certainly true that the demographics of reddit or that subreddit are much different from those of veg*ns as a whole (or people as a whole). Lesswrong is an even more extreme case of such a limited demographic.
Yes, I guess I was operating under the assumption that we would not be able to constrain the ethics of a sufficiently advanced AI at all by simple programming methods.
Though I've spent an extraordinarily large amount of time lurking on this and similar sites, upon reflection I'm probably not the best-positioned person to carry out a debate about how the hypothetical values of an AI might depend on ours. And indeed this would not be my primary justification for avoiding nonhuman suffering. I still think its avoidance is an incredibly important and effective meme to propagate culturally.
After significant reflection, what I'm trying to say is that I think it is obvious that non-human animals experience suffering and that this suffering carries moral weight (we would call most modern farming conditions torture, and use other related terms, if the same methods were applied to humans).
Furthermore, there are a lot of edge cases of humanity where people can't learn mathematics or are otherwise substantially less smart than some non-human animals (the young, if future potential doesn't matter that much; or the very old, the mentally disabled, people in comas, etc.). I would prefer to live in a world where an AI thinks beings that do suffer but aren't necessarily sufficiently smart matter in general. I would also rather the people designing said AIs agree with this.
Perhaps it should. Being vegan covers all these bases except machines/AIs, which arguably (including by me) also ought to hold some non-negligible moral weight.
Vegans as a general category don't unnecessarily harm and certainly don't eat insects either. I'm not just focused on the diet actually.
Come to think of it, what are we even arguing about at this point? I didn't understand your emoticon there and got thrown off by it.
Because other animals are also sentient beings capable of feeling pain. Other multicellular beings aren't in general.
I don't see why. Jainism is far from the only philosophy associated with veganism.
Perhaps, but consider the radical flank effect: https://en.wikipedia.org/wiki/Radical_flank_effect
Encouraging the desired end goal, the total cessation of meat consumption, may be more effective than merely encouraging reduction, even in the short-to-medium run (and certainly in the long run), by moving the middle.
Do you think that animals can suffer?
Or, put another way: what evolutionary difference do you think gives rise to a difference in the capacity for conscious experience between humans and other animals with largely similar central nervous systems/brains?
Perhaps this is true if the AI is supremely intelligent, but if the AI is only an order of magnitude more intelligent than us, or better by some other metric, the way we treat animals could be significant.
More relevantly, if an AI is learning anything at all about morality from us, or from the people programming it, I think it is extremely wise that the relevant individuals be vegan for these reasons (better safe than sorry). Essentially, I argue there is a very significant chance that the way we treat other animals will be relevant to how an AI treats us (better treatment of animals corresponding to better later outcomes for us).