I like the story, but (spoilers):
Surely the central premise is not true in our world? Many animals are kind to non-kin, and this seems to continue alongside increased animal intelligence. I don't see why the default path to higher intelligence would not look like Homo sapiens, where the initial ancestors capable of forming a society are "stupid" and don't optimize completely for genetic fitness, allowing pro-social patterns to take hold.
I think that would require text comprehension too. I guess it's an interesting question whether you can build an AI that can comprehend text but not produce it?
Rather less useful to me personally as a software developer.
Besides that, I feel like this question is maybe misleading? If ex. Google built a new search engine that could answer queries like its current AI-powered search summaries, or like ChatGPT, wouldn't that have to be some kind of language model anyway? Is there another class of thing besides AGI that could perform as well at that task?
(I assume you're not suggesting just changing the pricing model of existing-style search engines; that approach already had a market experiment (ex. Kagi) some years ago with only mild success.)
No, although if the "juicy beings" are only unfeeling bugs, that might not be as bad as it intuitively sounds.
There's a wrinkle to my posts here where partly I'm expressing my own position (which I stated elsewhere as "I'd want human-like sapients to be included. (rough proxy: beings that would fit well in Star Trek's Federation ought to qualify)") and partly I'm steelmanning the OP's position, which I've interpreted as "all beings are primary sources of values for the CEV".
In terms of how various preferences involving harming other beings could be reconciled into a CEV: yeah, it might not be possible. Maybe the harmed beings are simulated/fake somehow? Maybe animals don't really have preferences about reality vs. VR, and every species ends up in its own VR world...
Ah, if your position is "we should only have humans as primary sources of values in the CEV because that is the only workable Schelling point", then I think that's very reasonable. My position is simply that, morally, that Schelling point is not what I'd want. I'd want human-like sapients to be included. (rough proxy: beings that would fit well in Star Trek's Federation ought to qualify)
But of course you'd say it doesn't matter what I (or vegan EAs) want, because that's not the Schelling point and we don't have a right to impose our values, which is a fair argument.
I admit:
But I think the concept has staying power because it points to a practical idea of "the AI acts in a way such that most humans think it mostly shares their core values".[1] LLMs already aren't far from this bar with their day-to-day behavior, so it doesn't seem obviously impossible.
To go back to agreeing with you, yes, adding new types of beings as primary sources of values to the CEV would introduce far more conflicting sets of preferences, maybe to the point that trying to combine them would be totally incoherent. (predator vs. prey conflicts, parasites, species competing for the same niche, etc.) That's a strong objection to the "all beings everywhere" idea. It'd certainly be simpler to enforce human preferences on animals.
I think of this as meaning the AI isn't enforcing niche values ("everyone now has to wear Mormon undergarments in order to save their eternal soul"), isn't taking obviously horrible actions ("time to unleash the Terminators!"), and is taking some obviously good actions ("I will save the life of this 3-year-old with cancer"). Obviously it would have to be neutral on a lot of things, but there's quite a lot most humans have in common.
No, I'm saying it might be too late at that point. The moral question is "who gets to have their CEV implemented?" OP is saying it shouldn't be only humans, it should be "all beings everywhere". If we implement an AI on Humanity's CEV, then the only way other sapient beings would get primary consideration for their values (rather than secondary consideration, where they matter only because Humanity has decided to care about them) would be if Humanity's CEV allows other beings to be elevated to primary value sources alongside Humanity. That's possible, I think, but not guaranteed, and EAs concerned with ex. factory farming are well within their rights to be concerned that those animals are not going to be saved any time soon under a Humanity's-CEV-implementing AI.
Now, arguably they don't have a right as a minority viewpoint to control the value sources for the one CEV the world gets, but obviously from their perspective they want to prevent a moral catastrophe by including animals as primary sources of CEV values from the start.
Edit: confusion clarified in comment chain here.
I think you've misunderstood what I said? I agree that a human CEV would accord some moral status to animals, maybe even a lot of moral status. What I'm talking about is "primary sources of values" for the CEV, or rather, what population is the AI implementing the Coherent Extrapolated Volition of? Normally we assume it's humanity, but OP is essentially proposing that the CEV be for "all beings everywhere", including animals/aliens/AIs/plants/whatever.
I agree that in terms of game theory you're right, no need to include non-humans as primary sources of values for the CEV. (barring some scenarios where we have powerful AIs that aren't part of the eventual singleton/swarm implementing the CEV)
But I think the moral question is still worthwhile?
Spoiler block was not supposed to be empty, sorry. It's fixed now. I was using the Markdown spoiler formatting and there was some kind of bug with it, I think; I reported it to the LW admins last night. (also fwiw I took the opportunity now to expand on my original spoilered comment more)