One of the categories is "They Will Need Us" - claims that AI poses no big risk because it will always need something that humans have, and will therefore preserve us.
I claim something like this. Specifically, I claim that a broad range of superintelligences will preserve their history and run historical simulations in order to understand the world. Many possible superintelligences will study their own origins intensively, to better anticipate the forms of aliens they might encounter in the future. So humans are likely to be preserved because superintelligences need us instrumentally - as objects of study.
This applies even to (e.g.) gold-atom maximisers with no shred of human values. I don't claim it for all superintelligences, though - nor even for 99% of those likely to be built.
I agree with this, but the instrumental scientific motivation to predict hostile aliens that might be encountered in space:
1) doesn't protect quality-of-life or lifespan for the simulations, brains-in-vats, and Truman Show inhabitants; indeed, it suggests historically poor QOL levels and short lifespans;
2) seems likely to consume only a tiny portion of all resources available to an interstellar civilization, in light of diminishing returns.
As Luke mentioned, I am in the process of writing "Responses to Catastrophic AGI Risk": a journal-bound summary of the AI risk problem and a taxonomy of the societal proposals (e.g. denial of the risk, no action, legal and economic controls, differential technological development) and AI design proposals (e.g. AI confinement, chaining, Oracle AI, FAI) that have been made.
One of the categories is "They Will Need Us" - claims that AI poses no big risk because it will always need something that humans have, and will therefore preserve us. Currently this section is pretty empty.
But I'm certain that I've heard this claim made more often than in just those two sources. Does anyone remember having seen such arguments somewhere else? While "academically reputable" sources (papers, books) are preferred, blog posts and websites are fine as well.
Note that this claim is distinct from the claim that (due to general economic theory) it is more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument; what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.