I would like to compile a list of AI doom scenarios that most people (especially politicians) will understand and will agree are realistic and fact-based. A few examples:
What are some other such scenarios?
It's worth realizing that a lot of "this doesn't seem like a problem" reaction from politicians and "the public" is actually cover for "I don't want this to be a problem, and I see lots of visible and immediate harm from proposed solutions". That second part is the True Objection - without a relatively painless and/or near-guaranteed solution, there's no incentive to acknowledge the problem.
I think https://scottaaronson.blog/?p=7266 might be a good overview. It is kind of pointless to focus on specific ways everyone dies, since it is easy to argue with each specific one. The whole point is that Doom is disjunctive. Like trying to dam or plug a single channel of a river's delta, there will be another stream flowing somewhat differently to flood the basin, and the process is adversarial and anti-inductive.
I don't think it is pointless to focus on specific ways everyone dies, unless there is a single strategy that addresses every possible way everyone dies.
If FOOM isn't likely but something like this is, it seems really unlikely to me that the optimal approach is to "continue to focus on strategies that rely on a single agent having a high level of control over the world". More accurately, it's probably still a good idea to have some people working on that, but not all of them.
I just started a writing contest for detailed scenarios on how we get from our current situation to AI ending the world. I want to compile the results on a website so we have an easily shareable link with more scenarios than can be dismissed ad hoc. Individual scenarios taken from a huge list are easy to argue against, which discredits the list, but a critical mass of them presented at once defeats this effect. If anyone has good examples, I'll add them to the website.