I'm a trained rationalist, and everything I've previously read about AI being an existential risk was bullshit.
But I know the LessWrong community (which I respect) is involved in AI risk.
So where can I find a concise, exhaustive list of the sound arguments for and against AGI likely being an existential risk?
If no such curated list exists, do people really care about the potential issue?
I would like to update my beliefs about the risk.
But I suspect that most people talking about AGI risk don't have enough knowledge about what technically constitutes an AGI.