[This is posted mostly as some thoughts I wanted to check. I apologize that it's messy and incomplete; I needed to get something out there.]
This post explains why I think the probability of AGI killing everyone in the next few decades is very low, or at least much lower than Yudkowsky argues.
1.1 By Far, Most Progress Toward AGI Comes from LLMs
LLMs have offered such a huge boost toward AGI because they bootstrap the ability to reason by mimicking human reasoning.
Achieving this level of intelligence, or even general knowledge, through RL seems hardly more plausible now than at the inception of RL. Impressive progress has been made (DeepMind's AdA learning...