I am largely convinced that p(doom) is exceedingly high if there is an intelligence explosion, but I'm somewhat unconvinced about the likelihood of ASI sometime soon.
Reading this, the most salient line of thinking to me is the following:
If we assume that ASI is possible at all, how many innovations as significant as transformers do we need to get there? Eliezer guesses '0 to 2', which seems reasonable. I have minimal basis to make any other estimate.
And it DOES seem reasonable to me to think that those transformer-level innovations are fairly likely, given the massive investment of time, effort, and resources going into these problems. But this is (again) an entirely intuitive take.
So p(the next critical innovations toward ASI) seems to be the most important question here, and I would like to see more thoughts on it from people with more expertise. I guess such analyses are absent because the question is simply too speculative?