I am largely convinced that p(doom) is exceedingly high if there is an intelligence explosion, but I'm somewhat unconvinced about the likelihood of ASI sometime soon.
Reading this, the most salient line of thinking to me is the following:
If we assume that ASI is possible at all, how many innovations as significant as transformers do we need to get there? Eliezer guesses '0 to 2', which seems reasonable. I have minimal basis to make any other estimate.
And it DOES seem reasonable to me to think that those transformer-level innovations are reasonably likely, g...