It seems that there are two points of particular relevance in predicting AGI timelines: (i) the median estimate, that is, the date by which the chance of AGI is believed to reach 50%, and (ii) the last date as of which the chance of AGI is still believed to be insignificant.
For purposes of this post, I am defining AGI as something that (i) outperforms average trained humans on 90% of tasks and (ii) does not routinely produce clearly false or incoherent answers. (I recognize that this definition is somewhat fuzzy, with "trained" and "tasks" both being terms susceptible to differing interpretations and difficult to apply; AGI, like obscenity, lends itself to a standard...
It's not that you have only one chance to succeed. It's that you can only lose once in a repeated game. You successfully align ASI A1. The following year you have ASI B2, which is smarter than ASI A1. Was the method used to align A1 sufficient to align B2? The good news is that you have had a year to improve your alignment methods. The bad news is that if you fail, your prior success doesn't matter. The same thing happens again the following year with ASI C3.
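To make the compounding risk concrete, here is a sketch in my own notation (not from the original argument), assuming a hypothetical independent success probability for aligning each generation of ASI: surviving indefinitely requires succeeding every single time, so the cumulative survival probability falls toward zero unless per-generation reliability improves toward certainty fast enough.

```latex
% A minimal sketch of the "you can only lose once" dynamic, assuming
% (hypothetically) independent per-generation success probabilities p_i:
\[
  P(\text{no failure after } n \text{ generations}) \;=\; \prod_{i=1}^{n} p_i
\]
% This product tends to 0 as n grows unless \sum_i (1 - p_i) < \infty,
% i.e. unless alignment reliability approaches certainty fast enough.
% With a constant p = 0.95, ten generations give 0.95^{10} \approx 0.60,
% so roughly a 40% chance of at least one failure.
```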