Why I think strong general AI is coming soon
I think there is little time left before someone builds AGI (median ~2030). Once upon a time, I didn't think this.

This post attempts to walk through some of the observations and insights that collapsed my estimates. The core ideas are as follows:

1. We've already captured way too much of intelligence with way too little effort.
2. Everything points towards us capturing way more of intelligence with very little additional effort.
3. Trying to create a self-consistent worldview that handles all available evidence seems to force very weird conclusions.

Some notes up front

* I wrote this post in response to the Future Fund's AI Worldview Prize[1]. Financial incentives work, apparently! I wrote it with a slightly wider audience in mind and supply some background for people who aren't quite as familiar with the standard arguments.
* I make a few predictions in this post. Unless otherwise noted, the predictions and their associated probabilities should be assumed to be conditioned on "the world remains at least remotely normal for the term of the prediction; the gameboard remains unflipped."
* For the purposes of this post, when I use the term AGI, I mean the kind of AI with sufficient capability to make it a genuine threat to humanity's future or survival if it is misused or misaligned. This is slightly more strict than the definition in the Future Fund post, but I expect the difference between the two definitions to be small chronologically.
* For the purposes of this post, when I refer to "intelligence," I mean stuff like complex problem solving that's useful for achieving goals. Consciousness, emotions, and qualia are not required for me to call a system "intelligent" here; I am defining it only in terms of capability.

Is the algorithm of intelligence easy?

A single invocation of GPT-3, or any large transformer, cannot run any algorithm internally that does not run in constant time complexity, because the model itself runs in constant time. It's a very l
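To make the constant-time point concrete, here is a minimal sketch (my own illustration, not code from GPT-3 or any real model) of a single transformer block's forward pass in NumPy. For a fixed context length and fixed weights, one invocation is the same fixed sequence of matrix multiplications no matter what the prompt encodes; there is no internal loop that runs longer when the problem is harder.

```python
# Minimal sketch (illustrative only): one transformer block's forward pass is a
# fixed sequence of matrix multiplies. Nothing here loops "until the problem is
# solved" -- the work done per invocation is constant for a given context length.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transformer_block(x, wq, wk, wv, wo, w1, w2):
    """One attention + MLP block; x has shape (context_len, d_model)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v   # fixed-size matmuls
    x = x + attn @ wo                                     # residual connection
    x = x + np.maximum(x @ w1, 0) @ w2                    # 2-layer MLP with ReLU
    return x

rng = np.random.default_rng(0)
d, ctx = 64, 16
weights = [rng.normal(scale=0.1, size=s) for s in
           [(d, d), (d, d), (d, d), (d, d), (d, 4 * d), (4 * d, d)]]
x = rng.normal(size=(ctx, d))

# However "hard" the prompt is, one invocation performs exactly this much work:
out = transformer_block(x, *weights)
print(out.shape)  # (16, 64) -- same shape, same number of operations, every time
```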