At the beginning of 2017, someone I deeply trusted said they thought AGI would come in 10 years, with 50% probability.
I didn't take their opinion at face value, especially since so many experts seemed confident that AGI was decades away. But the possibility of imminent apocalypse seemed plausible enough and important enough that I decided to prioritize investigating AGI timelines over trying to strike gold. I left the VC-backed startup I'd cofounded, and went around talking to every smart and sensible person I could find who seemed to have opinions about when humanity would develop AGI.
My biggest takeaways after 3 years might be disappointing -- I don't think the considerations currently available to us point to any decisive conclusion one way or another, and I don't think anybody really knows when AGI is coming. At the very least, the fields of knowledge that I think bear on AGI forecasting (including deep learning, predictive coding, and comparative neuroanatomy) are disparate, and I don't know of any careful and measured thinkers with all the relevant expertise.
That being said, I did manage to identify a handful of background variables that consistently play significant roles in informing people's intuitive estimates of when we'll get to AGI. In other words, people would often tell me that their estimates of AGI timelines would significantly change if their views on one of these background variables changed.
I've put together a framework for understanding AGI timelines based on these background variables. Of all the frameworks for AGI timelines I've encountered, it most comprehensively enumerates the crucial considerations, and it best explains how smart and sensible people can arrive at vastly different views on AGI timelines.
Over the next few weeks, I'll publish a series of posts about these background variables and some considerations that shed light on their values. I'll conclude by describing my framework for how they come together to explain various overall viewpoints on AGI timelines, depending on one's prior assumptions about the values of these variables.
By trade, I'm a math competition junkie, an entrepreneur, and a hippie. I am not an expert on any of the topics I'll be writing about -- my analyses will not be comprehensive, and they might contain mistakes. I'm sharing them with you anyway in the hopes that you might contribute your own expertise, correct for my epistemic shortcomings, and perhaps find them interesting.
I'd like to thank Paul Christiano, Jessica Taylor, Carl Shulman, Anna Salamon, Katja Grace, Tegan McCaslin, Eric Drexler, Vlad Firiou, Janos Kramar, Victoria Krakovna, Jan Leike, Richard Ngo, Rohin Shah, Jacob Steinhardt, David Dalrymple, Catherine Olsson, Jelena Luketina, Alex Ray, Jack Gallagher, Ben Hoffman, Tsvi BT, Sam Eisenstat, Matthew Graves, Ryan Carey, Gary Basin, Eliana Lorch, Anand Srinivasan, Michael Webb, Ashwin Sah, Yi Sun, Mark Sellke, Alex Gunning, Paul Kreiner, David Girardo, Danit Gal, Oliver Habryka, Sarah Constantin, Alex Flint, Stag Lynn, Andis Draguns, Tristan Hume, Holden Lee, David Dohan, and Daniel Kang for enlightening conversations about AGI timelines, and I'd like to apologize to anyone whose name I ought to have included, but forgot to include.
Table of contents
As I post over the coming weeks, I'll update this table of contents with links to the posts, and I might update some of the titles and descriptions.
How special are human brains among animal brains?
Humans can perform intellectual feats that appear qualitatively different from those of other animals, but are our brains really doing anything so different?
How uniform is the neocortex?
To what extent is the part of our brain responsible for higher-order functions like sensory perception, cognition, and language uniformly composed of general-purpose data-processing modules?
How much are our innate cognitive capacities just shortcuts for learning?
To what extent are our innate cognitive capacities (for example, a pre-wired ability to learn language) crutches provided by evolution to help us more quickly learn what we would have been able to learn anyway?
Are mammalian brains all doing the same thing at different levels of scale?
Are the brains of smarter mammals, like humans, doing essentially the same things as the brains of less intelligent mammals, like mice, except at a larger scale?
How simple is the simplest brain that can be scaled?
If mammalian brains can be scaled, what's the simplest brain that could be scaled in the same way? A turtle's? A spider's?
How close are we to simple biological brains?
Given how little we understand about how brains work, do we have any reason to think we can recapitulate the algorithmic function of even simple biological brains?
What's the smallest set of principles that can explain human cognition?
Is there a small set of principles that underlies the breadth of cognitive processes we've observed (e.g. language, perception, memory, attention, and reasoning), similarly to how Newton's laws of motion underlie a breadth of seemingly disparate physical phenomena? Or is our cognition more like a big mess of irreducible complexity?
How well can humans compete against evolution in designing general intelligences?
Humans can design some things much better than evolution (like rockets), and evolution can design some things much better than humans (like immune systems). Where does general intelligence lie on this spectrum?
Tying it all together, part I
My framework for what these variables tell us about AGI timelines
Tying it all together, part II
My personal views on AGI timelines