What a compute-centric framework says about AI takeoff speeds
As part of my work for Open Philanthropy I've written a draft report on AI takeoff speeds: the question of how quickly AI capabilities might improve as we approach and surpass human-level AI. Will human-level AI be a bolt from the blue, or will we have AI that is nearly as capable many years earlier?

Most of the analysis is from the perspective of a compute-centric framework, inspired by that used in the Bio Anchors report, in which AI capabilities increase continuously with more training compute and more work to develop better AI algorithms.

This post doesn't summarise the report. Instead I want to explain some of the high-level takeaways from the research, which I think apply even if you don't buy the compute-centric framework.

The framework

(h/t Dan Kokotajlo for writing most of this section)

This report accompanies and explains https://takeoffspeeds.com (h/t Epoch for building this!), a user-friendly quantitative model of AGI timelines and takeoff, which you can go play around with right now. (By AGI I mean "AI that can readily[1] perform 100% of cognitive tasks" as well as a human professional; AGI could be many AI systems working together, or one unified system.)

[Figure: Takeoff simulation with Tom's best-guess value for each parameter.]

The framework was inspired by and builds upon the previous Bio Anchors report. The "core" of Bio Anchors was a three-factor model for forecasting AGI timelines:

[Figure: Dan's visual representation of the Bio Anchors report.]

1. Compute to train AGI using 2020 algorithms. The first and most subjective factor is a probability distribution over training requirements (measured in FLOP) given today's ideas. It allows for some probability to be placed in the "no amount would be enough" bucket.
   - The probability distribution is shown by the coloured blocks on the y-axis in the above figure.
2. Algorithmic progress. The second factor is the rate at which new ideas come along, lowering AGI training requirements. Bio Anchors models this
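To make the structure of this kind of calculation concrete, here is a minimal Monte Carlo sketch of a Bio-Anchors-style timelines model: sample a 2020-algorithms training requirement (factor 1, with some mass on "no amount would be enough"), let algorithmic progress lower that requirement over time (factor 2), and find the year an assumed compute-availability trajectory first crosses it. Every number below (the shape of the requirements distribution, the halving time, the compute-growth trajectory standing in for the remaining factor, and names like `agi_year`) is an illustrative placeholder I've made up for this sketch, not a value from the report or from takeoffspeeds.com.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters; all values are assumptions, not the report's ---
N_SAMPLES = 100_000
P_NEVER = 0.10                   # mass in the "no amount would be enough" bucket
MEDIAN_LOG10_FLOP = 36.0         # median 2020-algorithms training requirement (log10 FLOP)
SIGMA_LOG10_FLOP = 3.0           # spread of that distribution, in orders of magnitude
HALVING_TIME_YEARS = 2.5         # algorithmic progress: requirements halve this often
LOG10_FLOP_2020 = 24.0           # compute available for the largest 2020 training run
FLOP_GROWTH_OOM_PER_YEAR = 0.5   # stand-in for growth in available training compute

def agi_year(required_log10_flop_2020: float) -> int | None:
    """First year in which available training compute covers the (falling) requirement."""
    for year in range(2020, 2101):
        t = year - 2020
        # Factor 2: algorithmic progress lowers the requirement over time.
        required = required_log10_flop_2020 - t / HALVING_TIME_YEARS * np.log10(2)
        # Assumed compute-availability trajectory (purely illustrative).
        available = LOG10_FLOP_2020 + t * FLOP_GROWTH_OOM_PER_YEAR
        if available >= required:
            return year
    return None  # not reached by 2100 in this draw

years = []
for _ in range(N_SAMPLES):
    if rng.random() < P_NEVER:
        continue  # this draw lands in the "no amount would be enough" bucket
    # Factor 1: sample the 2020-algorithms training requirement (log-normal in FLOP).
    requirement = rng.normal(MEDIAN_LOG10_FLOP, SIGMA_LOG10_FLOP)
    year = agi_year(requirement)
    if year is not None:
        years.append(year)

print(f"P(AGI by 2050) ≈ {sum(y <= 2050 for y in years) / N_SAMPLES:.0%}")
print(f"Median AGI year, conditional on reaching it by 2100: {int(np.median(years))}")
```

The real model at takeoffspeeds.com is much richer (it models takeoff dynamics, not just a single crossing year), but the skeleton is the same: a subjective distribution over requirements, a rate of algorithmic progress eating into those requirements, and a trajectory for the compute actually available.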
Thanks, good point re the alignment constraints being complex here.
Re the Beren post: I agree that the AI agents (/automated companies) involved in creating new businesses will be better positioned to pick up the best investments than entities that lack access to, and knowledge of, new startup opportunities.
But it still seems like the AIs that do this could totally be investing on behalf of human principals? You only need a fairly minimal sort of alignment to ensure that the AI actually gives all the money it makes back to the human.