Ajeya Cotra’s Forecasting Transformative AI with Biological Anchors is, to my knowledge, the most serious effort to predict the arrival of transformative AI. It doesn’t attempt to pinpoint the exact moment we’ll get transformative AI; rather, it posits an upper bound on how long we have until then (Holden Karnofsky’s post clarifies the distinction).

The report’s methodology, as summarized by Scott Alexander, is (I sketch a toy version of the calculation in code right after the list):

1. Figure out how much inferential computation the human brain does.

2. Try to figure out how much training computation it would take, right now, to get a neural net that does the same amount of inferential computation. Get some mind-bogglingly large number.

3. Adjust for "algorithmic progress", ie maybe in the future neural nets will be better at using computational resources efficiently. Get some number which, realistically, is still mind-bogglingly large.

4. Probably if you wanted that mind-bogglingly large amount of computation, it would take some mind-bogglingly large amount of money. But computation is getting cheaper every year. Also, the economy is growing every year. Also, the share of the economy that goes to investments in AI companies is growing every year. So at some point, some AI company will actually be able to afford that mind-bogglingly large amount of money, deploy the mind-bogglingly large amount of computation, and train the AI that has the same inferential computation as the human brain.

5. Figure out what year that is.
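
To make the arithmetic concrete, here is a minimal sketch in Python of the kind of calculation this framework performs. Every number below is an illustrative placeholder rather than one of the report’s actual estimates, and the real report works with probability distributions over these quantities rather than point values:

```python
# Toy version of the biological-anchors calculation.
# All inputs are illustrative placeholders, NOT the report's estimates.
brain_flops = 1e15            # step 1: inferential FLOP/s attributed to the human brain
training_multiplier = 1e16    # step 2: training FLOP needed today per FLOP/s of inference
algo_halving_time = 2.5       # step 3: years for algorithmic progress to halve compute needs
cost_per_flop = 1e-17         # dollars per FLOP in the base year
hw_halving_time = 2.5         # years for hardware progress to halve the cost per FLOP
max_spend = 1e9               # largest plausible training budget in the base year (dollars)
spend_growth = 0.2            # fractional annual growth in willingness to spend

def year_affordable(base_year=2022, horizon=80):
    """Return the first year in which the (shrinking) cost of the training run
    fits inside the (growing) maximum budget -- steps 4 and 5."""
    training_flop_today = brain_flops * training_multiplier  # step 2's huge number
    for t in range(horizon + 1):
        required_flop = training_flop_today * 0.5 ** (t / algo_halving_time)   # step 3
        cost = required_flop * cost_per_flop * 0.5 ** (t / hw_halving_time)    # cheaper compute
        budget = max_spend * (1 + spend_growth) ** t                           # bigger budgets
        if cost <= budget:
            return base_year + t                                               # step 5
    return None

print(year_affordable())
```

The output of a toy like this is only as good as its inputs; the report’s contribution is arguing carefully for distributions over each of these quantities and then propagating the uncertainty through to a distribution over arrival years.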

Besides the human-brain compute estimate, the report also considers other biological anchors, such as a comparison between the parameter count of a transformative AI model and the parameter count of the human genome, or the total compute used by all animal brains over the course of evolution. I don’t place much weight on these anchors myself, since I find them uninformative; I tend to think the strength of this report lies in assuming that the current deep learning paradigm will lead to transformative AI and working backward from there.

Is it actually reasonable to assume that deep learning might produce transformative AI? I think so. Mark Xu’s model of how different AI algorithms/paradigms rely on computing power is compelling: each algorithm seems to have a certain “effective compute” regime within which scaling up computing power leads to predictable increases in capabilities, and beyond which capability gains stall out or become vanishingly small. It seems increasingly likely to me that we’re far from exiting the effective compute regime of modern neural networks - I think results like those in the Chinchilla paper support the idea that we have more juice to squeeze from models very similar to the ones we’ve already trained (PaLM, Gato, DALL-E, etc.). Furthermore, these models already seem to be on the cusp of having real societal impact. Copilot doesn’t seem terribly far from something that belongs in every professional software engineer’s toolkit, and it feels like DALL-E 2 could be scaled up into something that produces bespoke illustrations on demand for all sorts of needs.
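
As one concrete way to see why Chinchilla suggests remaining headroom, here is a small sketch using the parametric loss curve from that paper (Hoffmann et al. 2022). The fitted constants below are roughly the values reported in the paper; the compute budgets are arbitrary examples chosen for illustration:

```python
# Parametric loss fit from the Chinchilla paper (Hoffmann et al. 2022):
#   L(N, D) ~= E + A / N**alpha + B / D**beta
# The constants are approximately the paper's reported estimates.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params, n_tokens):
    """Predicted pretraining loss for a model with n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

def compute_optimal_split(budget_flops):
    """Rough compute-optimal allocation of a FLOP budget, using the standard
    C ~= 6 * N * D approximation and Chinchilla's ~20 tokens-per-parameter heuristic."""
    n_params = (budget_flops / (6 * 20)) ** 0.5
    return n_params, 20 * n_params

# Predicted loss keeps falling as the training budget grows tenfold at a time.
for budget in (1e21, 1e22, 1e23, 1e24):
    n, d = compute_optimal_split(budget)
    print(f"{budget:.0e} FLOP -> {n:.2e} params, {d:.2e} tokens, loss ~= {loss(n, d):.3f}")
```

The fitted curve is still sloping downward at budgets well past the models we’ve actually trained, which is the quantitative version of the “more juice to squeeze” intuition (though, as with any extrapolation, the curve could stop holding outside the fitted range).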

Given that, I think the most compelling objection to the utility of the Biological Anchors forecast is the claim that the development of human intelligence by evolution gives us no information about the development of AI. I take this to be the thrust of Eliezer Yudkowsky’s critique, aptly summarized by Adam Shimi:

> My interpretation is that he is saying that Evolution (as the generator of most biological anchors) explores the solution space in a fundamentally different path than human research. So what you have is two paths through a space. The burden of proof for biological anchors thus lies in arguing that there are enough connections/correlations between the two paths to use one in order to predict the other.
>
> In his piece, Yudkowsky is giving arguments that the human research path should lead to more efficient AGIs than evolution, in part due to the ability of humans to have and leverage insights, which the naive optimization process of evolution can't do. He also points to the inefficiency of biology in implementing new (in geological-time) complex solutions. On the other hand, he doesn't seem to see a way of linking the amount of resources needed by evolution to the amount of resources needed by human research, because they are so different.
>
> If the two paths are very different and don't even aim at the same parts of the search space, there's nothing telling you that computing the optimization power of the first path helps in understanding the second one.
>
> I think Yudkowsky would agree that if you could estimate the amount of resources needed to simulate all evolution until humans at the level of details that you know is enough to capture all relevant aspects, that amount of resources would be an upper bound on the time taken by human research because that's a way to get AGI if you have the resources. But the number is so vastly large (and actually unknown due to the "level of details" problem) that it's not really relevant for timelines calculations.

I find this argument partially convincing, but not entirely so. I don’t agree that “the two paths…don’t even aim at the same part of the search space,” since it seems to me that we’ll be optimizing AI for criteria shaped by our understanding of human intelligence. If we’re aiming for human-level, and eventually superhuman, AI capabilities, it seems likely that we’re optimizing for desiderata that are not completely uncorrelated with evolution’s. A simple example is language models - if the goal of GPT-N is to respond to any text prompt like a (super)human would, we’re obviously judging it by its ability to meet (super)human language benchmarks.

That said, I think it’s reasonable not to update strongly on this report, especially if recent (at the time) AI progress had already pushed you toward shorter or longer AI timelines. Personally, since a Deep Learning Based Development Model (and consequently a Deep Learning Based Threat Model) is my modal prediction for future AI progress, this report helps ground my short-ish timelines (how short they are depends on who you’re talking to!). I plan to address my personal timelines in an upcoming post of this sequence.


 
