
A few facts:

• Solving intelligence is the most important problem we face because intelligence can be used to solve everything else
• We know it's possible to solve intelligence because evolution has already done it
• We've made enormous progress towards solving intelligence in the last few years

Given our situation, it's surprising that the broader world doesn't share LessWrong's fascination with AGI. If people were giving it the weight it deserved, it would overshadow even the global culture war which has dominated the airwaves for the last 5 years and pulled in so many of us.

And forget, for a moment, about true intelligence. The narrow AIs that we already have are going to shake up our lives in a big way in the immediate future. The days of the art industry have been numbered ever since the announcement of DALLE-1.

Unfortunately, despite my general enthusiasm for machine learning, I have to take issue with the common sentiment on here that AGI is right around the corner. I think it's very unlikely (<10% chance) that we'll see AGI within the next 50 years, and entirely possible (>25% chance) that it will take over 500 years.

To be precise, let's use the Future Fund's definition of AGI:

> For any human who can do any job, there is a computer program (not necessarily the same one every time) that can do the same job for $25/hr or less.

# Wishful thinking

My core thesis is that the people on this site are collectively falling victim to wishful thinking. Most of them really want to experience transformative AGI for themselves, and this is causing them to make unreasonably aggressive predictions.

The first clue that wishful thinking might be at play is that everyone's forecasts seem to put AGI smack in the middle of their lifetimes. It's very common to see predictions of AGI in 20 years, but I don't think I've ever seen a prediction in the 200-500 year range. This is strange, because in the broader context of human progress, AGI in 2040 and AGI in 2240 are not significantly different. Those years are side-by-side in a zoomed-out view of human history.

I'm reminded of Ray Kurzweil and Aubrey de Grey, who conveniently predict that immortality will be achieved right around the time they reach old age. Both men have a long enough history of predictions that we can now say pretty confidently they've been overoptimistic.

More generally, it's hard to find anyone who both:

• Feels that aging is bad; and
• Considers it extremely unlikely that aging will be solved in their lifetime

even though both of those seem like the obvious, default positions to take. When we desperately want to see a transformative technology, be it immortality or AGI, there is strong pressure on us to believe that we will. But the universe doesn't care about what we want. You could die 64 and a half years before the creation of AGI, and no laws of physics (or laws of scaling) would be violated.

# But isn't LessWrong afraid of AGI?

But hang on: the people on LessWrong don't *want* to experience AGI, because they believe it has a good chance of destroying the world. So how could AGI in the near future possibly constitute a wishful thinking scenario?
If you don't think you would like to witness AGI in your lifetime, you may want to look closer at your feelings. Imagine you're talking to a sage who knows humanity's future. You ask him about the arrival of AGI. Which of these responses would you prefer?

1. AGI doesn't appear until a few hundred years after your death.
2. AGI appears in 2040, and the sage can't see what happens beyond that point.

For me, the first creates a feeling of dread, while the second creates a feeling of excitement. Please genuinely introspect and decide which revelation you would prefer. If it is indeed the second, and you also believe there is a decent chance of AGI in your lifetime, then you must be very careful that you are not falling prey to wishful thinking.

As a specific example of what I suspect is a bit of cognitive dissonance, look at the recent post on AGI by porby, which predicts AGI by 2030. I loved reading that post because it promises that the future is going to be wild. If porby is right, we're all in for an adventure. Based on the breathless tone of the post, I would surmise that porby is as excited by his conclusion as I am. For example, we have this excerpt:

> This is crazy! I'm raising my eyebrows right now to emphasize it! Consider also doing so! This is weird enough to warrant it! Would you have predicted this in 2016? I don't think I would have!

Does this strike you as someone who dreads the arrival of AGI? It seems to me like he is awaiting it with great anticipation. But then in the comments on the post, he says that he hopes he's wrong about AGI! If you're reading this, porby: do you really want to be wrong?

One of the reasons I come to this site is that, when I read between the lines, it reassures me that we are headed for an idyllic--or at least very interesting--future. I don't think that makes me a rare exception. I think it's more likely that there is an echo chamber in effect, where everyone reinforces each other's hopes for a transhuman future.
And if that is indeed the case, then this is an environment ripe for wishful thinking about the pace of technological progress.

# The object-level question

The wishful thinking argument may be dissatisfying because it doesn't explain why AGI is likely to take more than 50 years. It merely explains why such a large number of people in these circles are saying otherwise. I will now directly explain why I do not expect AGI to arrive in the next several decades.

There are two paths by which we can reach AGI. We can study and replicate the core features of the brain (the neuroscience approach), or we can keep incrementally improving our machine learning models without explicitly copying the brain (the machine learning approach).

## The neuroscience approach

The neuroscience approach is, as far as I can tell, extremely unlikely to bear fruit within the next 50 years. My impression of neuroscientists is that they understand the brain decently well at a low level. Here's how the neurons are wired together; here are the conditions under which they fire; here are the groups of neurons that tend to fire when a person is afraid or dreaming or speaking. But they can't say anything coherent about its high-level operation, which is where the secrets of intelligence are really contained.

I will also note the air of mystery that hangs thick around psychoactive drugs and psychiatric disorders. How does general anesthesia induce unconsciousness? Why do antidepressants work? What specifically is going wrong in a brain with anxiety or BPD or schizophrenia? Ask any of these questions and you will get a mumbled answer about the gaba-D3-transmitters and the dopa-B4-receptors. It's like they're trying to figure out how a computer creates a web browser, and all they can tell you is that when you open a new tab, a lot of electrical signals begin to pass through the wires in the F-53 region of the motherboard. This is not what a mature scientific field looks like.
And even what little *is* known is constantly changing. It seems like every month I see a new article about how some discovery has rocked the field of neuroscience, and actually the brain is doing something far more sophisticated than they thought. Biology is hard.

Our understanding of the rest of the human body isn't much better. Medicine is still mostly guesswork. If you go to three doctors, you'll get three different diagnoses. We can't cause a body to lose fat or grow hair despite massive financial incentives to do so. We don't even understand C. elegans well enough to simulate it, and that is a microscopic worm with 302 neurons and fewer than a thousand cells.

Does this seem to you like a civilization that is on the verge of solving the greatest problem in biology?

## The machine learning approach

Neuroscientists are not likely to hand us the key to AGI any time soon, so if we want it by 2072, we're going to have to find it ourselves. This means we try things out, think really hard, adjust our approach, and repeat. Eventually we find the correct series of novel insights that lead to AGI (John Carmack predicts we will need 6).

Notice that this is a math problem, not an engineering problem. The internet was an engineering problem: the technology was essentially there, and it was just a matter of putting it together in the right way. The Manhattan Project and the metaverse are engineering problems. Solving intelligence is more like proving the Riemann Hypothesis, where we don't have any clue how much work it's going to take.

This is what the arguments in favor of imminent AGI ignore. They just look at the graph of our available computing power, find where it crosses the power of the human brain, and assume we will get AGI around that date. They're sweeping all of the math work--all of the necessary algorithmic innovations--under the rug. As if that stuff will just fall into our lap, ready to copy into PyTorch. But creative insights do not come on command.
It's not unheard of for a math problem to remain open for 1000 years. Geoffrey Hinton--and a cohort of other researchers--has spent the last 50 years trying to figure out intelligence with only partial success. Physicists have been seeking a theory of everything for hundreds of years and have not yet found one. Endeavors like these require us to explore many branches before we find the right one. We can easily lose 30 years to a seductive dead end. Or a field can fall into a multi-decade winter until it is revived by a maverick who overturns the prevailing orthodoxy. 50 years is the blink of an eye as far as these grand challenges of understanding go. It seems bizarrely overconfident to expect total victory in such a short timeframe.

And that's just the mathematical/theoretical part of the problem. Once we've guessed an algorithm that will in fact produce an AGI, it may still be prohibitively hard to run. We may not have enough data. Or it may need to be trained in a very complicated simulator, or worse, in the real world, where materials are expensive and the speed of time is fixed.

## Current state of the art

Some people say that we've already had the vast majority of the creative insights that are needed for AGI. For example, they argue that GPT-3 can be made into AGI with a little bit of tweaking and scaling. Rub the stars out of your eyes for a second. GPT-3 is a huge leap forward, but it still has some massive structural deficiencies. From most to least important:

1. It doesn't care whether it says correct things, only whether it completes its prompts in a realistic way
2. It can't choose to spend extra computation on more difficult prompts
3. It has no memory outside of its current prompt
4. It can't take advantage of external resources (like using a text file to organize its thoughts, or using a calculator for arithmetic)
5. It can't think unless it's processing a prompt
6. It doesn't know that it's a machine learning model

"But these can be solved with a layer of prompt engineering!" Give me a break. That's obviously a brittle solution that does not address the underlying issues. You can give your pet theories as to how these limitations can be repaired, but it's not worth much until someone actually writes the code. Before then, we can't know how difficult it will be or how many years it will take. It's possible that the whole paradigm behind GPT-3 is flawed in some basic way that prevents us from solving these problems, and we will only reach AGI when we go back and rethink the foundations. And maybe we will create a program with none of these flaws that is still lacking some necessary aspect of intelligence that I've failed to list. There are just too many unknowns here to be at all confident of AGI in the near future.

There is perhaps a better case to be made that MuZero is nearing AGI. It has a different and arguably less serious set of limitations than GPT-3. But it has only been applied to tasks like Go and Atari. Compared to the task of, say, working as a software engineer, these are minuscule toy problems.

With both GPT-3 and MuZero, there is clearly a chasm yet to be crossed, and that chasm can hide any number of multi-decade subchallenges. Looking at the eye-watering distance between state-of-the-art AI and human intelligence, I think it's unreasonable to assume we'll cross that chasm in the next 50 years.

# Conclusion

I regret having to take the side of pessimism for this article. It's really not my natural disposition when it comes to technology. Even though I don't anticipate living to see AGI, I will reiterate that we are in for quite a ride with narrow AI. We've already seen some true magic since the dawn of the deep learning revolution, and it only gets weirder from here. It's such a privilege to be able to witness it.

Sorry if any of this came off as a bit rude.
I know some people would prefer if I had skipped the psychoanalytic angle of wishful thinking and just directly made my case for a longer AGI timeline. However, I think it's important, because when we look back in 2070 and AGI remains elusive, that will be the clearest explanation of what went wrong in the heady days of 2022.

# Comments

> Rub the stars out of your eyes for a second. GPT-3 is a huge leap forward, but it still has some massive structural deficiencies. From most to least important:
>
> 1. It doesn't care whether it says correct things, only whether it completes its prompts in a realistic way
> 2. It can't choose to spend extra computation on more difficult prompts
> 3. It has no memory outside of its current prompt
> 4. It can't take advantage of external resources (like using a text file to organize its thoughts, or using a calculator for arithmetic)
> 5. It can't think unless it's processing a prompt
> 6. It doesn't know that it's a machine learning model
>
> "But these can be solved with a layer of prompt engineering!" Give me a break. That's obviously a brittle solution that does not address the underlying issues. You can give your pet theories as to how these limitations can be repaired, but it's not worth much until someone actually writes the code. Before then, we can't know how difficult it will be or how many years it will take.

1. I predict this passage in particular will age very poorly. Let's come back to it in 2025.
2. You say "We can't know how difficult it will be or how many years it will take." Well, why do you seem so confident that it'll take multiple decades? Shouldn't you be more epistemically humble / cautious? ;)

> You say "We can't know how difficult it will be or how many years it will take." Well, why do you seem so confident that it'll take multiple decades? Shouldn't you be more epistemically humble / cautious? ;)

Epistemic humility means having a wide probability distribution, which I do. The center of the distribution (hundreds of years out in my case) is unrelated to its humility. Also, the way I phrased that is a little misleading, because I don't think years will be the most appropriate unit of time. I should have said "years/decades/centuries."

Insofar as your distribution has a faraway median, that means you have close to certainty that it isn't happening soon. And that, I submit, is ridiculously overconfident and epistemically unhumble.

Your argument seems to prove too much. Couldn't you say the same thing about pretty much any not-yet-here technology, not just AGI? Like, idk, self-driving cars or more efficient solar panels or photorealistic image generation or DALL-E for 5-minute videos. Yet it would be supremely stupid to have hundred-year medians for each of these things.

[Edited to delete some distracting sentences I accidentally left in]

> Insofar as your distribution has a faraway median, that means you have close to certainty that it isn't happening soon.

And insofar as your distribution has a close median, you have high confidence that it's not coming later. Any point about humility cuts both ways.

> Your argument seems to prove too much. Couldn't you say the same thing about pretty much any not-yet-here technology, not just AGI? Like, idk, self-driving cars or more efficient solar panels or photorealistic image generation or DALL-E for 5-minute videos. Yet it would be supremely stupid to have hundred-year medians for each of these things.

The difference between those technologies and AGI is that AGI is not remotely well-captured by any existing computer program. With image generation and self-driving, we already have decent results, and there are obvious steps for improvement (e.g. scaling, tweaking architectures). 5-minute videos are similar enough to images that the techniques can be reasonably expected to carry over.
Where is the toddler-level, cat-level, or even bee-level proto-AGI?

[Replying to this whole thread, not just your particular comment]

"Epistemic humility" over distributions of times is pretty weird to think about, and imo generally confusing or unhelpful.

There's an infinite amount of time, so there is no uniform measure. Nor, afaik, is there any convergent scale-free prior. You must use your knowledge of the world to get any distribution at all.

You can still claim that higher-entropy distributions are more "humble" w.r.t. some improper prior. Which begs the question "Higher entropy w.r.t. what measure? Uniform? Log-uniform?". There's an infinite class of scale-free measures you can use here. The natural way to pick one is using knowledge about the world.

Even in this (possibly absurd) framework, it seems like "high-entropy" doesn't deserve the word "humble"--since having any reasonable distribution means you already deviated by infinite bits from any scale-free prior, am I significantly less humble for deviating by infinity+1 bits? It's not like either of us actually started from an improper prior, then collected infinite bits one-by-one, and you can say "hey, where'd you get one extra bit from?"

You can salvage some kind of humility idea here by first establishing, with only the simplest object-level arguments, some finite prior, then being suspicious of longer arguments which drag you far from that prior. Although this mostly looks like regular-old object-level argument. The term "humility" is often counterproductive, unless people already understand which exact form is being invoked.

There's a different kind of "humility," which is deferring to other people's opinions. This has the associated problem of picking whom to defer to. I'm often in favor, whereas Yudkowsky seems generally against, especially when he's the person being asked to defer (see for example his takedown of "Humbali" here).
> I'm often in favor, whereas Yudkowsky seems generally against, especially when he's the person being asked to defer (see for example his takedown of "Humbali" here).

This is well explained by the hypothesis that he is epistemically superior to all of us (or at least thinks he is).

Strong +1. See also Eliezer's comments on his Biological Anchors post for an expanded version of Daniel's point (search "entropy").

(Don't have time for a detailed response; off the top of my head:)

> Some people say that we've already had the vast majority of the creative insights that are needed for AGI. For example, they argue that GPT-3 can be made into AGI with a little bit of tweaking and scaling. Rub the stars out of your eyes for a second. GPT-3 is a huge leap forward, but it still has some massive structural deficiencies. From most to least important:
>
> 1. It doesn't care whether it says correct things, only whether it completes its prompts in a realistic way
> 2. It can't choose to spend extra computation on more difficult prompts
> 3. It has no memory outside of its current prompt
> 4. It can't take advantage of external resources (like using a text file to organize its thoughts, or using a calculator for arithmetic)
> 5. It can't think unless it's processing a prompt
> 6. It doesn't know that it's a machine learning model
>
> "But these can be solved with a layer of prompt engineering!" Give me a break. That's obviously a brittle solution that does not address the underlying issues.

I don't think that GPT-3 can be made into an AGI with "a little bit of tweaking and scaling." But I think something that's a scaled-up instructGPT (i.e., massive unsupervised pretraining on easy-to-collect data -> small amounts of instruction finetuning and RLHF) could definitely be transformative and scary.

As a meta point, I'm also not sure that GPT-3 lacking a capability is particularly strong evidence that (instruct)GPT-4 won't be able to do it.
For one, GPT-3 is a 2-year-old model at this point (even the RLHF-finetuned instructGPT is 9 months old), and SOTA has moved quite a bit beyond what it was when GPT-3 was made. Chinchilla and PaLM are both significantly better than GPT-3 on most downstream benchmarks, and even better than instructGPT on many benchmarks, for example. And we don't have public benchmarks for many of the RLHF'ed models that are deployed/will be deployed in the near future.

Responding to each of your points in turn:

1. This is true of GPT-3 (i.e. the original davinci) but kind of unimportant--after all, why should caring about saying "correct things" be necessary for transformative impact? It's also to some extent false of RLHF'ed (or otherwise finetuned) models like instructGPT, which do care about some reward model that isn't literally next-token prediction on a web corpus. Again, the reward doesn't perfectly align with saying true things, but 1) it's often the case that the models have true models of things they won't report honestly, 2) it seems possible to RLHF models to be more truthful along some metrics, and 3) why does this matter?
2. I'm not super sure this is true, even as written. I'm pretty sure you can prompt engineer instructGPT so it decides to "think step by step" on harder prompts, while directly outputting the answer on easier ones. But even if this was true, it's probably fixable with a small amount of finetuning.
3. This is true, but I'm not sure why being limited to 8000 tokens (or however many for the next generation of LMs) makes it safe? 8000 tokens can be quite a lot in practice. You can certainly get instructGPT to summarize information to pass to itself, for example. I do think there are many tasks that are "inherently" serial and require more than 8000 tokens, but I'm not sure I can make a principled case that any of these are necessary for scary capabilities.
4. As written this claim is just false even of instructGPT: https://twitter.com/goodside/status/1581805503897735168 . But even if there were certain tools that instructGPT can't use with only some prompt engineering assistance (and there are many), why are you so confident that this can't be fixed with a small amount of finetuning on top of this, or by the next generation of models?
5. Yep, this is fair. The only sense in which computation happens at all in the LM is when a prompt is fed in. But why is this important? Why is thinking only when prompted a fundamental limitation that prevents these models from being scary?
6. I think this claim is probably true of instructGPT, but I'm not sure how you'd really elicit this knowledge from a language model ("Sampling can prove the presence of knowledge, but not its absence"). I'm also not sure that it can't be fixed with better prompt engineering (maybe even just: "you are instructGPT, a large language model serving the OpenAI API"). And even if it was true that you couldn't fix it with prompt engineering, scaffolding, or finetuning, I think you'll need to say more about why this is necessary for scary capabilities.

Again, I don't think that GPT is dangerous by itself, and it seems unlikely that GPT+scaffolding will be dangerous either. That being said, you're making a much stronger claim in the "Current state of the art" section than just "GPT-3 is not dangerous": that we won't get AGI in the next 50 years. I do think you can make a principled case that models in the foundation model + finetuning + prompt engineering + scaffolding regime won't be dangerous (even over the next 50 years), but you need to do more than list a few dubiously correct claims without evidence and then scoff at prompt engineering.

A lot of your post talks about an advanced GPT being transformative or scary. I don't disagree, unless you're using some technical definition of transformative. I think GPT-3 is already pretty transformative.
But AGI goes way beyond that, and that's what I'm very doubtful is coming in our lifetimes.

> It doesn't care whether it says correct things, only whether it completes its prompts in a realistic way

> 1) it's often the case that the models have true models of things they won't report honestly 2) it seems possible to RLHF models to be more truthful along some metrics and 3) why does this matter?

As for why it matters, I was going off the Future Fund definition of AGI: "For any human who can do any job, there is a computer program (not necessarily the same one every time) that can do the same job for $25/hr or less." Being able to focus on correctness is a requirement of many jobs, and therefore it's a requirement for AGI under this definition. But there's no reliable way to make GPT-3 focus on correctness, so GPT-3 isn't AGI.

Now that I think more about it, I realize that definition of AGI bakes in an assumption of alignment. Under a more common definition, I suppose you could have a program that only cares about giving realistic completions to prompts, and it would still be AGI if it were using human-level (or better) reasoning. So for the rest of this comment, let's use that more common understanding of AGI (it doesn't change my timeline).

> It can't choose to spend extra computation on more difficult prompts

> I'm not super sure this is true, even as written. I'm pretty sure you can prompt engineer instructGPT so it decides to "think step by step" on harder prompts, while directly outputting the answer on easier ones. But even if this was true, it's probably fixable with a small amount of finetuning.

If you mean adding "think step-by-step" to the prompt, then this doesn't fully solve the problem. It still gets just one forward pass per token that it outputs. What if some tokens require more thought than others?
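To make the fixed-compute point concrete, here is a minimal sketch of greedy autoregressive decoding, with a stub standing in for a real transformer. Every emitted token costs exactly one forward pass, and there is no mechanism in the loop for spending more compute on a harder token. The function names are illustrative, not from any real library.

```python
forward_passes = 0

def forward(tokens):
    """Stub transformer forward pass: one fixed unit of compute,
    returning a fake next-token id (placeholder for argmax over logits)."""
    global forward_passes
    forward_passes += 1
    return len(tokens) % 50

def generate(prompt_tokens, n_new_tokens):
    tokens = list(prompt_tokens)
    for _ in range(n_new_tokens):
        # One pass per emitted token, regardless of how hard the
        # prompt is -- the loop has no "think longer" branch.
        tokens.append(forward(tokens))
    return tokens

out = generate([1, 2, 3], n_new_tokens=10)
# Trivial or fiendish, 10 new tokens always cost 10 forward passes.
```

Chain-of-thought prompting works around this by making the model emit *more tokens*, which buys more passes, but the compute per individual token stays fixed.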

> It has no memory outside of its current prompt

> This is true, but I'm not sure why being limited to 8000 tokens (or however many for the next generation of LMs) makes it safe? 8000 tokens can be quite a lot in practice. You can certainly get instructGPT to summarize information to pass to itself, for example. I do think there are many tasks that are "inherently" serial and require more than 8000 tokens, but I'm not sure I can make a principled case that any of these are necessary for scary capabilities.

"Getting it to summarize information to pass to itself" is exactly what I mean when I say prompt engineering is brittle and doesn't address the underlying issues. That's an ugly hack for a problem that should be solved at the architecture level. For one thing, it's not going to be able to recover its complete and correct hidden state from English text.

We know from experience that the correct answers to hard math problems have an elegant simplicity. An approach that feels this clunky will never be the answer to AGI.

> It can't take advantage of external resources (like using a text file to organize its thoughts, or using a calculator for arithmetic)

> As written this claim is just false even of instructGPT: https://twitter.com/goodside/status/1581805503897735168 . But even if there were certain tools that instructGPT can't use with only some prompt engineering assistance (and there are many), why are you so confident that this can't be fixed with a small amount of finetuning on top of this, or by the next generation of models?

It's interesting to see it calling Python like that. That is pretty cool. But it's still unimaginably far behind humans. For example, it can't interact back-and-forth with a tool, e.g. run some code, get an error, check Google about the error, adjust the code. I'm not sure how you would fit such a workflow into the "one pass per output token" paradigm, and even if you could, that would again be a case where you are abusing prompt engineering to paper over an inadequate architecture.
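For illustration, the run/observe-error/revise workflow described above might look like the loop below. Everything here is hypothetical: `model_propose_code` is a stand-in for a language model call, and the error feedback it receives on each round trip is exactly the part that a single-pass, prompt-only setup struggles to support.

```python
def model_propose_code(task, error=None):
    """Stand-in for a language model call. Returns a buggy first
    attempt, then a corrected one after seeing the error message."""
    if error is None:
        return "result = 10 / 0"   # first attempt contains a bug
    return "result = 10 / 2"       # revised attempt after feedback

def solve_with_tool_loop(task, max_attempts=3):
    error = None
    for _ in range(max_attempts):
        code = model_propose_code(task, error)
        namespace = {}
        try:
            exec(code, namespace)   # the "run the code" step
            return namespace["result"]
        except Exception as exc:
            error = repr(exc)       # feed the error back to the model
    raise RuntimeError("no working code after %d attempts" % max_attempts)

answer = solve_with_tool_loop("divide 10 by 2")
```

The interesting design question is where the loop lives: here it is external scaffolding around the model, which is precisely the kind of bolt-on workaround the post calls brittle.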

(raises hand)

I think aging is very bad[1] but don't expect it to be a solved problem any time in or near my lifetime. (I'm in my early 50s, so that lifetime is likely shorter than that of most other people around here, but I think it's pretty unlikely to be solved in say the next 50 years.)

[1] I'm not so sure that very finite lifetimes are very bad, given other difficult-to-overcome constraints the human race is working under. I can imagine possible future worlds where (post-)humans typically live 1000 years and almost all deaths are voluntary, but those future worlds need a lot of other problems solved some of which are probably harder than aging.

Same. Aging is bad, don't expect it to be solved (escape velocity reached) in my lifetime.

I also agree that nearish AGI is exciting--whether deemed good (I'd welcome it; a bet worth taking, though scary) or doom--while far AGI is relatively boring, and that this may psychologically contribute to people holding shorter timelines.

The third "fact" at the top of the original post, "We've made enormous progress towards solving intelligence in the last few years," is somewhat refuted by the rest: if it's a math-like problem, we don't know how much progress toward AGI we've made in the last few years. (I don't know enough to hold a strong opinion on this, but I hope we have! Increase variance: the animal-human experience to date is disgustingly miserable, brutal, and stupid; do better or die trying.)

> Third "fact" at the top of the original post "We've made enormous progress towards solving intelligence in the last few years" is somewhat refuted by the rest: if it's a math-like problem, we don't know how much progress toward AGI we've made in the last few years.

Yeah, it crossed my mind that that phrasing might be a bit confusing. I just meant that

• It's a lot of progress in an absolute sense, and
• It's progress in the direction of AGI.

But I believe AGI is so far away that it still requires a lot more progress.

I also believe that aging is very bad, and given no AGI and no genetic engineering for intelligence I'd expect no solution for it to happen.

I feel like you've significantly misrepresented the people who think AGI is 10-20 years away.

Two things you mention:

> Notice that this is a math problem, not an engineering problem...They're sweeping all of the math work--all of the necessary algorithmic innovations--under the rug. As if that stuff will just fall into our lap, ready to copy into PyTorch.

> But creative insights do not come on command. It's not unheard of for a math problem to remain open for 1000 years.

And with respect to scale maximalism, you write:

> Some people say that we've already had the vast majority of the creative insights that are needed for AGI. For example, they argue that GPT-3 can be made into AGI with a little bit of tweaking and scaling..."But these can be solved with a layer of prompt engineering!" Give me a break. That's obviously a brittle solution that does not address the underlying issues.

So - AGI is not (imo) a pure engineering problem as you define it, in the sense of "We have all the pieces, and just need to put them together." Some people have suggested this, but a lot of sub-20-year-timelines people don't believe it. And I haven't heard of anyone saying GPT-3 can be made into AGI with a bit of tweaking, scaling, and prompt engineering.

But I wouldn't call it a math problem as you define it either, where we have no idea how to make progress and the problem is completely unsolvable until suddenly it isn't. We have clearly made steady progress on deep learning, year after year, for at least the last decade. These include loads of algorithmic innovations which people went out and found, the same required innovations you claim we're "sweeping under the rug". We're not sweeping them under the rug, we're looking at the last ten years of progress and extrapolating it forward! We have solved problems that were thought impossible or highly intractable, like Go. We don't know exactly how long the path is, but we can definitely look back and think there is a pretty solid probability we're closer than we were last year. Maybe we need a paradigm shift to get AGI, and our current efforts will be dead ends. On the other hand, deep learning and transformers have both been paradigm shifts and they've happened in the last couple of decades - transformers are only a few years old. We could need two more paradigm shifts and still get them in the next 20 years.

The general argument I would make for <50 year timelines is this:

Over the last decade, we have been making incredibly fast progress, both in algorithms and in scaling, in deep learning.

Deep learning and transformers are both recent paradigm shifts that have led to huge capability increases in AI. We see no signs of this slowing down.

We have solved, or made massive progress on, multiple problems that people have previously predicted were highly intractable, despite not fundamentally understanding intelligence. (Go, natural language processing)

Given this, we can see that we've made fast progress, our current paradigms are scaling strongly, and we seem to have the ability to create paradigm shifts when needed. While this is far from guaranteeing AGI by 2040, or even 2070, there seems to be a very plausible path to AGI in this timeframe, to which I'd assign much more than 10% probability mass.

Also, for what it's worth - I did your thought experiment. Option 1 fills me with a deep sense of relief, and Option 2 fills me with dread. I don't want AGI, and if I were convinced you were correct about <10% AGI by 2070, I would seriously consider working on something else. (Direct work in software engineering, earning to give for global poverty, or going back to school and working to prevent aging if I found myself reluctant to give up the highly ambitious nature of the AI alignment problem.)

And I haven't heard of anyone saying GPT-3 can be made into AGI with a bit of tweaking, scaling, and prompt engineering.

I am one who says that (not certain, but high probability), so I thought I'd chime in. The main ideas behind my belief are:

1. The Kaplan and Chinchilla papers show the function relating resources to cross-entropy loss. With high probability, I believe this scaling won't break down significantly, i.e. we can get ever closer to the theoretical irreducible entropy with transformer architectures.
2. Cross-entropy loss measures the distance between two probability distributions, in this case the distribution of human-generated text (encoded as tokens) and the empirical distribution generated by the model. With high probability, I believe this measure is relevant, i.e. we can only reach a low enough cross-entropy loss when the model is capable of human-comparable intellectual work (irrespective of whether it actually does it).
3. After the model achieves the necessary cross-entropy loss and consequently becomes capable, somewhere inside it, of producing AGI-level work (as per 2), we can get the model to output that level of work with minor tweaks (I don't have specifics, but think on the level of letting the model recursively call itself on some generated text with a special output command, or some such).
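For concreteness, point 1 refers to the parametric loss curve the Chinchilla paper fits: loss decomposes into an irreducible entropy term plus power-law terms in parameter count and training tokens. The constants below are the approximate published fits; treat both the constants and the assumption that the curve keeps extrapolating as exactly that, assumptions. A minimal sketch:

```python
# Sketch of the Chinchilla-style parametric loss, L(N, D) = E + A/N^alpha + B/D^beta.
# Constants are the approximate published fits from the Chinchilla paper;
# they are assumptions here, not guarantees that the trend continues.

def chinchilla_loss(n_params: float, n_tokens: float,
                    E: float = 1.69, A: float = 406.4, B: float = 410.7,
                    alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted cross-entropy loss for a model with n_params parameters
    trained on n_tokens tokens. E is the irreducible entropy of the text."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss falls monotonically toward E as either axis is scaled up:
small = chinchilla_loss(1e9, 2e10)     # ~1B params, 20B tokens
large = chinchilla_loss(7e10, 1.4e12)  # ~Chinchilla scale: 70B params, 1.4T tokens
assert large < small     # more scale, lower predicted loss
assert large > 1.69      # but never below the irreducible term E
```

The debate above is then about whether closing the gap between `large` and `E` actually requires (and therefore implies) human-comparable capability, which the formula itself says nothing about.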

I don't think prompt engineering is relevant to AGI.

I would be glad for any information that can help me update.

Thank you--  I love hearing pessimistic takes on this.

The only issue I'd take is I believe most people here are genuinely frightened of AI.  The seductive part I think isn't the excitement of AI, but the excitement of understanding something important that most other people don't seem to grasp.

I felt this during COVID when I realized what was coming before my co-workers etc did.  There is something seductive about having secret knowledge, even if you realize it's kind of gross to feel good about it.

My main hope in terms of AGI being far off is that there's some sort of circle-jerk going on on this website where everyone is basing their opinion on everyone else, but everyone is basing it on everyone else etc etc

I mean obviously the arguments themselves are good and compelling and the true luminaries in the field have good reasons, but take for instance me.  I'm genuinely frightened of AGI and believe there is a ~10% chance my daughter will be killed by it before the end of her natural life, but honestly all of my reasons for worry boil down to "other smart people seem to think this."

Like, I get the arguments for AGI doom.  They make sense.  But the truth is if Eliezer Y came out tomorrow and said "holy shit I was wrong we don't have to worry at all because of the MHR-5554 module theorem" and then Nick Bostrom said "Yup!  Stop worrying everyone.  Thank you MHR-5554!  What a theorem!"  I would instantly stop worrying.

I think (hope?) that many people on this site are in the same boat as me.

The only issue I'd take is I believe most people here are genuinely frightened of AI.  The seductive part I think isn't the excitement of AI, but the excitement of understanding something important that most other people don't seem to grasp.

I felt this during COVID when I realized what was coming before my co-workers etc did.  There is something seductive about having secret knowledge, even if you realize it's kind of gross to feel good about it.

Interesting point. Combined with the other poster saying he really would feel dread if a sage told him AGI was coming in 2040, I think I can acknowledge that my wishful thinking frame doesn't capture the full phenomenon. But I would still say it's a major contributing factor. Like I said in the post, I feel a strong pressure to engage in wishful thinking myself, and in my experience any pressure on myself is usually replicated in the people around me.

Regardless of the exact mix of motivations, I think this--

My main hope in terms of AGI being far off is that there's some sort of circle-jerk going on on this website where everyone is basing their opinion on everyone else, but everyone is basing it on everyone else etc etc

is exactly what's going on here.

I'm genuinely frightened of AGI and believe there is a ~10% chance my daughter will be killed by it before the end of her natural life, but honestly all of my reasons for worry boil down to "other smart people seem to think this."

I have a lot of thoughts about when it's valid to trust authorities/experts, and I'm not convinced this is one of those cases. That being said, if you are committed to taking your view on this from experts, then you should consider whether you're really following the bulk of the experts. I remember a thread on here a while back that surveyed a bunch of leaders in ML (engineers at Deepmind maybe?), and they were much more conservative with their AI predictions than most people here. Those survey results track with the vibe I get from the top people in the space.

If you're reading this porby, do you really want to be wrong?

hello this is porby, yes

This made me pace back and forth for about 30 minutes, trying to put words on exactly why I felt an adrenaline spike reading that bit.

I don't think your interpretation of my words (or words similar to mine) is unique, so I decided to write something a bit longer in response.

Thanks for this post! I think it is always great when people share their opinions about timelines, and more people (even ones not directly involved in ML) should be encouraged to freely express their views without fear of being held accountable if they turn out wrong. In my opinion, even people directly involved in ML research seem too reluctant to share their timelines and how those timelines impact their work, which might be useful for others. Essentially, I think people should share their view when it is something that will influence their decision making, rather than waiting until it crosses some level of rigour/certainty, so posts like this one should receive a bit more praise (and LW should have two types of voting for posts, not just comments).

While I disagree with the overall point of the post, I agree that there is probably a lot of wishful thinking/curiosity driving this forum and impacting some predictions. However, even despite this, I still think that AGI is very close. My prediction is that TAI will happen in the next 2-5 years (70%) and AGI in the next 8 (75%). I guess it will be based on something like a scaled-up GATO pre-trained on YouTube videos with RL and some memory. The main reason is that deep learning was operating on a very small scale just two years ago (less than a billion parameters), which made it very difficult to test some ideas. The algorithmic improvements seem just too easy to come up with. For example, almost all important problems, e.g. language, vision, audio, RL, were solved or almost solved in a very short time, and the ideas there didn't require much ingenuity.

Just a slight exaggeration - if you take a five-year-old and ask him to draw a random diagram, chances are quite high that, if scaled up, it would be a SOTA architecture for something. It is just hard to test the ideas, because of the engineering difficulty and the lack of compute. However, this is likely to be overcome soon with either more money being thrown at the problem or architecture improvements - e.g. Cerebras and Graphcore seem to be doing some promising work here.

I think many people on this site over-update on recent progress.  But I also doubt the opposite extreme you're at.

I think it's very unlikely (<10% chance) that we'll see AGI within the next 50 years, and entirely possible (>25% chance) that it will take over 500 years.

Even just evolutionary meta-algorithms would probably produce runaway progress within 500 years. That is, even without humans arriving at specific, deep math insights. This is easy to imagine given the enormously higher yearly ASIC hardware fabrication we'd be seeing long before then. I don't think a 500-year timeframe would take an unexpected math obstacle; it would take a global catastrophe.

I'd give this formulation of AGI a 93% chance of happening by 2522, and 40% by 2072.  If I could manage to submit a post before December, I'd be arguing for the Future Fund prize to update to a later timeline.  But not this much later.

"Psychologizing" is when we go past arguing that a position is in fact wrong, and start considering what could have gone wrong in people's minds to make them believe something so wrong. Realistically, we can't entirely avoid psychologizing - sometimes cognitive biases are real and understanding how they potentially apply is important. Nonetheless, since psychologizing is potentially a pernicious and poisonous activity, Arbital's discussion norms say to explicitly tag or greenlink with "Psychologizing" each place where you speculate about the psychology of how people could possibly be so wrong. Again, psychologizing isn't necessarily wrong - sometimes people do make mistakes for systematic psychological reasons that are understandable - but it's dangerous, potentially degenerates extremely quickly, and deserves to be highlighted each and every time.

If you go off on an extended discourse about why somebody is mentally biased or thinking poorly, before you've argued on the object level that they are in fact mistaken about the subject matter, this is Bulverism.
- Arbital

This may be too late to get an object level response.

But I think there's a critical variable missing from this analysis that pushes the timeline much closer.

Criticality. By criticality I mean it in the sense of a nuclear pile: as you bring the pile closer to critical mass, activity rises, and once you cross the gain threshold it rises exponentially.

In the AGI case, we have a clear and obvious feedback mechanism for AGI.

Narrow AIs can propose AI architectures that may scale to AGI. They can help us design the chips they execute on, speed up the discovery of lithography settings for making smaller transistors, translate code from one slow language to a faster one, write code modules, and later on drive the robots that mass-manufacture compute nodes at colossal scales.

In addition, as you begin to approach AGI, you can include tasks in your AGI benchmark suite like "design a narrow AI that can design AGIs" (an indirect approach) or "design a better AGI that will do well on this benchmark".
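The criticality analogy can be made concrete with a toy feedback model: if each generation of AI tools multiplies the rate of progress by a gain factor g, then g below 1 fizzles out while g above 1 compounds exponentially, just like neutron multiplication in a pile. This is purely an illustrative sketch; the gain factor and numbers are hypothetical, not an empirical claim about current AI:

```python
# Toy model of "criticality": each generation of AI tools feeds back into
# producing the next generation with multiplicative gain g.
# All numbers are hypothetical and purely illustrative.

def progress(gain: float, generations: int, start: float = 1.0) -> float:
    """Capability level after `generations` rounds of feedback with factor `gain`."""
    level = start
    for _ in range(generations):
        level *= gain
    return level

# Subcritical (gain < 1): the feedback loop fizzles out.
# Supercritical (gain > 1): the same mechanism compounds exponentially.
assert progress(0.9, 50) < 0.01   # 0.9^50 decays toward zero
assert progress(1.1, 50) > 100    # 1.1^50 blows up past 100x
```

The point of the sketch is only that the qualitative behavior flips discontinuously at gain = 1, which is why a small improvement in how much narrow AI accelerates AI research could matter far more than the improvement's size suggests.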