Continuing a yearly tradition, I evaluate AI predictions from past years, and collect a convenience sample of AI predictions made this year. I prefer selecting specific predictions, especially ones made about the near term, enabling faster evaluation.
The predictions about 2025 that I evaluated (made in 2023, 2024, or 2025) mostly overestimated AI capabilities advances, although there is of course a selection effect: people making notable predictions about the near term are more likely to believe AI will be impressive in the near term.
As time goes on, "AGI" becomes a less useful term, so operationalizing predictions is especially important. Among predictions made in 2025, there is a significant cluster of people predicting very large AI effects by 2030. Observations in the coming years will disambiguate.
Predictions about 2025
2023
Jessica Taylor: "Wouldn't be surprised if this exact prompt got solved, but probably something nearby that's easy for humans won't be solved?"
The prompt: "Find a sequence of words that is: - 20 words long - contains exactly 2 repetitions of the same word twice in a row - contains exactly 2 repetitions of the same word thrice in a row"
Self-evaluation: False; I underestimated LLM progress, especially from reasoning models.
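The prompt's constraints are easy to verify mechanically, which is part of what made it a clean test. Below is a minimal Python sketch under one reading of the constraints (counting maximal runs of identical consecutive words); the prompt's wording is ambiguous, so this is just one plausible interpretation:

```python
from itertools import groupby

def check(seq: str) -> bool:
    # One reading of the prompt: exactly 20 words, exactly 2 maximal
    # runs where the same word appears twice in a row, and exactly 2
    # maximal runs where the same word appears three times in a row.
    words = seq.split()
    runs = [len(list(g)) for _, g in groupby(words)]
    return (len(words) == 20
            and sum(r == 2 for r in runs) == 2
            and sum(r == 3 for r in runs) == 2)

# A valid sequence under this reading: two doubled words, two tripled
# words, and ten distinct filler words to reach 20 words total.
sample = "a a b b b c c d d d " + " ".join(f"w{i}" for i in range(10))
print(check(sample))  # True
```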
2024
teortaxesTex: "We can have effectively o3 level models fitting into 256 Gb VRAM by Q3 2025, running at >40 t/s. Basically it’s a matter of Liang and co. having the compute and the political will to train and upload r3 on Huggingface."
Evaluation: False, but close. DeepSeek V3.1 scores worse than o3 according to Artificial Analysis. DeepSeek V3.2 scores similarly but was Q4 2025.
Jack Gallagher: "calling it now - there's enough different promising candidates rn that I bet by this time [Oct 30] next year we mostly don't use Adam anymore."
Evaluation: Partially correct. Muon is popular, and was used for Kimi K2 and GLM 4.5. Self-evaluated as: “more mixed than I expected. In particular I was expecting more algorithmic iteration on muon.”
Elon Musk: "AI will probably be smarter than any single human next year."
Evaluation: Mostly false, though jagged capabilities make evaluation difficult.
Aidan McLau: "i think it’s likely (p=.6) that an o-series model solves a millennium prize math problem in 2025"
Evaluation: False
Victor Taelin: "I'm now willing to bet up to 100k (but no more than that, I'm not Musk lol) that HOC will have AGI by end of 2025.... AGI defined as an algorithm capable of proving theorems in a proof assistant as competently as myself. (This is an objective way to say 'codes like Taelin'.)"
Evaluation: False
Predictions made in 2025 about 2025
Gary Marcus: “No single system will solve more than 4 of the AI 2027 Marcus-Brundage tasks by the end of 2025. I wouldn’t be shocked if none were reliably solved by the end of the year.”
Evaluation: Correct. AI can perhaps pass the reading comprehension task, but not four or more of the tasks.
Dario Amodei: “In 3 to 6 months… AI is writing 90 percent of the code.”
Evaluation (6 months being Sep 2025): False in the relevant sense. (“Number of lines”, for example, is not a relevant metric.)
@kimmonismus: “Give it 6 more months so that [Manus is] faster, more reliable and more intelligent and it will replace 50% of all white collar jobs.”
Evaluation: False
Miles Brundage: “‘When will we get really dangerous AI capabilities that could cause a very serious incident (billions in damage / hundreds+ of people dead)?’ Unfortunately, the answer seems to be this year, from what I can tell.”
Evaluation: Likely false. No strong indication that this is true.
Testingthewaters: “I believe that within 6 months this line of research [online in-sequence learning] will produce a small natural-language capable model that will perform at the level of a model like GPT-4, but with improved persistence and effectively no “context limit” since it is constantly learning and updating weights.”
Evaluation: False
@chatgpt21: “75% on humanity’s last exam by the end of the year.”
Evaluation: False (the high score, from Gemini 3 Pro, is 37.2%)
Predictions made in 2025
2026
Mark Zuckerberg: “We're working on a number of coding agents inside Meta... I would guess that sometime in the next 12 to 18 months, we'll reach the point where most of the code that's going toward these efforts is written by AI. And I don't mean autocomplete.”
Bindu Reddy: “true AGI that will automate work is at least 18 months away.”
Elon Musk: “I think we are quite close to digital superintelligence. It may happen this year. If it doesn't happen this year, next year for sure. A digital superintelligence defined as smarter than any human at anything.”
Emad Mostaque: “For any job that you can do on the other side of a screen, an AI will probably be able to do it better, faster, and cheaper by next year.”
David Patterson: “There is zero chance we won't reach AGI by the end of next year. My definition of AGI is the human-to-AI transition point - AI capable of doing all jobs.”
Eric Schmidt: “It’s likely in my opinion that you’re gonna see world-class mathematicians emerge in the next one year that are AI based, and world-class programmers that’re gonna appear within the next one or two years”
Julian Schrittwieser: “Models will be able to autonomously work for full days (8 working hours) by mid-2026.”
Mustafa Suleyman: “it can take actions over infinitely long time horizons… that capability alone is breathtaking… we basically have that by the end of next year.”
Victor Taelin: “AGI is coming in 2026, more likely than not”
François Chollet: “2026 [when the AI bubble bursts]? What cannot go on forever eventually stops.”
Peter Wildeford: “Currently the world doesn’t have any operational 1GW+ data centers. However, it is very likely we will see fully operational 1GW data centers before mid-2026.”
Will Brown: “registering a prediction that by this time next year, there will be at least 5 serious players in the west releasing great open models”
Davidad: “I would guess that by December 2026 the RSI loop on algorithms will probably be closed”
Teortaxes: “I predict that on Spring Festival Gala (Feb 16 2026) or ≤1 week of that we will see at least one Chinese company credibly show off with hundreds of robots.”
Ben Hoffman: “By EoY 2026 I don’t expect this to be a solved problem, though I expect people to find workarounds that involve lowered standards: https://benjaminrosshoffman.com/llms-for-language-learning/” (post describes possible uses of LLMs for language learning)
Gary Marcus: “Human domestic robots like Optimus and Figure will be all demo and very little product.”
2027
Anthropic: “we expect powerful AI systems will emerge in late 2026 or early 2027… Intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines… The ability to navigate all interfaces available to a human doing digital work today… The ability to autonomously reason through complex tasks over extended periods—hours, days, or even weeks… The ability to interface with the physical world”
Anthony Aguirre: “Humanity has got about a year or two left to decide whether we're going to replace ourselves with machines – starting individually, then as a species.”
Kevin Roose: “I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will claim they’ve created an artificial general intelligence, or A.G.I., which is usually defined as something like “a general-purpose A.I. system that can do almost all cognitive tasks a human can do… the broader point — that we are losing our monopoly on human-level intelligence, and transitioning to a world with very powerful A.I. systems in it — will be true.”
Daniel Jeffries: “AI will not be doing any job a human can do in the next year or two years.”
David Shapiro: “The curve is steepening. ASI by 2026 or 2027 confirmed.”
Taylor G. Lunt: “AI will not substantially speed up software development projects [by end of 2027]. For example, the AI 2027 prediction that 2025-quality games will be made in a single month by July 2027 is false.”
Paul Schrader: “I think we’re only two years away from the first AI feature [film].”
Miles Brundage: “very roughly, something pretty clearly superhuman in most respects by end of 2027 + also very big stuff before then”
2028
AI 2027, original: “We forecast when the leading AGI company will internally develop a superhuman coder (SC): an AI system that can do any coding tasks that the best AGI company engineer does, while being much faster and cheaper” (2028 as a rough median for April 2025 numbers).
Shane Legg: “Of course this now means a 50% chance of AGI in the next 3 years!”
Dario Amodei: “at some point we’re going to get to AI systems that are better than almost all humans at almost all tasks… a country of geniuses in a datacenter… we’re quite likely to get in the next 2-3 years”
Andrew Critch: “By year, I’d say… p(AGI by eoy 2028) = 75%... AGI = AI that at runtime, for cheaper than a human, can replace the human in the power-weighted majority of human jobs.” (He gives other probabilities for different years.)
80000 hours: “extrapolating the recent rate of progress suggests that, by 2028, we could reach AI models with beyond-human reasoning abilities, expert-level knowledge in every domain, and that can autonomously complete multi-week projects, and progress would likely continue from there.”
Sholto Douglas: “we’re near guaranteed at this point to have, effectively, models that are capable of automating any white-collar job by 2027-2028, near guaranteed end of decade… we need to make sure we pull in the feedback loops with the real world”
Dwarkesh Patel: “AI can do taxes end-to-end for my small business as well as a competent general manager could in a week: including chasing down all the receipts on different websites, finding all the missing pieces, emailing back and forth with anyone we need to hassle for invoices, filling out the form, and sending it to the IRS: 2028 [median]”
Nikola Jurkovic: “More concretely, my median is that AI research will be automated by the end of 2028, and AI will be better than humans at >95% of current intellectual labor by the end of 2029.”
Ryan Greenblatt: “I expect doubling times of around 170 days on METR’s task suite (or similar tasks) over the next 2 years or so which implies we’ll be hitting 2 week 50% reliability horizon lengths around the start of 2028.”
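As a back-of-envelope check on this extrapolation, here is a hedged sketch. The starting horizon (~2 hours at 50% reliability around mid-2025) and the "2 week" target (read as two 40-hour work weeks) are my assumptions for illustration, not Greenblatt's exact inputs:

```python
import math

# Assumed starting point: ~2-hour 50%-reliability horizon in mid-2025.
# Assumed target: "2 weeks" read as two 40-hour work weeks (80 hours).
start_hours = 2.0
target_hours = 80.0
doubling_days = 170

doublings = math.log2(target_hours / start_hours)  # ~5.3 doublings needed
days = doublings * doubling_days                   # ~905 days, ~2.5 years
print(round(doublings, 1), round(days))
```

Roughly 905 days from mid-2025 lands near the start of 2028, so the stated doubling time is consistent with the stated date under these assumptions.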
OpenAI: “In 2026, we expect AI to be capable of making very small discoveries. In 2028 and beyond, we are pretty confident we will have systems that can make more significant discoveries (though we could of course be wrong, this is what our research progress appears to indicate).”
2029
Sam Altman: “I think AGI will probably get developed during this president’s term”
METR: “If the trend of the past 6 years continues to the end of this decade, frontier AI systems will be capable of autonomously carrying out month-long projects.”
Matthew Barnett: “I don't expect the US labor force participation rate to fall by more than 50% from its current level by 2029, or for US GDP growth to surpass 15% for any year in 2025-2029. Yet I do expect AI 2027 will look ~correct based on vibes until around 2029.”
2030
Demis Hassabis: “as a benchmark for agi… the ability for these systems to invent their own hypotheses or conjectures about science, not just prove existing ones… come up with a new Riemann hypothesis… relativity back in the days that Einstein did it with the information he had… I would say probably like 3-5 years away”
Eliezer Yudkowsky: “Five whole years [until the end of humanity]? Wow, that's a lot of time. Way out of line with industry estimates.” (he clarifies it’s partially tongue-in-cheek, but his follow-up suggests he would invest more in the long term if he thought ASI was 5 years away)
@finbarrtimbers: “The bull case for robotics startups is that, if we really are within 2-5 years from AGI (which I believe), then the real world will be the next bottleneck, and starting a robotics company now is clearly the right choice.”
Daniel Faggella: “I'm nearly completely certain that in under 5 years the entire world will be rattled to its foundation, and our attenuation is eminent”
Thane Ruthenis: “I expect AGI Labs' AGI timelines have ~nothing to do with what will actually happen. On average, we likely have more time than the AGI labs say. Pretty likely that we have until 2030, maybe well into 2030s. By default, we likely don't have much longer than that.”
Tamay Besiroglu: “I bet @littmath that in 5 years AI will be able to produce Annals-quality Number Theory papers at an inference budget at or below $100k/paper, at 3:1 odds in my favor.”
Scott Alexander: “I think AI will be able to replace >50% of humans within 5 years.”
@slow_developer: “as someone working in the [AI] industry, i expect my job to be fully done by AI before 2030”
@alz_zyd_: “Prediction: AI will revolutionize pure math, essentially dominating human mathematicians within 5 years, but this won't move the needle on technological progress, because the vast majority of modern pure math is useless for any practical application”
Sam Altman: “I think we’re gonna maintain the same rate of progress, rate of improvement in these models for the second half of the decade as we did for the first… these systems will be capable of remarkable new stuff: novel scientific discovery, running extremely complex functions throughout society”
Epoch AI: “We forecast that by 2030: Training clusters would cost hundreds of billions of dollars”
Gordon Worley: “I put about 70% odds on us failing to solve steering in the next 5 years and thus being unable to build AGI.”
David Krueger: “But the real AI is coming. I think we’ve got about 5 years… I think it’s going to lead to human extinction, probably within a few years of its development. At best, it will cause near-total unemployment.”
Forecasting Research Institute: “LEAP experts forecast major effects of AI by 2030, including: ⚡ 7x increase in AI’s share of U.S. electricity use (1% -> 7%) 🖥️ 9x increase in AI-assisted work hours (2% -> 18%)”
Guive Assadi: “In order to get credit (or blame) for this prediction, I'll say that I think there is a <30% chance that unemployment will be above 10% for any six month span in the United States over the next five years.”
Michael Druggan: “After some back and forth to agree on terms @ludwigABAP and I have decided that if on Dec 31 2030 @littmath says that AI has not yet generated any interesting math research I will pay Ludwig $1000 and if he says that it has Ludwig will pay me $1000.”
2031
Max Harms: “The authors [of AI 2027] predict a strong chance that all humans will be (effectively) dead in 6 years, and this agrees with my best guess about the future.”
Zvi: “If you put a gun to my head for a typical AGI definition I’d pick 2031 [median], but with no ‘right to be surprised’ if it showed up in 2028 or didn’t show up for a while.”
Eli Lifland: “Pictured is the model trajectory with all parameters set to my median values. [automated coder 5/31; superhuman AI researcher 3/32; ASI 7/34]”
2032
Nathan Lambert: “I think automating the “AI Research Engineer (RE)” is doable in the 3-7 year range — meaning the person that takes a research idea, implements it, and compares it against existing baselines is entirely an AI that the “scientists” will interface with.”
2035
Forethought: “a century of technological progress in a decade — or far more — is more likely than not.”
Roko Mijic: “There's probably going to be an intelligence explosion in the next decade and it's going to get very messy.”
Demis Hassabis: “maybe we can cure all disease with the help of AI… within the next decade or so”
Ege Erdil: “I still think full automation of remote work in 10 years is plausible, because it’s what we would predict if we straightforwardly extrapolate current rates of revenue growth and assume no slowdown. However, I would only give this outcome around 30% chance.”
AI as Normal Technology: “we think that transformative economic and societal impacts will be slow (on the timescale of decades)”
Key Tryer: “I'm going to bet that a top consumer GPU in 2035 would be able to train a GPT-5 level system in a few days, and that containing all that data will be possible on a few consumer storage drives.”
Richard Sutton: “large language models… will not be representative of the leading edge of AI for more than a decade”
Bob McGrew: “the fundamental concepts… the idea of language models with transformers, the idea of scaling the pre-training on those language models… and then the idea of reasoning… more and more multimodal capabilities… in 2035 we’re not gonna see any new trends beyond those”
Dean Ball: “But suppose you also believe that there could be future AI systems with qualitatively different capabilities and risks, even if they may involve LLMs or resemble them in some ways. These future systems would not just be “smarter,” they would also be designed to excel at cognitive tasks where current LLMs fall… My own guess is that they will be built for the first time sometime between 2029 and 2035”
Andrej Karpathy: “I feel like the problems [with building AGI] are tractable, they’re surmountable, but they’re still difficult. If I just average it out, it just feels like a decade to me.”
Daniel Kokotajlo: “The companies seem to think strong AGI is just a few years away, and while I'm not as bullish as they are, I do expect it to happen in the next 5-10 years.”
2039
Ray Kurzweil: “In the 2030s, robots the size of molecules will go into our brains, noninvasively, through the capillaries, and will connect our brains directly to the cloud.”
2040
Liron: “Probably another 1-15 years [until FOOM].”
2045
Ilya Sutskever: “Five to twenty [years until AI can learn as well as a human]”
Andrew Ng: “Modern AI is a general purpose technology that is enabling many applications, but AI that can do any intellectual tasks that a human can (a popular definition for AGI) is still decades away or longer.”
2050
Steven Byrnes: “I don’t know when the next paradigm will arrive, and nobody else does either. I tend to say things like “probably 5 to 25 years”. But who knows!”
Past years: 2023 2024
Key Tryer: “I'm going to bet that a top consumer GPU in 2035 would be able to train a GPT-5 level system in a few days, and that containing all that data will be possible on a few consumer storage drives.”
Richard Sutton: “large language models… will not be representative of the leading edge of AI for more than a decade”
Bob McGrew: “the fundamental concepts… the idea of language models with transformers, the idea of scaling the pre-training on those language models… and then the idea of reasoning… more and more multimodal capabilities… in 2035 we’re not gonna see any new trends beyond those”
Dean Ball: “But suppose you also believe that there could be future AI systems with qualitatively different capabilities and risks, even if they may involve LLMs or resemble them in some ways. These future systems would not just be “smarter,” they would also be designed to excel at cognitive tasks where current LLMs fall… My own guess is that they will be built for the first time sometime between 2029 and 2035”
Andrej Karpathy: “I feel like the problems [with building AGI] are tractable, they’re surmountable, but they’re still difficult. If I just average it out, it just feels like a decade to me.”
Daniel Kokotajlo: “The companies seem to think strong AGI is just a few years away, and while I'm not as bullish as they are, I do expect it to happen in the next 5-10 years.”
2039
Ray Kurzweil: “In the 2030s, robots the size of molecules will go into our brains, noninvasively, through the capillaries, and will connect our brains directly to the cloud.”
2040
Liron: “Probably another 1-15 years [until FOOM].”
2045
Ilya Sutskever: “Five to twenty [years until AI can learn as well as a human]”
Andrew Ng: “Modern AI is a general purpose technology that is enabling many applications, but AI that can do any intellectual tasks that a human can (a popular definition for AGI) is still decades away or longer.”
2050
Steven Byrnes: “I don’t know when the next paradigm will arrive, and nobody else does either. I tend to say things like “probably 5 to 25 years”. But who knows!”