My AI Vibes are Shifting

by Nathan Young
5th Sep 2025
5 min read

5 comments

J Bostock

I would be interested to know how you think things are going to go in the 95-99% of non-doom worlds. Do you expect AI to look like "ChatGPT but bigger, broader, and better" in the sense of being mostly abstracted and boxed away into individual use cases/situations? Do you expect AIs to be ~100% in command but just basically aligned and helpful?

AnthonyC

> AI infrastructure seems really expensive. I need to actually do the math here (and I haven't! hence this is uncertain) but do we really expect growth on trend given the cost of this buildout in both chips and energy? Can someone really careful please look at this?

This is not a really careful look, but: The world has managed extremely fast (well, trains and highways fast, not FOOM-fast) large-scale transformations of the planet before. Mostly this requires that 1) the cost is worth the benefit to those spending and 2) we get out of our own way and let it happen. I don't think money or fundamental feasibility will be the limiters here.

Also, consider that training is now, or is becoming, a minority of compute. More and more is going towards inference - aka that which generates revenue. If building inference compute is profitable and becoming more profitable, then it doesn't really matter how little of the value is captured by the likes of OpenAI. It's worth building, so it'll get built. And some of it will go towards training and research, in ever-increasing absolute amounts.

Even if many of the companies building data centers die out because of a slump of some kind, the data centers themselves, and the energy to power them, will still exist. Plausibly the second buyers then get the infrastructural benefits at a much lower price - kinda like the fiber optic buildout of the 1990s and early 2000s. AKA "AI slump wipes out the leaders" might mean "all of a sudden there's huge amounts of compute available at much lower cost."

StanislavKrym

> do we really expect growth on trend given the cost of this buildout in both chips and energy?

What I expect is another series of algorithmic breakthroughs (e.g. neuralese) which rapidly increase the AIs' capabilities, if not outright FOOM them into ASI. These breakthroughs would likely make mankind obsolete.

Nathan Young

When do you expect this to happen by?

StanislavKrym

I don't know. As I discussed with Kokotajlo, he recently claimed that "we should have some credence on new breakthroughs e.g. neuralese, online learning, whatever. Maybe like 8%/yr?", but I doubt that it will be 8%/year. Denote the probability that the breakthrough has not been discovered as of time t by P(t). Then one of the models is dP/dt = −cNP, where N is the effective progress rate. This rate is likely proportional to the number of researchers hired and to their progress multipliers, since new architectures and training methods can be tested cheaply (e.g. on GPT-2 or GPT-3) but still need the ideas and the coding.

The number of researchers and coders was estimated in the AI-2027 security forecast to increase exponentially until the intelligence explosion (which the scenario's authors assumed to start in March 2027 with superhuman coders). What I don't know how to estimate is the constant c, which captures the difficulty[1] of discovering the breakthrough. If, say, c were 200 per million human-years, then about 5K human-years would likely be enough and the explosion would likely start in about 3 years. Hell, if c were 8%/yr for a company with 1K humans, the expected effort would be about 12.5K human-years, shifting the timelines to at most 5-6 years from Dec 2024... 

EDIT: Kokotajlo promised to write a blog post with a detailed explanation of the models.
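
A minimal numerical sketch of this model (every number below is an illustrative placeholder, not an estimate). With N(t) growing exponentially, P(t) has the closed form exp(−c·∫₀ᵗ N(s) ds):

```python
import numpy as np

# Sketch of dP/dt = -c * N(t) * P, where P(t) is the probability that the
# breakthrough has NOT been found t years after Dec 2024 and N(t) is the
# effective number of researchers. Placeholder assumptions:
c = 2e-4       # breakthrough rate per human-year ("200 per million human-years")
N0 = 1_000     # effective researchers at t = 0
growth = 0.5   # assumed exponential growth rate of the workforce, per year

def p_no_breakthrough(t):
    # Closed form: P(t) = exp(-c * integral_0^t N0 * exp(growth * s) ds)
    human_years = N0 * (np.exp(growth * t) - 1) / growth
    return np.exp(-c * human_years)

for t in range(1, 7):
    print(f"year {t}: P(no breakthrough yet) = {p_no_breakthrough(t):.2f}")
```

With these made-up inputs the cumulative effort passes ~5K human-years between years 2 and 3, so the chance that the breakthrough has been found crosses 50% around year 2 and 75% around year 3.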

  1. ^

    The worst-case scenario is that diffusion models are already a breakthrough.


I think vibes-wise I am a bit less worried about AI than I was a couple of years ago. Perhaps (vibes-wise) P(doom) has gone from 5% to more like 1%.[1]

Happy to discuss in the comments. I may be very wrong. I wrote this up in about 30 minutes.

Note that I still think AI is probably a very serious issue, but one to focus on and understand rather than one where we necessarily push for slowing in the next 2 years. I find this very hard to predict, so I am not making strong claims.

My current model has two kinds of AI risk:

  • AI risk to any civilisation roughly like ours
  • Our specific AI risk given where we were in 2015 and where we are right now.

Perhaps civilisations almost always end up on paths they strongly don't endorse due to AI. Perhaps AI risk is vastly overrated. Those would be considerations in the first bucket. Yudkowskian arguments feel like they mostly sit over here.

Perhaps we are making the situation much worse (or better) through our actions in the last 5 and next 3 years. That would be the second bucket. It seems much less important than the first, unless the first is like 50/50.
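
A toy illustration of the "50/50" point (the numbers are made up; this is only a sketch of the intuition, not part of any real estimate): if our specific choices shift the log-odds of a bad outcome by some fixed amount, that shift moves the bottom-line probability far more when the baseline civilisational risk is near 50% than when it is near 1%.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Bucket 1: baseline risk to "any civilisation roughly like ours".
# Bucket 2: our specific recent and near-term choices, modelled here as a
# fixed shift in log-odds (the 0.5 is an arbitrary illustrative size).
shift = 0.5

for p_baseline in (0.01, 0.05, 0.50):
    shifted = sigmoid(logit(p_baseline) + shift)
    print(f"baseline {p_baseline:.0%} -> {shifted:.1%} (change {shifted - p_baseline:+.1%})")
```

The same shift moves a 50% baseline by about 12 percentage points but a 1% baseline by well under one, which is the sense in which the second bucket only looms large if the first is near a coin flip.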

Civilisational AI risk considerations and their direction (in some rough order of importance):

  • More risk. I find it credible that this is a game we only get to play once (over some time period). AGI might lock in a bad outcome. The time period seems important here. I think if that time period is 1000 years, that’s probably fine - if we screw up 1000 years of AI development, with many choices, that sort of feels like our fault. If the time period is 10 years, then that feels like we should stop building AI now. I don’t think we are competent enough to manage 10 years the first time.
  • Unclear risk. The median "do AI right" time period seems to be about 6 years. A while back Rob and I combined a set of different timelines for AI (a rough sketch of this kind of pooling appears after this list). Currently the median is 2030 and the 90th percentile is 2045. If we say the ChatGPT launch was the beginning of the AI launch in ~2024, then about half of the probability mass falls within 6 years. That's longer than 3 years, but shorter than I'd like. On the other hand, 20 years to do this well seems better. I hope it's at the longer end.
  • More risk. We live in a more geopolitically unstable time than the last 20 years. China can credibly challenge US hegemony in a way it couldn't previously. AI development would, I predict, have been a clearly more US/European thing 20 years ago. That seems better, since there seem to be some things that Western companies obviously won't do[2] - see the viciousness of TikTok's algorithm.
  • Less risk. Humans are naturally (and understandably) apocalyptic as a species. Many times we have thought that we were likely to see the end of humanity, and almost all of those times our reasoning has been specious. In particular, we see problems but not solutions.
  • Uncertain risk. AI tools were trained on text, which seems to align them far more to human desires than one might expect. Compare this to training them primarily on the Civilisation games - what would that AI be like? That said, it now seems that if you even slightly misalign them during training, e.g. by training them to produce buggy rather than clean code, they can become misaligned in many other ways too. Perhaps giving them a strong sense of alignment also gives them a strong sense of misalignment. Likewise, future tools may not be trained on text; they might be primarily trained in RL environments.
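
Here is the rough sketch of timeline pooling referred to in the "Unclear risk" bullet above. The three component distributions are invented purely for illustration; the only figures taken from the real exercise are the pooled median (2030) and 90th percentile (2045) quoted in the bullet.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pool several (invented) AGI-timeline distributions by stacking equal numbers
# of samples, then read off quantiles of the combined forecast.
samples = np.concatenate([
    rng.normal(2029, 3, 50_000),   # hypothetical forecaster A
    rng.normal(2031, 6, 50_000),   # hypothetical forecaster B
    rng.normal(2038, 10, 50_000),  # hypothetical forecaster C
])
samples = samples[samples >= 2025]  # drop dates that are already in the past

median, p90 = np.quantile(samples, [0.5, 0.9])
print(f"pooled median: {median:.0f}, 90th percentile: {p90:.0f}")
print(f"years from a ~2024 start to the pooled median: {median - 2024:.0f}")
```

The "time to do AI right" figure in the bullet is just the gap between the assumed ~2024 start and the pooled median.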

 

More local considerations and their direction (in some rough order of importance):

  • Less risk. AI is progressing fast, but there is still a huge amount of ground to cover. Median AGI timeline vibes seem to be moving backwards (i.e. further out). This increases the chance of there being substantial time for regulation while AI grows. It decreases the chance that AI will just be 50% of the economy before governance gets its shoes on.
  • Uncertain risk. AI infrastructure seems really expensive. I need to actually do the math here (and I haven't! hence this is uncertain) but do we really expect growth on trend given the cost of this buildout in both chips and energy? Can someone really careful please look at this? (A skeleton of the arithmetic is sketched after this list.)
  • Less risk. AI revenue seems more spread. While OpenAI and Anthropic are becoming far more valuable, it looks less likely that they will be $100T companies to the exclusion of everything else. Google, Meta, Tesla, the US Government, the Chinese Government, DeepSeek, Perplexity, TikTok, Cloudflare, Atlassian, Thinking Machines. The more companies that can credibly exist and take shares of the pie at the same time, the more optimistic I am about governance sitting between them and ensuring a single actor doesn't deploy world-destroying AGI. Companies are already very powerful, but why doesn't Tesla have an army? Why doesn't Apple own ~any land where it controls the labour laws? The East India Company has fallen and no company has ever taken its crown[3]. Governance is very powerful.
  • Less risk. AI revenues more spread, part 2. The more revenue is likely to be spread between multiple companies, the harder it is to justify the extremely high expenditure on data centers that will be required to train even more powerful models. I think someone argued that this will make OpenAI/Anthropic focus on ever-greater training runs so they can find the thing they can take a large share of, but I think this has to be an update towards less risk: they could have far more revenue right now!
  • Less risk. The public is coming to understand and dislike AI. Whether it's artists, teachers, or people who don't like tech bros, many powerful forces are coming to find AI distasteful. I think these people will (often for bad reasons) jump on any reason to slow or stymie AI. The AI moratorium was blocked. People cheer when Tesla gets large fines. I don't think AI is gonna be popular with the median American (though with the median Chinese person, perhaps??).
  • More risk. Specific geopolitical conflict[4]. If China seems likely to pull ahead geopolitically, the US may pull out all the stops. That might involve injecting huge amounts of capital and buildout into AI. Currently China doesn't seem to want to race, but it is much better at building energy capacity and is trying to get its own chip production going. Let's see.
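
Here is the skeleton of the buildout arithmetic asked for in the infrastructure bullet above. Every input is an unverified placeholder, not a figure from this post or any source; the point is the shape of the calculation, not the outputs.

```python
# Back-of-envelope skeleton for "can AI revenue keep up with the buildout?".
# All inputs below are placeholders to be replaced with real estimates.
capex_per_gw = 40e9        # $ per GW of AI datacentre capacity (chips, buildings, power)
gw_built_per_year = 10     # GW of new capacity added per year
hardware_lifetime = 5      # years before the chips are effectively obsolete
opex_fraction = 0.5        # energy + operations, as a fraction of annual depreciation

annual_capex = capex_per_gw * gw_built_per_year
steady_state_fleet = annual_capex * hardware_lifetime
annual_depreciation = steady_state_fleet / hardware_lifetime   # equals annual_capex at a constant build rate
breakeven_revenue = annual_depreciation * (1 + opex_fraction)

print(f"annual capex:            ${annual_capex / 1e9:,.0f}B")
print(f"rough breakeven revenue: ${breakeven_revenue / 1e9:,.0f}B per year")
```

The question in the bullet is whether AI revenue can plausibly grow to that scale on trend, and how sensitive the answer is to each placeholder.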

What do you think I am wrong about here? What considerations am I missing? What should I focus more attention on?

 

  1. ^

    I guess I am building up to some kind of more robust calculation, but this is kind of the information/provocation phase.

  2. ^

    You might argue that China seems not to want to race or to put AI in charge of key processes, and I'd agree. But given we would have had the West regardless, this seems to make things less bad than they could have been, rather than better.

  3. ^

    Did FTX try? Like, what would the Bahamas have looked like 10 years into the FTX-success world?

  4. ^

    I may be double-counting here, but there feels like something different about general geopolitical instability versus specifically how the US and China might react to each other.