Vladimir_Nesov · Comments (sorted by newest)
Trends in Economic Inputs to AI
Vladimir_Nesov · 18h

The LLM adoption wave is underway, which means current AI companies are receiving valuations at very high multiples relative to ARR, and correspondingly large investment rounds. Current ARR mostly matters as a lower bound (with some non-instant deceleration that should be priced in), an ingredient for models of where revenue is going. Spending is going to mostly follow what can be raised, even when that's out of touch with current levels of ARR, so comparing them is of little use. Once growth slows, spending will adjust.

The crucial question is when and where the growth will slow down, but this is extremely uncertain, especially because the levels of LLM capability attainable in the short term (and therefore the TAM) remain extremely uncertain. The relevant indicators here are ARR growth, valuation-to-ARR ratios, and TAM estimates. Currently the TAM estimates seem closest to indicating the time of a possible slowdown, if capabilities don't significantly improve. For example, it would be hard to find much more than $100bn in revenue from programming or legal assistants if they are not truly autonomous (say, $20K per year for 5M professionals). So in 2026-2028 these kinds of markets should start to visibly saturate, at least as far as Anthropic-level 10x YoY ARR growth is concerned, if capabilities don't sufficiently improve.
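For concreteness, here is a minimal back-of-the-envelope sketch of that arithmetic. The per-seat price and number of professionals are the illustrative figures from the paragraph above; the starting ARR and the exact growth rate are placeholder assumptions, not real company data.

```python
import math

# Back-of-the-envelope sketch of the TAM figure and saturation timing.
# All numbers are illustrative assumptions, not real company data.
price_per_seat = 20_000      # $/year per professional (figure from the comment)
professionals = 5_000_000    # addressable professionals (figure from the comment)
tam = price_per_seat * professionals
print(f"TAM ~ ${tam / 1e9:.0f}bn")  # ~$100bn

starting_arr = 5e9           # hypothetical current ARR, placeholder only
growth = 10                  # Anthropic-level 10x year-over-year ARR growth
years_to_saturation = math.log(tam / starting_arr, growth)
print(f"years of 10x growth until the TAM is hit: ~{years_to_saturation:.1f}")  # ~1.3
```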

ChristianKl's Shortform
Vladimir_Nesov · 2d

A Dyson sphere implies the scale of industry that makes any Earth-scale concerns trivial. If the world didn't care about Earth or its inhabitants, they'd be long gone by the time construction of a Dyson sphere was well underway, as a side effect of the smaller projects that came earlier. Conversely, if there was a sufficient level of care for them to persist in some way until that point, then for the same reasons they persisted previously, they'd also persist through construction of a Dyson sphere (even if not on the literal Earth with light from the literal Sun).

But also, the scale of industry implied by construction of a Dyson sphere makes it a dubious goal, since the Sun isn't producing energy in a way that's at all sensibly optimized or long term stable. Fusion power plants could be doing similar work better, and there is a whole lot of matter trapped in the Sun that could find much better uses. Star lifting the Sun seems very likely the better aim, even if constructing a Dyson sphere would be an intermediate step, mere scaffolding for the ultimately more worthwhile project of fully dismantling the Sun.
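As a rough order-of-magnitude illustration of why the Sun is a poor use of its own fuel, here is a minimal sketch using standard figures (solar mass and luminosity, the roughly 0.7% mass-to-energy yield of hydrogen fusion, and the common estimate that only about a tenth of the Sun's hydrogen ever fuses on the main sequence); the numbers are rounded and illustrative only.

```python
# Order-of-magnitude sketch of how little of its fusion fuel the Sun will ever use.
# Standard astrophysical figures, rounded; illustrative only.
M_sun = 2.0e30                # kg, solar mass
L_sun = 3.8e26                # W, solar luminosity
c = 3.0e8                     # m/s, speed of light
hydrogen_fraction = 0.74      # hydrogen mass fraction of the Sun
fusion_yield = 0.007          # ~0.7% of fused hydrogen mass becomes energy
fraction_ever_fused = 0.1     # ~10% of the hydrogen fuses over the main sequence

# Energy released naturally over the main-sequence lifetime vs. energy available
# if all the hydrogen were fused deliberately (e.g. after star lifting).
E_natural = M_sun * hydrogen_fraction * fraction_ever_fused * fusion_yield * c**2
E_all_hydrogen = M_sun * hydrogen_fraction * fusion_yield * c**2

years = E_natural / L_sun / 3.15e7
print(f"natural output: ~{E_natural:.1e} J over ~{years / 1e9:.0f} billion years")
print(f"share of the Sun's mass-energy ever released naturally: "
      f"~{E_natural / (M_sun * c**2):.2%}")
print(f"hydrogen fusion energy left untapped: ~{1 - E_natural / E_all_hydrogen:.0%}")
```

On these assumptions the Sun only ever taps a tiny fraction of its mass-energy, spread over a fixed multi-billion-year schedule, which is the sense in which its output is neither optimized nor something worth locking in.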

ryan_greenblatt's Shortform
Vladimir_Nesov · 3d

What are your very long timeline expectations, for 2045+ or 2055+ AGI (automated AI R&D, sure)? That's where I expect most of the rare futures with humanity not permanently disempowered to be, though the majority even of these long timelines will still result in permanent disempowerment (or extinction).

I think it takes at least about 10 years to qualitatively transform an active field of technical study or to change the social agenda. So 2-3 steps of such change might have a chance of sufficiently reshaping how the world thinks about AI x-risk, and what technical tools are available for shaping the minds of AGIs, to either make a human-initiated lasting Pause plausible or to provide the means of aligning AGIs in an ambitious sense.

Is there actually a reason to use the term AGI/ASI anymore?
Answer by Vladimir_Nesov · Sep 10, 2025

The classical meaning of AGI is something like humans, as opposed to chimps or calculators, together with the implied potential for impact; the classical meaning of ASI is something qualitatively more intelligent and immediately more impactful than humanity. The explosion of discussion around LLMs eroded the terms, but these are still the key thresholds of impact.

AGI is the point where humans are no longer needed for some kind of civilization to keep developing towards superintelligence and beyond; superintelligence is the point where humanity's efforts are washed away by its capabilities, so that it's definitely too late to fix anything the superintelligence wouldn't cooperate in fixing.

Drowning in nuance doesn't change the utility of these simple concepts where they centrally apply. But the words might need to change at some point to protect the concepts from erosion of meaning, if it gets too bad.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Vladimir_Nesov · 3d

For AGIs and agents, many approximations are interchangeable with the real thing, because they are capable of creating the real thing in the world as a separate construct, or of converging to it in behavior. Human decisions, for example, are noisy and imprecise, but on mathematical or engineering questions it's possible to converge to arbitrary certainty and precision. In a similar way, humans are approximations to superintelligence, even though they are not themselves superintelligence.

Thus many AGI-approximations may be capable of becoming or creating the real thing eventually. Not everything converges, but the distinction should be about that, not about already being there.

peterbarnett's Shortform
Vladimir_Nesov · 3d

> As in, if you extrapolate what an individual wants, that's basically a world optimized for that individual's selfishness; then there is what groups can agree on by rational negotiation, which is a kind of group selfishness, cutting out everyone who's weak enough

I think it's important to frame values around scopes of optimization, not just coalitions of actors. An individual then wants first of all their own life (rather than the world) optimized for that individual's preferences. If they don't live alone, their home might have multiple stakeholders, and so their home would be subject to group optimization, and so on.

At each step, optimization is primarily about the shared scope, and excludes most details of the smaller scopes under narrower control enclosed within. Culture and "good" would then have a lot to say about the negotiations on how group optimization takes place, but also about how the smaller enclosed scopes within the group's purview are to be relatively left alone to their own optimization, under different preferences of corresponding smaller groups or individuals.

It may be good to not cut out everyone who's too weak to prevent that, as the cultural content defining the rules for doing so is also preference that wants to preserve itself, whatever its origin (such as being culturally developed later than evolution-given psychological drives). And individuals are, in particular, carriers of culture that's only relevant for group optimization, so group optimization culture would coordinate them into agreement on some things. I think selfishness is salient as a distinct thing only because the cultural content that concerns group optimization needs actual groups to get activated in practice; without that activation, applying selfishness way out of its scope is about as appropriate as stirring soup with a microscope.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Vladimir_Nesov · 3d

AI discourse doesn't get enough TsviBT-like vitamins, so their projected toxicity if overdosed is not relevant. A lot of interventions are good in moderation, so arguments about harm from saturation are often counterproductive if taken as a call to any sort of immediately relevant action rather than theoretical notes about hypothetical future conditions.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Vladimir_Nesov · 3d

What I meant by a general domain is that it's not overly weird in the mental moves that are relevant there, so training methods that can create something that wins the IMO are probably not very different from training methods that can create things that solve many other kinds of problems. It's still a bit weird (high school math with olympiad add-ons is still a somewhat narrow toolkit), but for technical problems of many other kinds the mental-move toolkits are not qualitatively different, even if they are larger. The claim is that solving the IMO is a qualitatively new milestone from the point of view of this framing: it's evidence about the AGI potential of LLMs at near-current scale in a way that previous results were not.

I agree that there could still be gaps, and the "generality" of the IMO isn't a totalizing magic that rules out crucial remaining gaps. I'm not strongly claiming there aren't any crucial gaps, just that with the IMO as an example it's no longer obvious there are any, at least as long as the training methods used for the IMO can be adapted to those other areas, which isn't always obviously the case. And of course continual learning could prove extremely hard. But there also isn't strong evidence yet that it's extremely hard, because it hasn't been a focus for very long during the time that LLMs at current levels of capability have been available. And the capabilities of in-context learning with 50M-token contexts and even larger LLMs haven't been observed yet.

So it's a question of calibration. There could always be substantial obstructions that are present even when it's no longer obvious that they are there. But also, at some point there actually aren't any. So always suspecting currently unobservable crucial obstructions is not the right heuristic either; the prediction of when the problem could actually be solved needs to be allowed to respond to some sort of observable evidence.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Vladimir_Nesov · 3d

The question for this subthread is the scale of LLMs necessary for the first AGIs, and what the IMO results say about that. Continual learning through post-training doesn't obviously require more scale, and the IMO is an argument that the current scale is almost sufficient. It could be very difficult conceptually/algorithmically to figure out how to actually do continual learning with automated post-training, but that still doesn't need to depend on more scale for the underlying LLM; that's my point about the implications of the IMO results. Before those results, it was far less clear whether the current (or near-term feasible) scale would be sufficient for the neural net cognitive engine part of the AGI puzzle.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Vladimir_Nesov · 3d

The key things about solving IMO-level problems (it doesn't matter whether it's a proper gold or not) are difficulty reasonably close to the limit of human ability in a somewhat general domain, and correctness grading that is somewhat vague (natural language proofs, not just answers). That describes most technical problems, so it's evidence that for technical problems of various other kinds, similar methods of training are not far off from making LLMs capable of solving them, and that LLMs don't need much more scale to make that happen. (Perhaps they need a little bit more scale to solve such problems efficiently, without wasting a lot of parallel compute on failed attempts.)

More difficult problems that take a lot of time to solve (and depend on learning novel specialized ideas) need continual learning to tackle them. Currently, in-context learning is the only straightforward way of getting there, by using contexts with millions or tens of millions of tokens of tool-using reasoning traces, equivalent to years of working on a problem for a human. This doesn't work very well, and it's unclear whether it will work well enough within the remaining near-term scaling, with 5 GW training systems and the subsequent slowdown. But it's not ruled out that continual learning can be implemented in some other way, by automatically post-training the model, in which case it's not obvious that there is anything at all left to figure out before LLMs at a scale similar to today's become AGIs.
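For a rough sense of the "years of work" equivalence, here is a minimal sketch; the throughput figures (tokens per minute of deliberate reasoning, working hours and days per year) are illustrative assumptions, not numbers from the comment itself.

```python
# Rough equivalence between very long LLM contexts and human working time.
# All throughput figures are illustrative assumptions.
tokens_per_minute = 200   # ~150 words/min of deliberate reasoning, ~1.3 tokens per word
hours_per_day = 8
days_per_year = 250

tokens_per_year = tokens_per_minute * 60 * hours_per_day * days_per_year  # ~24M tokens

for context_tokens in (10_000_000, 50_000_000):
    print(f"{context_tokens / 1e6:.0f}M-token context ≈ "
          f"{context_tokens / tokens_per_year:.1f} human-years of continuous reasoning")
# prints ~0.4 and ~2.1 human-years respectively
```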

Posts

- Permanent Disempowerment is the Baseline (1mo)
- Low P(x-risk) as the Bailey for Low P(doom) (2mo)
- Musings on AI Companies of 2025-2026 (Jun 2025) (3mo)
- Levels of Doom: Eutopia, Disempowerment, Extinction (3mo)
- Slowdown After 2028: Compute, RLVR Uncertainty, MoE Data Wall (4mo)
- Short Timelines Don't Devalue Long Horizon Research (5mo)
- Technical Claims (5mo)
- What o3 Becomes by 2028 (9mo)
- Musings on Text Data Wall (Oct 2024) (1y)
- Vladimir_Nesov's Shortform (1y)