Implications of the inference scaling paradigm for AI safety
David Abecassis · 10mo · 74

Great post!

My own timelines shortened with the maturation of inference scaling. Previously, we projected progress from continued scaling of training compute, plus uncertainty about whether and how soon other algorithmic or scaffolding improvements would arrive. Now we are seeing clear impacts from inference scaling, and that's worth an update.

I agree with most of your analysis of deployment overhang, with one exception: eyeballing the relative scale of training compute to inference compute suggests speed could still be a major factor. If a leading lab switches its compute from development to inference, we could still face a large number of very fast AIs. That switch may now require the kind of compute only a lab or major government has access to, rather than a rogue AI or a terrorist group, but the former is what I've been worried about for loss of control all along, so this doesn't change my view of the risk. And, as you point out, per-token inference costs are falling rapidly.
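The training-to-inference switch can be made concrete with a back-of-envelope sketch. Every number below is an illustrative placeholder I'm assuming for the exercise, not a figure from the post or from any lab; the point is only that dividing a training-scale compute budget by a per-token inference cost yields token throughput equivalent to a very large number of human-speed workers.

```python
# Back-of-envelope sketch: what happens if a lab redirects its training
# cluster to inference? All constants are hypothetical placeholders.
TRAINING_FLOPS_PER_SEC = 1e20  # assumed cluster throughput devoted to training
FLOPS_PER_TOKEN = 2e12         # assumed inference cost per generated token
HUMAN_TOKENS_PER_SEC = 10      # rough human reading/writing speed

# Token throughput if the whole training budget serves inference instead.
tokens_per_sec = TRAINING_FLOPS_PER_SEC / FLOPS_PER_TOKEN

# Express that throughput as an equivalent number of human-speed workers.
equivalent_workers = tokens_per_sec / HUMAN_TOKENS_PER_SEC

print(f"{tokens_per_sec:.0e} tokens/s ~ {equivalent_workers:.0e} human-speed workers")
```

Under these made-up constants the switch yields millions of human-speed equivalents; the qualitative conclusion (that the overhang is large) is robust to substantial changes in the placeholder values.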
