Yeah, it goes out of its way to say the opposite, but if you know Nate and Eliezer, the book gives the impression that their p(doom)s are still extremely high. Responding to the authors' beliefs even when those aren't exactly what the text says is sometimes correct, although not really in this case.
If you have a lump of 7,000 neurons, each one can connect to every other neuron, and you can spherical-cow approximate that as a 7000x7000 matrix multiplication. That matrix multiplication all happens within O(1) spikes, 1/100 of a second, and costs ~2 x 7000^3 ≈ 700 GFlop. An H100 GPU takes ~1 millisecond, or ~1M clock cycles, to do that operation, i.e. to approximate one brain spike cycle! And the GPU has ~70B transistors, so it's more like 10M transistors per neuron!
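A rough sanity check of that arithmetic in code. The GPU figures here (~1 PFLOP/s throughput, ~1.8 GHz clock, ~70B transistors) are round assumed numbers rather than exact H100 specs, and the matmul is costed as a full 7000x7000 matrix-matrix product, per the spherical-cow model above:

```python
# Back-of-envelope: a lump of neurons as a matmul vs. an H100.
# All hardware numbers are rough round figures, not exact specs.

N_NEURONS = 7_000            # the lump of neurons
SPIKE_PERIOD_S = 1 / 100     # one spike cycle, ~1/100 of a second

# Spherical-cow model: all-to-all interaction as a 7000x7000 matrix-matrix
# multiply, ~2 * N^3 floating point operations.
flops_per_cycle = 2 * N_NEURONS ** 3             # ~6.9e11 ≈ 700 GFlop

H100_FLOPS_PER_S = 1e15      # ~1 PFLOP/s dense low-precision throughput (assumed)
H100_CLOCK_HZ = 1.8e9        # ~1.8 GHz clock (assumed)
H100_TRANSISTORS = 70e9      # ~70B transistors (assumed)

gpu_time_s = flops_per_cycle / H100_FLOPS_PER_S  # ~0.7 ms per "spike cycle"
gpu_cycles = gpu_time_s * H100_CLOCK_HZ          # ~1e6 clock cycles
transistors_per_neuron = H100_TRANSISTORS / N_NEURONS  # ~1e7

print(f"{flops_per_cycle:.1e} flop per spike cycle")
print(f"GPU time: {gpu_time_s * 1e3:.2f} ms (~{gpu_cycles:.1e} cycles)")
print(f"GPU is ~{SPIKE_PERIOD_S / gpu_time_s:.0f}x faster than the 10 ms spike cycle")
print(f"~{transistors_per_neuron:.1e} transistors per neuron")
```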
Note that since Paul started working for the US government a few years ago, he has withdrawn from public discussion of AI safety to avoid PR issues and conflicts of interest, so his public writings are significantly behind his current beliefs.
YC batches have grown 3x since 2016. I expect a significant market-saturation / low-hanging-fruit effect, reducing the customer base of each startup compared to when there were only ~200 per year.
I'm surprised that's the question. I would guess that's not what Eliezer means, because he says Dath Ilan is responding sufficiently to AI risk but also hints that Dath Ilan still spends a significant fraction of its resources on AI safety (I've only read a fraction of the work here, so maybe I'm wrong). I have a background belief that the largest problems don't change that much: it's rare for a problem to go from the #1 problem to not-in-the-top-10, and most things have diminishing returns such that it's not worthwhile to solve them that thoroughly. An alternative definition that's spiritually similar, and that I like more, is: "What policy could governments implement such that improving AI x-risk policy would no longer be the #1 priority, if the governments were wise?" This isolates AI / puts it in the context of other global problems, so the AI solution doesn't need to prevent governments from changing their minds over the next 100 years, or whatever else needs to happen for the next 100 years to go well.
I would expect aerodynamically maneuvering MIRVs to work and not be prohibitively expensive. The closest deployed version appears to be https://en.wikipedia.org/wiki/Pershing_II which has 4 large fins. You likely don't need that much steering force.
I really struggle to think of problems where you'd want to wait 2.5 years before solving them - when you identify a problem, you usually want to start working on it within the month. Just update most of the way now, plus a tiny bit over time as evidence comes in. As others commented, no doom by 2028 is very little evidence.
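As a toy illustration of that last point (all numbers here are made up for the example, not anyone's actual estimates): if your prior were P(doom eventually) = 0.5 and only 20% of doom scenarios hit before 2028, then observing no doom by 2028 barely moves you:

```python
# Toy Bayesian update with made-up numbers, purely for illustration.
p_doom = 0.5                    # assumed prior P(doom eventually)
p_by_2028_given_doom = 0.2      # assumed P(doom arrives by 2028 | doom eventually)

# Observation: no doom by 2028.
likelihood_doom = 1 - p_by_2028_given_doom   # 0.8
likelihood_no_doom = 1.0                     # no-doom worlds never show doom by 2028

posterior = (p_doom * likelihood_doom) / (
    p_doom * likelihood_doom + (1 - p_doom) * likelihood_no_doom
)
print(f"P(doom) after observing no doom by 2028: {posterior:.2f}")  # ~0.44
```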
I heard some rumors that GPT-4.5 got good pretraining loss but bad downstream performance. If that's true, the loss scaling laws may have worked correctly. If not, yeah, a lot of things can go wrong and something did, whether that's hardware issues, software bugs, machine learning problems, or problems with their earlier experiments.
This is OpenAI's CoT style. See it in the original o1 blog post: https://openai.com/index/learning-to-reason-with-llms/
I support a magically enforced 10+ year AGI ban. It's hard for me to concretely imagine a ban enforced by governments, because it's hard to disentangle what that counterfactual government would be like, but I support a good government-enforced AGI slowdown. I do like it when people shout doom from the rooftops, though, because it's better for my beliefs to be closer to the global average, and the global discourse is extremely far from overshooting on doominess.