YC batches have grown 3x since 2016. I expect a significant market-saturation / low-hanging-fruit effect, reducing each startup's customer base compared to when there were only 200/year.
I'm surprised that's the question. I would guess that's not what Eliezer means, because he says Dath Ilan is responding sufficiently to AI risk but also hints that Dath Ilan still spends a significant fraction of its resources on AI safety (I've only read a fraction of the work here, so maybe I'm wrong). I have a background belief that the largest problems don't change that much: it's rare for a problem to go from #1 to not-in-the-top-10, and most things have diminishing returns, such that it's not worthwhile to solve them that thoroughly. An alternative definition that's spiritually similar, and that I like more, is: "What policy could governments implement such that improving AI x-risk policy would no longer be the #1 priority, if the governments were wise?" This isolates AI / puts it in context of other global problems, so the AI solution doesn't need to prevent governments from changing their minds over the next 100 years, or do whatever else needs to happen for the next 100 years to go well.
I would expect aerodynamically maneuvering MIRVs to work and not be prohibitively expensive. The closest deployed version appears to be the Pershing II (https://en.wikipedia.org/wiki/Pershing_II), which has 4 large fins. You likely don't need that much steering force.
I really struggle to think of problems you'd want to wait 2.5 years to solve: when you identify a problem, you usually want to start working on it within the month. Just update most of the way now, plus a tiny bit over time as evidence comes in. As others have commented, no doom by 2028 is very little evidence.
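A minimal sketch of the kind of update I mean, with made-up illustrative numbers (the prior and both likelihoods are assumptions, not anyone's actual forecasts):

```python
# Toy Bayes update: how much should "no doom by 2028" move you?
# All numbers are illustrative assumptions, not real forecasts.

prior_doom = 0.5                      # assumed prior P(doom eventually)
p_no_doom_by_2028_given_doom = 0.8    # assumed: most doom scenarios arrive after 2028
p_no_doom_by_2028_given_safe = 1.0    # if there's never doom, there's certainly none by 2028

posterior_doom = (p_no_doom_by_2028_given_doom * prior_doom) / (
    p_no_doom_by_2028_given_doom * prior_doom
    + p_no_doom_by_2028_given_safe * (1 - prior_doom)
)
print(f"prior: {prior_doom:.2f}, posterior after 'no doom by 2028': {posterior_doom:.2f}")
# -> prior: 0.50, posterior: 0.44 (a small update)
```

Under these assumptions the observation only moves you from 0.50 to about 0.44, which is why you'd want to do most of the updating now rather than waiting.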
I heard some rumors that GPT-4.5 got good pretraining loss but bad downstream performance. If that's true, the loss scaling laws may have worked correctly. If not, yeah, a lot of things can go wrong and something did, whether that's hardware issues, software bugs, machine learning problems, or problems with their earlier experiments.
This is OpenAI's CoT style; you can see it in the original o1 blog post: https://openai.com/index/learning-to-reason-with-llms/
I can imagine scenarios where you could end up with more resources from causing vacuum decay without extortion. For example, if you care about using resources quickly and other agents want to use resources slowly, then causing vacuum decay inside your region means the non-collapsed shell of your region only exists for a short duration; that makes it more valuable to you relative to other agents, and maybe other agents fight over it less. Or maybe you can trigger vacuum decay into a state that still supports life, and you value that state.
Whether you can cause various destructive chain reactions is pretty important. If locusts could benefit from causing vacuum collapse, or could trigger supernovae in stars, or could efficiently collapse various bodies into black holes, that could easily eat up a large fraction of the universe.
No, an AC actually moves 2-3x as much heat as its input power, so a 1500W AC will extract an additional ~3000W from inside and dump ~4500W outside.
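A quick sketch of the energy balance behind those numbers, assuming a coefficient of performance (COP) of 2 to match the figures above (real units vary, roughly 2-4):

```python
# Steady-state energy balance for an air conditioner:
#   heat dumped outside = electrical input + heat extracted from inside
#   COP = heat extracted from inside / electrical input
# COP = 2 is an assumption chosen to match the numbers above.

electrical_input_w = 1500
cop = 2.0

heat_extracted_inside_w = cop * electrical_input_w                     # 3000 W pulled from the room
heat_dumped_outside_w = electrical_input_w + heat_extracted_inside_w   # 4500 W rejected outdoors

print(f"extracted inside: {heat_extracted_inside_w:.0f} W, "
      f"dumped outside: {heat_dumped_outside_w:.0f} W")
```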
Note that since Paul started working for the US government a few years ago, he has withdrawn from public discussion of AI safety to avoid PR issues and conflicts of interest, so his public writings are significantly behind his current beliefs.