AI forecasting & strategy at AI Impacts. Blog: Not Optional.
My best LW posts are Framing AI strategy and Slowing AI: Foundations. My most useful LW posts are probably AI policy ideas: Reading list and Ideas for AI labs: Reading list.
See also the new piece The Illusion of China’s AI Prowess: Regulating AI Will Not Set America Back in the Technology Race by Helen Toner, Jenny Xiao, and Jeffrey Ding.
To briefly mention one way your skepticism proves too much (or rests on hidden assumptions?): clearly, sufficiently strong capability evals, run during training and enforced by governments monitoring training runs, would ~suffice to prevent dangerous training runs.
Assuming you mean the second 42 ("AGI labs take measures to limit potential harms that could arise from AI systems being sentient or deserving moral patienthood"): I also don't know what labs should do, so I asked an expert yesterday and will reply here if they know of good proposals.
I think 19 ideas got >90% agreement.
I agree the top ideas overlap. The reasons one might support some over others depend on the details.
Briefly: with arbitrarily good methods, we could train human-level AI with very little hardware. Claims about hardware are only meaningful relative to a given level of algorithmic progress.
Put another way: nothing turns on whether the hardware sufficient for human-level AI, given arbitrarily good methods, already exists.
(Also note that what's relevant for forecasting or decision-making is how much hardware is actually being used, and how much a lab could use if it wanted, not the global supply of hardware.)
I am very glad you did this because in worlds where survey results look like this, I think it's good and important to make that easily legible to AI safety community outsiders. [Edit: and good and important to set a good example for other labs.]
Good post. You changed my mind moderately (and somehow did so without presenting many new-to-me facts/arguments, perhaps by combating an irrational sense that it couldn't happen here [edit: or, more charitably to myself, perhaps just by causing me to think about it]). Focusing on America: I have credence ~40% in a Republican trifecta in 2025, and credence ~30% that Republicans do wild bad stuff in 2025–2026 (mostly from Republican-trifecta worlds, but also some from Republican-president-but-no-trifecta worlds). I probably shouldn't try to do anything about this, but it's worth Noticing.