AI forecasting & strategy at AI Impacts. Blog: Not Optional.
My best LW post is Framing AI strategy.
The dataset is public and includes a question "how long have you worked in" the "AI research area [you have] worked in for the longest time," so you could check something related!
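For example, here's a minimal sketch of such a check in Python (the file name and column name are hypothetical; the actual public export's naming may differ):

```python
import pandas as pd

# Hypothetical file and column names; adjust to match the real export of the survey data.
df = pd.read_csv("ai_impacts_survey.csv")

# Years the respondent has worked in the AI research area they've worked in for the longest time.
years = pd.to_numeric(df["years_in_longest_area"], errors="coerce").dropna()

print(years.describe())       # distribution summary (count, mean, quartiles, ...)
print((years >= 10).mean())   # e.g., share of respondents with 10+ years in their area
```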
Yeah, these seem like useful concepts in some contexts too.
I don't understand this sentence:
"Post-AGI compute overhang" here describes the gap between compute used to build AGI in the first place vs. more efficient designs that AI-aided progress could quickly discover.
It's the gap between the training compute of 'the first AGI' and what?
I largely agree. But I think not-stacking is only slightly bad, because I think the "crappy toy model [where] every alignment-visionary's vision would ultimately succeed, but only after 30 years of study along their particular path" is importantly wrong; I think many new visions have a decent chance of succeeding more quickly, and if we pursue enough different visions, we have a good chance of at least one paying off quickly.
Edit: even if alignment researchers could stack into just a couple paths, I think we might well still choose to go wide.
Combating bad regulation would be the obvious way.
In seriousness, I haven't focused on interventions to improve regulation yet; I just noticed a thing about public opinion and wrote it up. (And again, some possible regulations would be good.)
But please consider: are you calling for regulation because it actually makes sense, or because it's the Approved Answer to problems?
I didn't call for regulation.
Some possible regulations would be good and some would be bad.
I do endorse trying to nudge regulation to be better than the default.
I expect to update this comment with additional sources—and perhaps new analytic frames—as I become aware of them and they become public. Last updated 21 March 2023.
Affordances:
Intermediate goals:
Theories of victory:
Memes & frames:
Leverage:
If I were rewriting this post today, I would probably discuss something like object-level frames or strategic perspectives. They make aspects of a situation more salient; whether or not they're true, and whether or not they're the kind of thing that can be true, they can be useful. See Matthijs Maas's Strategic Perspectives on Transformative AI Governance for illustration.
This largely feels true.
But if someone is disposed to believe P because of strong argument X, the existence of weak arguments for P doesn't seem to dissuade them.
There's a related phenomenon where, separate from what people believe and how they evaluate arguments, your adversaries will draw attention to the most objectionable things you say, and a movement's adversaries will draw attention to the most objectionable things a member of the movement says.
I roughly support slowing AI progress (although the space of possibilities has way more dimensions than just slow vs fast). Some takes on "Reasons one might try to accelerate progress":
Interesting, thanks.
(I agree in part, but (1) planning for far/slow worlds is still useful, and (2) I meant more like "metrics or model evaluations are part of an intervention, e.g. incorporated into safety standards" than "metrics inform what we try to do.")
AI strategy research projects, project generators, & prompts
Mostly for personal use. Likely to be expanded over time.
Some prompts inspired by Framing AI strategy:
For forecasting prompts, see List of uncertainties about the future of AI.
Some miscellaneous prompts: