The big challenge here is getting national security officials to respond to your survey! It's probably easier with former officials, but it's unclear how predictive their views are of current officials' beliefs.
I'm pretty sure that p(doom) is much more load-bearing for this community than for policymakers generally. And frankly, I'm like this close to commissioning a poll of US national security officials where we straight up ask, "at X% chance of total human extinction, would you support measures A, B, C, D, etc.?"
I strongly, strongly, strongly suspect, based on general DC pattern recognition, that if the US government genuinely believed the AI companies had a 25% chance of killing us all, FBI agents would rain out of the sky like a hot summer thunderstorm: sudden, brilliant, and devastating.
Heads up -- if you're 1. on an H-1B visa AND 2. currently outside the US, there is VERY IMPORTANT, EXTREMELY TIME-SENSITIVE stuff going on that might prevent you from getting back into the US after 21 September.
If this applies to you, immediately stop looking at LessWrong and look at the latest news. (I'm not providing a summary here because there are conflicting stories about who it will apply to, the situation is evolving hour by hour, and I don't want this post to be out of date.)
Ivy Style Any% Speedrun Complete
If you're someone who has[1], or will have, read If Anyone Builds It, Everyone Dies, I encourage you to post your sincere and honest review on Amazon once you have read it -- I think it would be useful to the book's overall reputation.
But be a rationalist! Give your honest opinion.
When:
- If you've already read it: Once Amazon accepts reviews, likely starting on the book launch date tomorrow.
- If you haven't read it: Once you've read it. Especially if you've ordered a copy from Amazon, so the review is marked as coming from a verified purchaser of the book.
[1] Advance reader copies.
I also think this is likely to cause folks to look into the situation and ask, "is it really this bad?" It's helpful to point them to the fact that yes, Yudkowsky and Soares are accurately reporting that the AI CEOs think they're gambling with the world at roughly Russian-roulette odds.[1] I also think it's important to emphasize that a bunch of us have a bunch of disagreements with them, whether nuanced or blunt, and are still worried.
Why? Because lots of folks live in denial that it's even possible for AI as smart as humans to exist one day, much less for superintelligent AI to arrive soon. Often their defense mechanism is to pick at bits of the story. Reinforcing that you can pick at bits of the story and still be worried is a helpful thing.
[1] Not trying to pick round ninety zillion of the fight about whether this is a good or bad idea, etc.!
I honestly haven't thought in any depth about the LTBT, and this is zero percent a claim about the LTBT. But as someone who has written a decent number of PowerPoint decks that went to boards, and who used to be a management consultant and corporate strategy team member, I would generally be dissatisfied with the claim that a board's most relevant metric is how many seats it currently has filled (so long as it has enough filled to meet quorum).
As just one example, it is genuinely easier than you'd think for a board to have a giant binder full of "people we can emergency-appoint to the board, if we really gotta" and to be choosing not to exercise that binder because, conditional on no emergency, they genuinely and correctly prefer to wait for a candidate who is in the process of resolving an annoying conflict (e.g., selling off shares in a competitor, or waiting out a post-government-employment "quiet period," or similar).
I'm about to embark on the classic exercise of "think a bunch about AI policy."
Does anyone actually have an up-to-date collection of "here are all the existing AI safety policy proposals out there"?
(Yes, I know, your existing proposal is already great and we should just implement it as-is. Think of the goal of this exercise as convincing someone else who needs to see a spreadsheet of "here are all the ideas, and here is why idea number three is the best one.")
I think this is somewhat true, but in Washington it's also about becoming known as "someone to go talk to about this," whether or not they're your ally. Being helpful and genial and hosting good happy hours is surprisingly influential.
My opinion, FWIW, is that both "treaty" and "international agreement" (or "deal," etc.) have upsides and downsides. And it's hard to predict those considerations' political salience or direction in the long term -- e.g., just a few years ago, Republicans' main complaint against the JCPOA (aka "the Iran Nuclear Deal") was that it wasn't an actual treaty and should have been, which would be a very odd argument in 2025.
I think as long as MIRI says things like "or other international agreement or set of customary norms" on occasion, it should be fine. Hearing "treaty" at first glance certainly doesn't nails-on-the-chalkboard me, and in any long convo I model MIRI as saying something like, "or look, we'd be open to other things that get this done too; we think a treaty is preferable but are open to something else that solves the same problem."