davekasten

Comments (sorted by newest)
Buck's Shortform
davekasten · 3d

My opinion, FWIW, is that both "treaty" and "international agreement" (or "deal," etc.) have upsides and downsides, and it's hard to predict those considerations' political salience or direction in the long term. E.g., just a few years ago, Republicans' main complaint against the JCPOA (aka "the Iran Nuclear Deal") was that it wasn't an actual treaty and should have been, which would be a very odd argument in 2025.

I think as long as MIRI says things like "or other international agreement or set of customary norms" on occasion, it should be fine. Hearing "treaty" at first glance certainly doesn't give me a nails-on-the-chalkboard reaction, and in any long conversation I model MIRI as saying something like "look, we'd be open to other things that get this done too; we think a treaty is preferable, but we're open to something else that solves the same problem."
The title is reasonable
davekasten · 8d

The big challenge here is getting national security officials to respond to your survey! It's probably easier with former officials, but it's unclear how predictive their answers are of current officials' beliefs.

The title is reasonable
davekasten · 11d

I'm pretty sure that p(doom) is much more load-bearing for this community than for policymakers generally. And frankly, I'm like this close to commissioning a poll of US national security officials where we straight up ask, "at an X percent chance of total human extinction, would you support measures A, B, C, D, etc.?"

I strongly, strongly, strongly suspect, based on general DC pattern recognition, that if the US government genuinely believed that the AI companies had a 25% chance of killing us all, FBI agents would rain out of the sky like a hot summer thunderstorm: sudden, brilliant, and devastating.

davekasten's Shortform
davekasten · 13d

Heads up -- if you're 1. on an H-1B visa AND 2. currently outside the US, there is VERY IMPORTANT, EXTREMELY TIME-SENSITIVE stuff going on that might prevent you from getting back into the US after 21 September.

If this applies to you, immediately stop looking at LessWrong and look at the latest news. (I'm not providing a summary here because there are conflicting stories about who it will apply to, the situation is evolving hour by hour, and I don't want this post to be out of date.)

How To Dress To Improve Your Epistemics
davekasten · 15d

Ivy Style Any% Speedrun Complete

davekasten's Shortform
davekasten · 18d

If you're someone who has[1], or will have, read If Anyone Builds It, Everyone Dies, I encourage you to post your sincere and honest review of the book on Amazon -- I think it would be useful to the book's overall reputation.

But be a rationalist!  Give your honest opinion. 

When:

If you've already read it: Once Amazon accepts reviews, likely starting on the book launch date tomorrow.

If you haven't read it: Once you've read it.  Especially if you've ordered a copy from Amazon so they know the review is coming from a verified purchaser of the book.

[1] Advance reader copies.

chanamessinger's Shortform
davekasten · 24d

I also think this is likely to cause folks to look into the situation and ask, "is it really this bad?" It's helpful to point them to the fact that yes, Yudkowsky and Soares are accurately reporting that the AI CEOs think they're gambling with the world at roughly Russian-roulette odds.[1] I also think it's important to emphasize that a bunch of us have a bunch of disagreements with them, whether nuanced or blunt, and are still worried.

Why? Because lots of folks live in denial that it's even possible for AI as smart as humans to exist one day, much less for superintelligent AI to arrive soon. Often their defense mechanism is to pick at bits of the story. Reinforcing that you can pick at bits of the story and still be worried is a helpful thing.

[1] Not trying to pick round ninety zillion of the fight about whether this is a good or bad idea, etc.!  

Anthropic's leading researchers acted as moderate accelerationists
davekasten · 1mo

I honestly haven't thought especially deeply about the LTBT, and this is zero percent a claim about the LTBT. But as someone who has written a decent number of PowerPoint decks that went to boards, and who used to be a management consultant and corporate strategy team member, I would generally be dissatisfied with the claim that a board's most relevant metric is how many seats it currently has filled (so long as it has enough filled to meet quorum).

As just one example, it is genuinely way easier than you think for a board to keep a giant binder full of "people we can emergency-appoint to the board, if we really gotta" and to choose not to exercise that binder because, conditional on no emergency, they genuinely and correctly prefer to wait for an appointee who has an annoying conflict they're in the process of resolving (e.g., selling off shares in a competitor, or waiting out a post-government-employment "quiet period," or similar).
davekasten's Shortform
davekasten · 1mo

I'm about to embark on the classic exercise of "think a bunch about AI policy."

Does anyone actually have an up-to-date collection of "here are all the existing AI safety policy proposals out there"?

(Yes, I know, your existing proposal is already great and we should just implement it as-is. Think of the goal of this exercise as convincing someone else who needs to see a spreadsheet of "here are all the ideas, and here is why idea number three is the best one.")

Lessons from the Iraq War for AI policy
davekasten · 3mo

I think this is somewhat true, but in Washington it's also about becoming known as "someone to go talk to about this," whether or not they're your ally. Being helpful and genial and hosting good happy hours is surprisingly influential.

Posts (sorted by new)

- [Cross-post] Every Bay Area "Walled Compound" (43 karma · 8mo · 3 comments)
- [Cross-post] Welcome to the Essay Meta (14 karma · 9mo · 2 comments)
- Dave Kasten's AGI-by-2027 vignette (49 karma · 10mo · 8 comments)
- A Narrow Path: a plan to deal with AI extinction risk (74 karma · Ω · 1y · 12 comments)
- [Cross-post] Book Review: Bureaucracy, by James Q Wilson (12 karma · 1y · 0 comments)
- davekasten's Shortform (2 karma · 1y · 96 comments)