[Interview w/ Jeffrey Ladish] Applying the 'security mindset' to AI and x-risk

by fowlertm
11th Apr 2023
Though I've been following the AI safety debate for a decade or so, I've had relatively few conversations with the relevant experts on the Futurati Podcast. 

Having updated with the release of GPT-4, however, I'm working to change that. 

I recently had a chance to sit down with Jeffrey Ladish to talk about global catastrophic risk, the economic incentives around building goal-directed systems, fragile values, the prospects of predicting discontinuities in capability, how far scaling can take us, and more.

Though I imagine most of this will be review for the LW crowd, if you think there's anyone else who would enjoy the conversation, consider sharing it. I'd like to devote more time to AI safety and x-risk, but I won't do that unless I can see that people are getting value out of it (and I operationalize 'people getting value out of it' with view counts and tweets).