AI Safety: A Climb To Armageddon?

by kmenou
1st Jun 2024
AI Alignment Forum
1 min read

This is a linkpost for https://arxiv.org/abs/2405.19832

by Herman Cappelen, Josh Dever and John Hawthorne

Abstract: This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: Optimism, Mitigation, and Holism. Each faces challenges stemming from intrinsic features of the AI safety landscape that we term Bottlenecking, the Perfection Barrier, and Equilibrium Fluctuation. The surprising robustness of the argument forces a re-examination of core assumptions around AI safety and points to several avenues for further research.

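To make the abstract's expected-utility claim concrete, here is a minimal toy model, not taken from the paper, that encodes its three assumptions: failure eventually occurs with some per-step probability, the harm from a failure scales with the system's power at that moment, and a "safety measure" only lowers the per-step failure probability while capability keeps growing. Every name and number below (`expected_harm`, `p_fail_per_step`, the growth rate, the horizon) is an illustrative assumption, not anything specified by the authors.

```python
# Toy model (not from the paper): expected harm when a safety measure only
# lowers the chance of failure per step while capability keeps growing.

def expected_harm(p_fail_per_step: float, growth_rate: float, horizon: int) -> float:
    """Expected harm if the system fails with probability p_fail_per_step at each
    step and the harm from a failure equals the system's power at that step."""
    power = 1.0       # current capability
    survive = 1.0     # probability of having avoided failure so far
    total = 0.0       # accumulated expected harm
    for _ in range(horizon):
        total += survive * p_fail_per_step * power  # failure occurs at this step
        survive *= 1.0 - p_fail_per_step
        power *= growth_rate                        # capability grows each step
    return total

# Baseline system vs. one with a hypothetical "safety measure" that halves the
# per-step failure probability but does nothing about capability growth.
baseline = expected_harm(p_fail_per_step=0.10, growth_rate=1.5, horizon=40)
with_safety = expected_harm(p_fail_per_step=0.05, growth_rate=1.5, horizon=40)
print(f"baseline expected harm:    {baseline:,.0f}")
print(f"with the safety measure:   {with_safety:,.0f}")
```

With these illustrative numbers the "safer" system has the larger expected harm, because failures are pushed out to states where the system is far more powerful. This is only a sketch of the argument's structure, not a claim about real systems or real safety measures.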
3 comments, sorted by top scoring
Seth Herd · 1y

This is an excellent point. My own proposed alignment methods are vulnerable to this criticism. And as far as I can tell, they're the methods most likely to be used unless something changes. I have worried about this, but not written about it publicly. It's good to make it formal and explicit.

The other argument is that people seem to be rushing in headlong even when they don't know of any promising alignment methods at all, sooo...

Brendan Long · 1y

"Under certain key assumptions - the inevitability of AI failure"

Isn't this just assuming the conclusion?

Søren Elverlin · 1y

From skimming the paper, it appears that the authors have missed that the central AI safety measure being argued for is pausing/halting. The "Climb to Armageddon" example matches Eliezer Yudkowsky's safety proposals very poorly.

