
Multipolar Scenarios


A multipolar scenario is one in which no single AI or other agent takes over the world; instead, power remains distributed among multiple actors.

Multipolar scenarios are discussed in Nick Bostrom's book *Superintelligence*.

Created by Raemon