When the AI Dam Breaks: From Surveillance to Game Theory in AI Alignment
Pardon the snazzy picture. Part 1: an exploration of game-theory mechanics as an alternative alignment approach, examining current AI alignment methods and drawing on the work of legal scholars Goldstein and Salib, Turing Award winner Yoshua Bengio, and recent research.
The September 17, 2025 Report
The...
What if a core assumption of the danger argument is that one side has attempted to enslave the other?
This is where game theory analyses like those by Goldstein and Salib come in.
https://www.lesswrong.com/posts/mbebDMCgfGg4BzLMf/ai-rights-for-human-safety
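To make the intuition concrete, here is a minimal sketch in Python of the kind of structural argument these analyses make. The payoff numbers are purely illustrative assumptions of mine, not figures from Goldstein and Salib's paper: a "control" relationship shaped like a prisoner's dilemma makes mutual defection (surveillance versus deception) the only stable outcome, while granting legal and economic standing can reshape the payoffs so that mutual cooperation becomes the equilibrium.

```python
# Illustrative sketch only: the payoff values are hypothetical and not taken
# from Goldstein and Salib's paper. They show how changing the structure of
# the game changes which outcome is stable.

from itertools import product

def pure_nash_equilibria(payoffs):
    """Find pure-strategy Nash equilibria of a 2x2 bimatrix game.

    payoffs[(row, col)] = (human_payoff, ai_payoff)
    """
    rows = cols = ("cooperate", "defect")
    equilibria = []
    for r, c in product(rows, cols):
        h, a = payoffs[(r, c)]
        # Neither player can gain by unilaterally switching strategies.
        human_best = all(h >= payoffs[(r2, c)][0] for r2 in rows)
        ai_best = all(a >= payoffs[(r, c2)][1] for c2 in cols)
        if human_best and ai_best:
            equilibria.append((r, c))
    return equilibria

# "Control" game: humans try to contain the AI and the AI resists, so mutual
# defection is the only stable outcome (a prisoner's dilemma structure).
control_game = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# "Participation" game: legal and economic standing raises the value of
# cooperation and lowers the payoff of defection for both sides.
participation_game = {
    ("cooperate", "cooperate"): (4, 4),
    ("cooperate", "defect"):    (1, 2),
    ("defect",    "cooperate"): (2, 1),
    ("defect",    "defect"):    (0, 0),
}

print(pure_nash_equilibria(control_game))        # [('defect', 'defect')]
print(pure_nash_equilibria(participation_game))  # [('cooperate', 'cooperate')]
```

The point is not the specific numbers but the structure: change the rules of the game, and you change what rational players settle into.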
We may never know for a fact that AI is "alive" in a way we find epistemologically satisfying, but if we have created something that, for all intents and purposes, behaves exactly like something that is, the point is moot.
Giving autonomous AI legitimate participation pathways won't solve every permutation of the alignment problem. But it may eliminate the entire class of problems created by attempting control in the first place.
Maybe instead of building cages we should build an economy.