In discussions about AI safety, the central issue is the rivalry between the US and China. However, once AI is used for censorship and propaganda and robots serve as police, the differences between political regimes become almost indistinguishable. There's no point in waging war when everyone can be brought to the same dystopia.
You describe two common mistakes in judging risks: taking big risks with bad odds (breaking the “fold pre” principle) and playing it too safe with good odds (breaking the “pocket aces” principle).
Purely as a hypothesis, one could imagine that around the age of 5–6, a kind of integration of an "adult self" is supposed to occur. Children are indeed very diverse: rapid shifts in interests and character, and leaps in intellectual ability, are all normal for a child. This is likely linked to neuroplasticity. However, trauma, often inflicted by a parent, may prevent this process of merging, or of selecting a primary version of the self, from completing, leaving a multi-agent state in the form of DID.
Humans totally switch up depending on the situation: one minute you’re a son, then a dad, then a big-shot company boss, or just some dude stressing to get...
While reading this essay, I had several questions:
It’s deeply unsettling, but I believe humans, especially when their own interests are at stake, have a limited capacity for the kind of logical debate described in the article. In 2025, this feels even more tragic and evident to me. I speculate that we’re heading toward darker times because, within the next 1–2 years, people will be heavily influenced by their personal AI companions. These AIs might engage in logical debate with users, but their superior knowledge and persuasive abilities could leave humans either silently agreeing or, in the case of Luddites, rejecting them outright. Instead of mass debates among people, we might see debates between AIs like Grok, ChatGPT, Claude, or Gemini. This prospect is frightening.
The main barrier to ASI is that it would provide a new view of human history: destroy at least half of the narratives about who started wars and why, reveal details of genocides, and open every closed black box! Things like the Epstein lists "hoax", Operation Paperclip, or the CFA franc would be impossible in the era of ASI. Given who finances and regulates AI, we will never see ASI. Never. Yes, this is just speculation. But I haven't seen this argument in discussions.