In the discussion about AI safety, the central issue is the rivalry between the US and China. However, once AI is used for censorship and propaganda and robots serve as police, the differences between political regimes become almost indistinguishable. There's no point in waging war when everyone can be brought to the same dystopia.
You talk about two common mistakes in judging risk: taking big risks with bad odds (breaking the “fold pre” principle) and playing it too safe with good odds (breaking the “pocket aces” principle).
I totally agree that sunk cost is a big problem (it’s a common thinking trap), but you can’t just boil it down to “fold pre” or “pocket aces” principles. It's an oversimplification.
When you say you often see people playing bad hands, that’s just your view. Let me tweak a saying: “Don’t judge someone until you’ve played their cards.”
In today’s digital world, going for a “pocket aces” strategy, like picking an obvious, “strong” business or idea, throws you into crazy competition. Everyone’s chasing those aces, and it’s super expensive to even get in the game. But now, with tech being so cheap and accessible, you can take a small risk and play something like suited connectors. Those “weaker” ideas might just turn into a royal flush - what startups call a unicorn! So, I’d say young people should play their “bad hands” and go for those wild, less obvious bets. Haha!
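To make the poker math concrete, here’s a minimal expected-value sketch. Every number in it (win probabilities, payoffs, entry costs) is a hypothetical assumption of mine, not a figure from the post; the only point is that a cheap long shot can have better EV than an expensive “sure thing” once the cost of getting into the game is counted.

```python
# Hypothetical EV comparison: "pocket aces" vs. "suited connectors" as startup bets.
# All numbers below are invented purely for illustration.

def expected_value(win_prob: float, payoff: float, entry_cost: float) -> float:
    """EV = probability-weighted payoff minus the cost of playing the hand."""
    return win_prob * payoff - entry_cost

# The "obvious" strong idea: decent odds, modest payoff, crowded and expensive to enter.
aces = expected_value(win_prob=0.30, payoff=1_000_000, entry_cost=400_000)

# The "weak" long shot: tiny odds, unicorn-sized payoff, near-zero entry cost (cheap tech).
connectors = expected_value(win_prob=0.01, payoff=50_000_000, entry_cost=20_000)

print(f"pocket aces EV:       {aces:>10,.0f}")        # -100,000
print(f"suited connectors EV: {connectors:>10,.0f}")  #  480,000
```

Real odds are, of course, unknowable in advance; the takeaway is only that low entry costs change which “bad hands” are worth playing.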
Purely as a hypothesis, one could imagine that around the age of 5–6, a kind of integration into an "adult self" is supposed to occur. Children are indeed very diverse (rapid shifts in interests and character, and leaps in intellectual ability, are all normal for a child), which is likely linked to neuroplasticity. However, trauma, often inflicted by a parent, may prevent this process of merging, or of selecting a primary version of the self, from completing, leaving a multi-agent state in the form of DID.
Humans totally switch up depending on the situation: one minute you’re a son, then a dad, then a big-shot company boss, or just some dude stressing to get a doctor’s appointment. It’s wild to see how people turn into someone else when they’re head-over-heels or super jealous - like, is that even the same person? But if you’ve got a solid core "you," those big swings kinda smooth out. It’s a cool thing to think about, and it’s weird that now, when we see LLMs acting differently based on prompts, this topic isn’t blowing up again.
While reading this essay, I had several questions:
It’s deeply unsettling, but I believe humans, especially when their interests are at stake, have a limited capacity for the kind of logical debate described in the article. In 2025, this feels even more tragic and evident to me. I speculate that we’re heading toward darker times because, within the next 1–2 years, people will be heavily influenced by their personal AI companions. These AIs might engage users in logical debate, but their superior knowledge and persuasive abilities could leave humans either silently agreeing or, in the case of the Luddites, rejecting them outright. Instead of mass debates among people, we might see debates between AIs like Grok, ChatGPT, Claude, or Gemini. That prospect is frightening.
The main barrier to ASI is that it would provide a new view of human history, destroy at least half the narratives about who started wars and why, reveal the details of genocides, and open all the closed black boxes! Actually, the Epstein-list "hoax", Operation Paperclip, and the CFA franc would be impossible in the era of ASI. Given who finances and regulates AI, we will never see ASI. Never. Yes, it's just speculation. But I haven't seen this argument in discussions.