Anyone tried Clawdbot yet? I tried it today after hearing a lot of hype on X, and it really gives me the vibe of the drop-in virtual remote worker from Situational Awareness. It is basically an open-source agent that takes full control of your local computer environment and communicates with you in a human-like fashion over WhatsApp, Telegram, or Slack. I know there have been a lot of attempts at this kind of agent before, but this is the first one that really seems to work.
Not sure what the implications of this kind of tool are for AI timelines, but it really does seem like one of the crucial pieces of the fully automated economy has just been deployed.
In my opinion, it doesn't make rational sense for them to invade at all. Even in the best-case scenario for China, where they manage to pacify Taiwan after a tough fight, I would still expect the following:
1) They would be permanently shut out of all Western trade and technology sharing.
2) All critical semiconductor manufacturing in Taiwan would be destroyed by the US or the local Taiwanese military before China could get to it, and most of it is already in the process of being successfully transferred to the US. I also expect that most of the human talent would be taken to the US.
3) Even if the US did not directly intervene, the US and its allies would start massive rearmament and reindustrialisation programmes and maximally utilise their advantage in AI and other critical technologies in the future.
4) Regarding point 3, if American AI victory is inevitable due to their computing advantage, China might still get a better deal in the current scenario, where it is perceived as merely an economic competitor and geopolitical challenger, rather than a direct adversary, as it would be in the event of an invasion of Taiwan.
There are also some indications that Taiwanese politics is slowly moving in a pro-China direction, with increased support for peaceful reunification among younger KMT voters, which might also incentivise China to bide its time and avoid doing anything reckless.
Beautiful! Even though I am twice your age, I feel very similarly. The only difference is that I think I was a bit luckier to have experienced some of life's highlights in the Eld world, which is permanently coming to a close.
We'll get through this, brother.
If I understood Eliezer's argument correctly, we can shorten those timescales by improving human intelligence through methods like genetic engineering. Once the majority of humans have von Neumann-level IQ, I think it's fine to let them decide how to proceed on AI research. The question is how fast this can happen, and it would probably take a century or two at least.
The part I am confused about is why this scenario is considered distinct from the standard ASI misalignment problem. A superintelligence that economically destroys and subjugates every country except, perhaps, the one it is based in is pretty close to the standard paperclip outcome, right?
Whether I am turned into paperclips or completely enslaved by a US-based superintelligence is a rather trivial difference, IMO, and I think it could be treated as another variant of alignment failure.
That's a reasonable concern, but I don't think it's healthy to ruminate too much about it. You made a courageous and virtuous move, and it's impossible to perfectly predict all possible futures from that point onward. If this fails, I presume failure was overdetermined, and your actions wouldn't really matter.
The only mistake you and your team made, in my opinion, was writing the slowdown scenario for AI-2027. While I know that wasn't your intention, a lot of people interpreted it as a 50% chance of 'the US wins global supremacy and achieves utopia,' which just added fuel to the fire ('See, even the biggest doomers think we can win! LFG!!!!').
It also likely hyperstitionized increased suspicion among other leading countries that the US would never negotiate in good faith, making it significantly harder to strike a deal with China and others.
I wonder how many treaties we signed with the countless animal species we destroyed or decided to torture on a mass scale throughout our history? I guess those poor animals were bad negotiators and hadn't read the fine print. /s
In an ideal world, well-meaning regulation coming from the EU could become a global standard and really make a difference. However, in reality, I see little value in EU-specific regulations like these. They are unlikely to impact frontier AI companies such as OpenAI, Anthropic, Google DeepMind, xAI, and DeepSeek, all of which are based outside the EU. These firms might accept the cost of exiting the EU market if regulations become too burdensome.
While the EU market is significant, in a fast-takeoff, winner-takes-all AI race (as outlined in the AI-2027 forecast), market access alone may not sway these companies’ safety policies. Worse, such regulations could backfire, locking the EU out of advanced AI models and crippling its competitiveness. This could deter other nations from adopting similar rules, further isolating the EU.
As an EU citizen, I view the game theory in an "AGI-soon" world as follows:
Alignment Hard
EU imposes strict AI regulations → Frontier companies exit the EU or withhold their latest models, continuing the AI race → Unaligned AI emerges, potentially catastrophic for all, including Europeans. Regulations prove futile.
Alignment Easy
EU imposes strict AI regulations → Frontier companies exit the EU, continuing the AI race → Aligned AI creates a utopia elsewhere (e.g., the US), while the EU lags, stuck in a technological "stone age."
Both scenarios are grim for Europe.
I could be mistaken, but the current US administration and leaders of top AI labs seem fully committed to a cutthroat AGI race, as articulated in situational awareness narratives. They appear prepared to go to extraordinary lengths to maintain supremacy, undeterred by EU demands. Their primary constraints are compute and, soon, energy - not money! If AI becomes a national security priority, access to near-infinite resources could render EU market losses a minor inconvenience. Notably, the comprehensive AI-2027 forecast barely mentions Europe, underscoring its diminishing relevance.
For the EU to remain significant, I see two viable strategies:
If we accept all the premises of this scenario, what prescriptive actions might an average individual take in their current position at this point in time?
Some random ideas:
Are there any other recommendations?
Indeed. Also, take a look at the recent hype around the Clawdbot/Moldbot agent. Basically, every tech influencer is now rushing to give Claude access to their entire computer. By 2027, most prominent tech figures may already have swarms of agents managing their entire digital life and businesses.