That's a reasonable concern, but I don't think it's healthy to ruminate on it too much. You made a courageous and virtuous move, and it's impossible to perfectly predict all possible futures from that point onward. If this fails, I presume failure was overdetermined, and your actions wouldn't really have mattered.
The only mistake you and your team made, in my opinion, was writing the slowdown scenario for AI-2027. While I know that wasn't your intention, a lot of people interpreted it as a 50% chance of 'the US wins global supremacy and achieves utopia,' which just added fuel to the fire ('See, even the biggest doomers think we can win! LFG!!!!').
It also likely acted as a hyperstition, increasing suspicion among other leading countries that the US would never negotiate in good faith and making it significantly harder to strike a deal with China and others.
I wonder how many treaties we signed with the countless animal species we destroyed or tortured on a mass scale throughout our history. Guess those poor animals were bad negotiators and didn't read the fine print. /s
In an ideal world, well-meaning regulation coming from the EU could become a global standard and really make a difference. In reality, however, I see little value in EU-specific regulations like these. They are unlikely to impact frontier AI companies such as OpenAI, Anthropic, Google DeepMind, xAI, and DeepSeek, all of which are based outside the EU. These firms might accept the cost of exiting the EU market if regulations become too burdensome.
While the EU market is significant, in a fast-takeoff, winner-takes-all AI race (as outlined in the AI-2027 forecast), market access alone may not sway these companies’ safety policies. Worse, such regulations could backfire, locking the EU out of advanced AI models and crippling its competitiveness. This could deter other nations from adopting similar rules, further isolating the EU.
As an EU citizen, I view the game theory in an "AGI-soon" world as follows:
Alignment Hard
EU imposes strict AI regulations → Frontier companies exit the EU or withhold their latest models, continuing the AI race → Unaligned AI emerges, potentially catastrophic for all, including Europeans. Regulations prove futile.
Alignment Easy
EU imposes strict AI regulations → Frontier companies exit the EU, continuing the AI race → Aligned AI creates a utopia elsewhere (e.g., the US), while the EU lags, stuck in a technological "stone age."
Both scenarios are grim for Europe.
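To make the branching argument concrete, here is a toy expected-value sketch in Python. Every payoff number below is invented purely for illustration (not an estimate), and the structure simply mirrors the two branches above.

```python
# Toy expected-value sketch of the two branches above. All payoffs are
# made-up illustrative numbers on an arbitrary scale (higher = better for
# the EU); the point is only that the "strict regulation" branch never
# comes out ahead under either premise.

OUTCOMES = {
    ("alignment_hard", "strict_regulation"): -100,  # race continues, unaligned AI hits everyone
    ("alignment_hard", "no_regulation"):     -100,  # same catastrophe; regulation changed nothing
    ("alignment_easy", "strict_regulation"):  -10,  # utopia elsewhere, EU locked out
    ("alignment_easy", "no_regulation"):      +50,  # EU shares in the upside
}

def eu_expected_value(p_alignment_hard: float, policy: str) -> float:
    """Expected EU payoff for a policy, given P(alignment is hard)."""
    return (p_alignment_hard * OUTCOMES[("alignment_hard", policy)]
            + (1 - p_alignment_hard) * OUTCOMES[("alignment_easy", policy)])

for p in (0.2, 0.5, 0.8):
    print(f"P(hard)={p}: "
          f"regulate={eu_expected_value(p, 'strict_regulation'):+.0f}, "
          f"don't={eu_expected_value(p, 'no_regulation'):+.0f}")
```

Of course, the conclusion is baked into the payoffs; the table just makes the assumed structure of the argument explicit.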
I could be mistaken, but the current US administration and leaders of top AI labs seem fully committed to a cutthroat AGI race, as articulated in situational awareness narratives. They appear prepared to go to extraordinary lengths to maintain supremacy, undeterred by EU demands. Their primary constraints are compute and, soon, energy - not money! If AI becomes a national security priority, access to near-infinite resources could render EU market losses a minor inconvenience. Notably, the comprehensive AI-2027 forecast barely mentions Europe, underscoring its diminishing relevance.
For the EU to remain significant, I see two viable strategies:
If we accept all the premises of this scenario, what prescriptive actions might an average individual take from their current position?
Some random ideas:
Are there any other recommendations?
If that were the case, wouldn't Scott and Daniel have developed the impressive AI-2027 website themselves with the help of AI agents, instead of utilising your human webdev skills? /jk :D
The answer surely depends mostly on his impact on AI development, both through his influence on the new administration's policy and through what he does with xAI. While I understand that his political actions might be mind-killing (to say the least) to many of his former fans, I would much prefer a scenario where Elon has infuriating politics but a positive impact on solving alignment over one with the opposite outcome.
A new open-source model has been announced by the Chinese lab DeepSeek: DeepSeek-V3. It reportedly outperforms both Sonnet 3.5 and GPT-4o on most tasks and is almost certainly the most capable fully open-source model to date.
Beyond the implications of open-sourcing a model of this caliber, I was surprised to learn that they trained it using only 2,048 H800 GPUs! This suggests that, with an exceptionally competent team of researchers, it's possible to overcome substantial computational limitations.
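For scale, a back-of-envelope sketch of the training cost. The GPU count and GPU-hour figures are those reported in the DeepSeek-V3 technical report; the $2/GPU-hour rental rate is the report's own illustrative assumption.

```python
# Back-of-envelope training-cost estimate for DeepSeek-V3.

gpus = 2_048            # H800 GPUs (reported)
gpu_hours = 2_788_000   # total H800 GPU-hours for the full run (reported)
rate_usd = 2.0          # assumed rental price per GPU-hour

wall_clock_days = gpu_hours / gpus / 24
cost_musd = gpu_hours * rate_usd / 1e6

print(f"~{wall_clock_days:.0f} days of wall-clock training on {gpus} GPUs")
print(f"~${cost_musd:.1f}M total compute at ${rate_usd}/GPU-hour")
# -> roughly two months and ~$5.6M, orders of magnitude below frontier-lab budgets.
```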
Here are two potential implications:
Perhaps Randolph Carter was right about losing access to dreamlands after your twenties:
When Randolph Carter was thirty he lost the key of the gate of dreams. Prior to that time he had made up for the prosiness of life by nightly excursions to strange and ancient cities beyond space, and lovely, unbelievable garden lands across ethereal seas; but as middle age hardened upon him he felt these liberties slipping away little by little, until at last he was cut off altogether. No more could his galleys sail up the river Oukranos past the gilded spires of Thran, or his elephant caravans tramp through perfumed jungles in Kled, where forgotten palaces with veined ivory columns sleep lovely and unbroken under the moon.
Btw, have you heard about PropheticAI? They are working on a device that is supposed to help you with lucid dreaming.
I still think it will be hard to defend against determined and competent adversaries committed to sabotaging collective epistemics. I wonder if prediction markets could be utilised somehow?
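As one possible angle, here is a minimal sketch of Hanson's logarithmic market scoring rule (LMSR), a standard prediction-market mechanism (not a claim about any particular platform); the liquidity parameter and trade sizes are arbitrary. The relevant property: an adversary can move the price, but only by paying for it, and better-informed traders profit by correcting it, so manipulation attempts effectively subsidize accuracy.

```python
import math

B = 100.0  # liquidity parameter (assumed; higher = costlier to move the price)

def cost(q_yes: float, q_no: float) -> float:
    """LMSR cost function over outstanding YES/NO shares."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes: float, q_no: float) -> float:
    """Current market-implied probability of YES."""
    e_yes = math.exp(q_yes / B)
    return e_yes / (e_yes + math.exp(q_no / B))

def buy_yes(q_yes: float, q_no: float, shares: float) -> tuple[float, float]:
    """Buy YES shares; returns (new q_yes, amount paid)."""
    paid = cost(q_yes + shares, q_no) - cost(q_yes, q_no)
    return q_yes + shares, paid

q_yes = q_no = 0.0                       # market starts at 50%
q_yes, paid = buy_yes(q_yes, q_no, 150)  # adversary pumps the price
print(f"manipulated price: {price_yes(q_yes, q_no):.2f}, cost to adversary: {paid:.1f}")
# An informed trader who believes the true probability is ~50% can now take
# the other side at favorable odds, expecting to profit while pushing the
# price back toward reality.
```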
The part where I am confused is why this scenario is considered distinct from the standard ASI misalignment problem. A superintelligence that economically destroys and subjugates every country except, perhaps, the one it is based in is pretty close to the standard paperclip outcome, right?
Whether I am turned into paperclips or completely enslaved by a US-based superintelligence is a rather trivial difference IMO, and I think it could be treated as another variant of alignment failure.