In an ideal world, well-meaning regulation coming from the EU could become a global standard and really make a difference. However, in reality, I see little value in EU-specific regulations like these. They are unlikely to affect frontier AI companies such as OpenAI, Anthropic, Google DeepMind, xAI, and DeepSeek, all of which are based outside the EU. These firms might simply accept the cost of exiting the EU market if regulations become too burdensome.
While the EU market is significant, in a fast-takeoff, winner-takes-all AI race (as outlined in the AI-2027 forecast), market access alone may not sway these companies’ safety policies. Worse, such regulations could backfire, locking the EU out of advanced AI models and crippling its competitiveness. This could deter other nations from adopting similar rules, further isolating the EU.
As an EU citizen, I view the game theory in an "AGI-soon" world as follows:
Alignment Hard
EU imposes strict AI regulations → Frontier companies exit the EU or withhold their latest models, continuing the AI race → Unaligned AI emerges, potentially catastrophic for all, including Europeans. Regulations prove futile.
Alignment Easy
EU imposes strict AI regulations → Frontier companies exit the EU, continuing the AI race → Aligned AI creates a utopia elsewhere (e.g., the US), while the EU lags, stuck in a technological "stone age."
Both scenarios are grim for Europe.
I could be mistaken, but the current US administration and the leaders of top AI labs seem fully committed to a cutthroat AGI race, as articulated in situational-awareness narratives. They appear prepared to go to extraordinary lengths to maintain supremacy, undeterred by EU demands. Their primary constraints are compute and, soon, energy, not money. If AI becomes a national security priority, access to near-unlimited resources could render EU market losses a minor inconvenience. Notably, the comprehensive AI-2027 forecast barely mentions Europe, underscoring its diminishing relevance.
For the EU to remain significant, I see two viable strategies:
If we accept all the premises of this scenario, what prescriptive actions might an average individual take from their current position?
Some random ideas:
Are there any other recommendations?
If that was the case, wouldn't Scott and Daniel develop the impressive AI-2027 website themselves with the help of AI Agents, instead of utilising your human webdev skills? /jk :D
The answer surely depends mostly on what his impact will be on AI developments, both through his influence on the policy of the new administration and what he does with xAI. While I understand that his political actions might be mind-killing (to say the least) to many of his former fans, I would much prefer a scenario where Elon has infuriating politics but a positive impact on solving alignment over one with the opposite outcome.
A new open-source model has been announced by the Chinese lab DeepSeek: DeepSeek-V3. It reportedly outperforms both Sonnet 3.5 and GPT-4o on most tasks and is almost certainly the most capable fully open-source model to date.
Beyond the implications of open-sourcing a model of this caliber, I was surprised to learn that they trained it using only 2,000 H800 GPUs! This suggests that, with an exceptionally competent team of researchers, it’s possible to overcome computational limitations.
Here are two potential implications:
Perhaps Randolph Carter was right about losing access to dreamlands after your twenties:
When Randolph Carter was thirty he lost the key of the gate of dreams. Prior to that time he had made up for the prosiness of life by nightly excursions to strange and ancient cities beyond space, and lovely, unbelievable garden lands across ethereal seas; but as middle age hardened upon him he felt these liberties slipping away little by little, until at last he was cut off altogether. No more could his galleys sail up the river Oukranos past the gilded spires of Thran, or his elephant caravans tramp through perfumed jungles in Kled, where forgotten palaces with veined ivory columns sleep lovely and unbroken under the moon.
Btw, have you heard about PropheticAI? They are working on a device that is supposed to help you with lucid dreaming.
I still think it will be hard to defend against determined and competent adversaries committed to sabotaging collective epistemics. I wonder if prediction markets can be utilised somehow?
I am not sure the 2000 dot-com market crash is the best way to describe a "fizzle". The hypothesis of an upcoming Internet Revolution was correct at the time; it's just that the 1999 startups were slightly ahead of their time and the tech fundamentals were not yet ready to support them, so the market was forced to correct expectations. Once the fundamentals (internet speeds, software stacks, web infrastructure, number of people online, online payments, online ad business models, etc.) matured in the mid-2000s, the Web 2.0 revolution happened and tech companies grew into the giants we know today.
I expect most of the current AI startups and business models will fail and we will see plenty of market corrections, but this will be orthogonal to the ground truth about AI discoveries, which will happen in only a few cutting-edge labs that will be shielded from temporary market corrections.
But coming back to the object-level question: I really don't have a specific backup plan. I expect that even non-AGI-level AI built on advances to the current models will significantly impact various industries, so I will stick to software engineering for the foreseeable future.
My dark horse bet is on a third country desperately trying to catch up to the US/China just as those two are close to reaching an agreement on slowing down progress. Most likely: France.
I wonder how many treaties we signed with the countless animal species we destroyed or decided to torture on a mass scale throughout our history? I guess those poor animals were bad negotiators and didn't read the fine print. /s